Sample records for original source code

  1. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1997-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
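
    As a rough illustration of the double-difference idea described above (a minimal sketch, not the patented implementation; helper names such as cross_delta and adjacent_delta are invented for this example):

```python
def adjacent_delta(x):
    """Differences between neighbouring members of one data set."""
    return [x[i] - x[i - 1] for i in range(1, len(x))]

def cross_delta(x, y):
    """Member-by-member differences between two correlated data sets."""
    return [b - a for a, b in zip(x, y)]

def double_difference(x, y):
    """Adjacent-delta of the cross-delta; equals the cross-delta of the two
    adjacent-delta sets, so either order of operations gives the same result."""
    return adjacent_delta(cross_delta(x, y))

def post_decode(x, y0, dd):
    """Recover the second data set from the first set, the first member of
    the second set, and the double-difference values."""
    y = [y0]
    cross = y0 - x[0]
    for i, d in enumerate(dd, start=1):
        cross += d
        y.append(x[i] + cross)
    return y

if __name__ == "__main__":
    band1 = [10, 12, 15, 19, 24]           # first original data set
    band2 = [11, 14, 16, 21, 27]           # correlated second data set
    dd = double_difference(band1, band2)   # small residuals, cheap to entropy-code
    assert post_decode(band1, band2[0], dd) == band2
    print(dd)
```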

  2. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.

  3. Hybrid concatenated codes and iterative decoding

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Pollara, Fabrizio (Inventor)

    2000-01-01

    Several improved turbo code apparatuses and methods. The invention encompasses several classes: (1) A data source is applied to two or more encoders with an interleaver between the source and each of the second and subsequent encoders. Each encoder outputs a code element which may be transmitted or stored. A parallel decoder provides the ability to decode the code elements to derive the original source information d without use of a received data signal corresponding to d. The output may be coupled to a multilevel trellis-coded modulator (TCM). (2) A data source d is applied to two or more encoders with an interleaver between the source and each of the second and subsequent encoders. Each of the encoders outputs a code element. In addition, the original data source d is output from the encoder. All of the output elements are coupled to a TCM. (3) At least two data sources are applied to two or more encoders with an interleaver between each source and each of the second and subsequent encoders. The output may be coupled to a TCM. (4) At least two data sources are applied to two or more encoders with at least two interleavers between each source and each of the second and subsequent encoders. (5) At least one data source is applied to one or more serially linked encoders through at least one interleaver. The output may be coupled to a TCM. The invention includes a novel way of terminating a turbo coder.
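
    A minimal Python sketch of the class (1) encoder structure only: a generic rate-1/3 parallel concatenation with a (7,5) recursive systematic constituent code and a random interleaver. The iterative decoder, the TCM coupling, and the patented termination scheme are not shown.

```python
import random

def rsc_parity(bits, state=(0, 0)):
    """Parity stream of a rate-1/2 recursive systematic convolutional
    encoder with generators (1 + D + D^2, 1 + D^2), i.e. (7, 5) octal."""
    s1, s2 = state
    out = []
    for d in bits:
        a = d ^ s1 ^ s2        # feedback bit (1 + D + D^2)
        out.append(a ^ s2)     # feedforward taps 1 and D^2
        s1, s2 = a, s1
    return out

def turbo_encode(bits, interleaver):
    """Rate-1/3 parallel concatenation: systematic bits, parity from the
    first RSC encoder, parity from the second RSC encoder fed with the
    interleaved bits. Trellis termination is omitted for brevity."""
    p1 = rsc_parity(bits)
    p2 = rsc_parity([bits[i] for i in interleaver])
    return bits, p1, p2

if __name__ == "__main__":
    k = 16
    data = [random.randint(0, 1) for _ in range(k)]
    pi = list(range(k))
    random.shuffle(pi)                  # pseudo-random interleaver
    systematic, parity1, parity2 = turbo_encode(data, pi)
    print(systematic, parity1, parity2, sep="\n")
```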

  4. Relay selection in energy harvesting cooperative networks with rateless codes

    NASA Astrophysics Data System (ADS)

    Zhu, Kaiyan; Wang, Fei

    2018-04-01

    This paper investigates relay selection in energy harvesting cooperative networks, where the relays harvest energy from the radio frequency (RF) signals transmitted by a source, and the optimal relay is selected and uses the harvested energy to assist the information transmission from the source to its destination. Both the source and the selected relay transmit information using rateless codes, which allow the destination to recover the original information once the number of collected code bits marginally surpasses the entropy of the original information. In order to improve transmission performance and efficiently utilize the harvested power, the optimal relay is selected by formulating an optimization problem that maximizes the achievable information rate of the system. Simulation results demonstrate that our proposed relay selection scheme outperforms other strategies.

  5. The Particle Accelerator Simulation Code PyORBIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M

    2015-01-01

    The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at the Oak Ridge National Laboratory. The PyORBIT code has a two level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower level code implemented in the C++ language. The parallel capabilities are based on MPI communications. PyORBIT is an open source code accessible to the public through the Google Open Source Projects Hosting service.

  6. Universal Noiseless Coding Subroutines

    NASA Technical Reports Server (NTRS)

    Schlutsmeyer, A. P.; Rice, R. F.

    1986-01-01

    The software package consists of FORTRAN subroutines that perform universal noiseless coding and decoding of integer and binary data strings. The purpose of this type of coding is to achieve data compression in the sense that the coded data represent the original data perfectly (noiselessly) while taking fewer bits to do so. The routines are universal because they apply to virtually any "real-world" data source.
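
    The package implements the Rice family of universal codes; as a hedged illustration of the underlying idea only (not the FORTRAN routines themselves), a split-sample Rice-style code in Python:

```python
def rice_encode(samples, k):
    """Rice code: unary quotient + k low-order bits for each non-negative sample."""
    bits = []
    for n in samples:
        q, r = n >> k, n & ((1 << k) - 1)
        bits += [1] * q + [0]                       # unary part, 0-terminated
        bits += [(r >> i) & 1 for i in reversed(range(k))]
    return bits

def rice_decode(bits, k, count):
    """Exact (noiseless) inverse of rice_encode."""
    samples, pos = [], 0
    for _ in range(count):
        q = 0
        while bits[pos] == 1:                       # read the unary quotient
            q += 1
            pos += 1
        pos += 1                                    # skip the terminating 0
        r = 0
        for _ in range(k):
            r = (r << 1) | bits[pos]
            pos += 1
        samples.append((q << k) | r)
    return samples

if __name__ == "__main__":
    data = [3, 0, 7, 2, 12, 1]
    code = rice_encode(data, k=2)
    assert rice_decode(code, k=2, count=len(data)) == data
```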

  7. Towards Holography via Quantum Source-Channel Codes.

    PubMed

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-14

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.
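
    For reference, the entropic quantity mentioned above is the quantum conditional mutual information, and the recovery guarantee alluded to is of the Fawzi-Renner type (stated here from the general literature; the paper's precise bound may differ):

```latex
I(A{:}C\,|\,B) \;=\; S(AB) + S(BC) - S(B) - S(ABC) \;\ge\; 0,
\qquad
F\!\left(\rho_{ABC},\, (\mathcal{I}_A \otimes \mathcal{R}_{B\to BC})(\rho_{AB})\right) \;\ge\; 2^{-I(A:C|B)/2},
```

    where S denotes the von Neumann entropy and R_{B->BC} is a recovery map acting only on subsystem B; a small conditional mutual information therefore guarantees good recovery from erasure of C.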

  8. Towards Holography via Quantum Source-Channel Codes

    NASA Astrophysics Data System (ADS)

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-01

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  9. shiftNMFk 1.1: Robust Nonnegative matrix factorization with kmeans clustering and signal shift, for allocation of unknown physical sources, toy version for open sourcing with publications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexandrov, Boian S.; Lliev, Filip L.; Stanev, Valentin G.

    This code is a toy (short) version of CODE-2016-83. From a general perspective, the code represents an unsupervised adaptive machine learning algorithm that allows efficient and high performance de-mixing and feature extraction of a multitude of non-negative signals mixed and recorded by a network of uncorrelated sensor arrays. The code identifies the number of the mixed original signals and their locations. Further, the code also allows deciphering of signals that have been delayed with regard to the mixing process in each sensor. This code is highly customizable and can be efficiently used for fast macro-analyses of data. The code is applicable to a plethora of distinct problems: chemical decomposition, pressure transient decomposition, unknown sources/signal allocation, EM signal decomposition. An additional procedure for allocation of the unknown sources is incorporated in the code.
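
    The de-mixing step rests on nonnegative matrix factorization; a minimal numpy sketch of the classical Lee-Seung multiplicative updates is shown below (the k-means clustering over repeated runs and the signal-shift handling that give shiftNMFk its name are omitted):

```python
import numpy as np

def nmf(V, k, iters=500, eps=1e-9):
    """Factor a nonnegative matrix V (sensors x samples) as W @ H, W, H >= 0,
    using Lee-Seung multiplicative updates for the Frobenius-norm objective."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], k))
    H = rng.random((k, V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_W = rng.random((6, 2))              # mixing weights: 6 sensors, 2 sources
    true_H = rng.random((2, 200))            # the 2 original nonnegative signals
    V = true_W @ true_H                      # mixed recordings
    W, H = nmf(V, k=2)
    print("relative residual:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```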

  10. Doclet To Synthesize UML

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Osborne, Richard N.

    2005-01-01

    The RoseDoclet computer program extends the capability of Java doclet software to automatically synthesize Unified Modeling Language (UML) content from Java language source code. [Doclets are Java-language programs that use the doclet application programming interface (API) to specify the content and format of the output of Javadoc. Javadoc is a program, originally designed to generate API documentation from Java source code, now also useful as an extensible engine for processing Java source code.] RoseDoclet takes advantage of Javadoc comments and tags already in the source code to produce a UML model of that code. RoseDoclet applies the doclet API to create a doclet passed to Javadoc. The Javadoc engine applies the doclet to the source code, emitting the output format specified by the doclet. RoseDoclet emits a Rose model file and populates it with fully documented packages, classes, methods, variables, and class diagrams identified in the source code. The way in which UML models are generated can be controlled by use of new Javadoc comment tags that RoseDoclet provides. The advantage of using RoseDoclet is that Javadoc documentation becomes leveraged for two purposes: documenting the as-built API and keeping the design documentation up to date.

  11. Methods for Coding Tobacco-Related Twitter Data: A Systematic Review.

    PubMed

    Lienemann, Brianna A; Unger, Jennifer B; Cruz, Tess Boley; Chu, Kar-Hai

    2017-03-31

    As Twitter has grown in popularity to 313 million monthly active users, researchers have increasingly been using it as a data source for tobacco-related research. The objective of this systematic review was to assess the methodological approaches of categorically coded tobacco Twitter data and make recommendations for future studies. Data sources included PsycINFO, Web of Science, PubMed, ABI/INFORM, Communication Source, and Tobacco Regulatory Science. Searches were limited to peer-reviewed journals and conference proceedings in English from January 2006 to July 2016. The initial search identified 274 articles using a Twitter keyword and a tobacco keyword. One coder reviewed all abstracts and identified 27 articles that met the following inclusion criteria: (1) original research, (2) focused on tobacco or a tobacco product, (3) analyzed Twitter data, and (4) coded Twitter data categorically. One coder extracted data collection and coding methods. E-cigarettes were the most common type of Twitter data analyzed, followed by specific tobacco campaigns. The most prevalent data sources were Gnip and Twitter's Streaming application programming interface (API). The primary methods of coding were hand-coding and machine learning. The studies predominantly coded for relevance, sentiment, theme, user or account, and location of user. Standards for data collection and coding should be developed to be able to more easily compare and replicate tobacco-related Twitter results. Additional recommendations include the following: sample Twitter's databases multiple times, make a distinction between message attitude and emotional tone for sentiment, code images and URLs, and analyze user profiles. Being relatively novel and widely used among adolescents and black and Hispanic individuals, Twitter could provide a rich source of tobacco surveillance data among vulnerable populations. ©Brianna A Lienemann, Jennifer B Unger, Tess Boley Cruz, Kar-Hai Chu. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 31.03.2017.

  12. Modeling the Volcanic Source at Long Valley, CA, Using a Genetic Algorithm Technique

    NASA Technical Reports Server (NTRS)

    Tiampo, Kristy F.

    1999-01-01

    In this project, we attempted to model the deformation pattern due to the magmatic source at Long Valley caldera using a real-value coded genetic algorithm (GA) inversion similar to that found in Michalewicz, 1992. The project has been both successful and rewarding. The genetic algorithm, coded in the C programming language, performs stable inversions over repeated trials, with varying initial and boundary conditions. The original model used a GA in which the geophysical information was coded into the fitness function through the computation of surface displacements for a Mogi point source in an elastic half-space. The program was designed to invert for a spherical magmatic source - its depth, horizontal location and volume - using the known surface deformations. It also included the capability of inverting for multiple sources.
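
    A compact Python sketch of the same ingredients, assuming the standard Mogi point-source expression for vertical displacement and a simple real-valued GA with tournament selection, blend crossover, and Gaussian mutation. The project's actual code was written in C, also fits horizontal displacements, and supports multiple sources; the bounds, rates, and grid below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
NU = 0.25                                    # Poisson's ratio

def mogi_uz(x, y, xs, ys, depth, dV):
    """Vertical surface displacement of a Mogi point source (assumed standard form)."""
    r2 = (x - xs) ** 2 + (y - ys) ** 2
    return (1.0 - NU) * dV * depth / (np.pi * (r2 + depth ** 2) ** 1.5)

# Synthetic "observed" uplift from a known source on a 15 x 15 grid of benchmarks
xo, yo = np.meshgrid(np.linspace(-10e3, 10e3, 15), np.linspace(-10e3, 10e3, 15))
truth = (1.2e3, -0.8e3, 6.0e3, 2.0e6)        # xs, ys, depth [m], volume change [m^3]
obs = mogi_uz(xo, yo, *truth)

def misfit(p):
    return np.sum((mogi_uz(xo, yo, *p) - obs) ** 2)

def tournament(pop, fit):
    i, j = rng.integers(len(pop), size=2)
    return pop[i] if fit[i] < fit[j] else pop[j]

# Real-valued GA: elitism, tournament selection, blend crossover, sparse mutation
lo = np.array([-5e3, -5e3, 1e3, 1e5])        # lower parameter bounds
hi = np.array([ 5e3,  5e3, 10e3, 5e6])       # upper parameter bounds
pop = lo + rng.random((60, 4)) * (hi - lo)
for generation in range(200):
    fit = np.array([misfit(p) for p in pop])
    new = [pop[fit.argmin()].copy()]         # keep the best individual
    while len(new) < len(pop):
        pa, pb = tournament(pop, fit), tournament(pop, fit)
        w = rng.random(4)
        child = w * pa + (1.0 - w) * pb                              # blend crossover
        child += rng.normal(0.0, 0.02, 4) * (hi - lo) * (rng.random(4) < 0.2)
        new.append(np.clip(child, lo, hi))
    pop = np.array(new)

best = pop[np.array([misfit(p) for p in pop]).argmin()]
print("recovered (xs, ys, depth, dV):", best)
```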

  13. Finding Resolution for the Responsible Transparency of Economic Models in Health and Medicine.

    PubMed

    Padula, William V; McQueen, Robert Brett; Pronovost, Peter J

    2017-11-01

    The Second Panel on Cost-Effectiveness in Health and Medicine recommendations for conduct, methodological practices, and reporting of cost-effectiveness analyses has a number of questions unanswered with respect to the implementation of transparent, open source code interface for economic models. The possibility of making economic model source code could be positive and progressive for the field; however, several unintended consequences of this system should be first considered before complete implementation of this model. First, there is the concern regarding intellectual property rights that modelers have to their analyses. Second, the open source code could make analyses more accessible to inexperienced modelers, leading to inaccurate or misinterpreted results. We propose several resolutions to these concerns. The field should establish a licensing system of open source code such that the model originators maintain control of the code use and grant permissions to other investigators who wish to use it. The field should also be more forthcoming towards the teaching of cost-effectiveness analysis in medical and health services education so that providers and other professionals are familiar with economic modeling and able to conduct analyses with open source code. These types of unintended consequences need to be fully considered before the field's preparedness to move forward into an era of model transparency with open source code.

  14. 22 CFR 228.03 - Identification of principal geographic code numbers.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Identification of principal geographic code numbers. 228.03 Section 228.03 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT RULES ON SOURCE, ORIGIN AND NATIONALITY FOR COMMODITIES AND SERVICES FINANCED BY USAID Definitions and Scope of This Part...

  15. 77 FR 64435 - Branch Technical Position on the Import of Non-U.S. Origin Radioactive Sources

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-22

    ... exclusion, initially adopted in a 1995 rule.\\3\\ In accordance with International Atomic Energy Agency (IAEA) Code of Conduct on the Safety and Security of Radioactive Sources and the IAEA supplemental Guidance on...

  16. Coupled Hydrodynamic and Wave Propagation Modeling for the Source Physics Experiment: Study of Rg Wave Sources for SPE and DAG series.

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Delorey, A.; Rougier, E.; Knight, E. E.; Steedman, D. W.; Bradley, C. R.

    2017-12-01

    This presentation reports numerical modeling efforts to improve knowledge of the processes that affect seismic wave generation and propagation from underground explosions, with a focus on Rg waves. The numerical model is based on the coupling of hydrodynamic simulation codes (Abaqus, CASH and HOSS) with a 3D full waveform propagation code, SPECFEM3D. Validation datasets are provided by the Source Physics Experiment (SPE), which is a series of highly instrumented chemical explosions at the Nevada National Security Site with yields from 100 kg to 5000 kg. A first series of explosions in a granite emplacement has just been completed and a second series in alluvium emplacement is planned for 2018. The long-term goal of this research is to review and improve current existing seismic source models (e.g. Mueller & Murphy, 1971; Denny & Johnson, 1991) by providing first-principles calculations enabled by the coupled-codes capability. The hydrodynamic codes, Abaqus, CASH and HOSS, model the shocked, hydrodynamic region via equations of state for the explosive, borehole stemming and jointed/weathered granite. A new material model for unconsolidated alluvium materials has been developed and validated with past nuclear explosions, including the 10 kT 1965 Merlin event (Perret, 1971; Perret and Bass, 1975). We use the efficient Spectral Element Method code SPECFEM3D (e.g. Komatitsch, 1998; 2002) and Geologic Framework Models to model the evolution of the wavefield as it propagates across 3D complex structures. The coupling interface is a series of grid points of the SEM mesh situated at the edge of the hydrodynamic code domain. We will present validation tests and waveforms modeled for several SPE tests, which provide evidence that the damage processes happening in the vicinity of the explosions create secondary seismic sources. These sources interfere with the original explosion moment and reduce the apparent seismic moment at the origin of Rg waves by up to 20%.

  17. Binary Code Extraction and Interface Identification for Security Applications

    DTIC Science & Technology

    2009-10-02

    the functions extracted during the end-to-end applications and at the bottom some additional functions extracted from the OpenSSL library. fact that as...mentioned in Section 5.1 through Section 5.3 and some additional functions that we extract from the OpenSSL library for evaluation purposes. The... OpenSSL functions, the false positives and negatives are measured by comparison with the original C source code. For the malware samples, no source is

  18. Authorship attribution of source code by using back propagation neural network based on particle swarm optimization

    PubMed Central

    Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao

    2017-01-01

    Authorship attribution is the task of identifying the most likely author of a given sample among a set of candidate known authors. It can be applied not only to discover the original author of plain text, such as novels, blogs, emails, posts etc., but also to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to solving authorship disputes or software plagiarism detection. This paper aims to propose a new method to identify the programmer of Java source code samples with a higher accuracy. To this end, it first introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics, structure and syntax metrics, totally 19 dimensions. Then these metrics are input to the neural network for supervised learning, the weights of which are output by the PSO and BP hybrid algorithm. The effectiveness of the proposed method is evaluated on a collected dataset with 3,022 Java files belonging to 40 authors. Experimental results show that the proposed method achieves 91.060% accuracy. A comparison with previous work on authorship attribution of Java source code illustrates that the proposed method outperforms the others overall, also with an acceptable overhead. PMID:29095934
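
    A toy numpy sketch of the core idea, training the weights of a one-hidden-layer network by plain global-best PSO on synthetic feature vectors. The paper's 19 real style metrics, its BP fine-tuning stage, and its Java corpus are not reproduced here; the data and PSO constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-file style metrics of three hypothetical authors
n_feat, n_auth, n_hid, n_files = 19, 3, 10, 240
centers = rng.normal(scale=2.0, size=(n_auth, n_feat))
labels = rng.integers(0, n_auth, n_files)
X = centers[labels] + rng.normal(scale=1.0, size=(n_files, n_feat))

dim = n_feat * n_hid + n_hid + n_hid * n_auth + n_auth

def unpack(w):
    """Split a flat particle vector into the network's weight matrices."""
    i = 0
    W1 = w[i:i + n_feat * n_hid].reshape(n_feat, n_hid); i += n_feat * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_auth].reshape(n_hid, n_auth); i += n_hid * n_auth
    return W1, b1, W2, w[i:]

def loss(w):
    """Softmax cross-entropy of the one-hidden-layer network on the data."""
    W1, b1, W2, b2 = unpack(w)
    z = np.tanh(X @ W1 + b1) @ W2 + b2
    z -= z.max(axis=1, keepdims=True)
    p = np.exp(z); p /= p.sum(axis=1, keepdims=True)
    return -np.mean(np.log(p[np.arange(n_files), labels] + 1e-12))

# Global-best PSO over the flattened weight vector
n_part, iters = 30, 300
pos = rng.normal(scale=0.5, size=(n_part, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((n_part, 1)), rng.random((n_part, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

W1, b1, W2, b2 = unpack(gbest)
pred = (np.tanh(X @ W1 + b1) @ W2 + b2).argmax(axis=1)
print("training accuracy:", (pred == labels).mean())
```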

  19. Hippocratic, religious, and secular ethics: the points of conflict.

    PubMed

    Veatch, Robert M

    2012-02-01

    The origins of professional ethical codes and oaths are explored. Their legitimacy and usefulness within the profession are questioned and an alternative ethical source is suggested. This source relies on a commonly shared, naturally knowable set of principles known as common morality.

  20. A source-channel coding approach to digital image protection and self-recovery.

    PubMed

    Sarreshtedari, Saeed; Akhaee, Mohammad Ali

    2015-07-01

    Watermarking algorithms have been widely applied to the field of image forensics recently. One of these very forensic applications is the protection of images against tampering. For this purpose, we need to design a watermarking algorithm fulfilling two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering the lost reference bits still stands. This paper is aimed at showing that having the tampering location known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate design of channel code can protect the reference bits against tampering. In the present proposed method, the total watermark bit-budget is dedicated to three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In watermark embedding phase, the original image is source coded and the output bit stream is protected using appropriate channel encoder. For image recovery, erasure locations detected by check bits help channel erasure decoder to retrieve the original source encoded image. Experimental results show that our proposed scheme significantly outperforms recent techniques in terms of image quality for both watermarked and recovered image. The watermarked image quality gain is achieved through spending less bit-budget on watermark, while image recovery quality is considerably improved as a consequence of consistent performance of designed source and channel codes.
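
    A toy Python sketch of the key modeling step, treating a tampered block as an erasure and protecting the reference data with a simple single-parity (XOR) code across blocks. The paper's scheme uses properly designed source and channel codes plus check bits for tamper localization; this sketch shows only the erasure-recovery idea.

```python
import random

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def protect(reference_blocks):
    """Append one XOR parity block so any single erased block is recoverable."""
    parity = bytes(len(reference_blocks[0]))
    for blk in reference_blocks:
        parity = xor_bytes(parity, blk)
    return reference_blocks + [parity]

def recover(blocks, erased_index):
    """Erasure decoding: the tamper-detection check bits tell us WHICH block is lost."""
    rebuilt = bytes(len(blocks[0]))
    for i, blk in enumerate(blocks):
        if i != erased_index:
            rebuilt = xor_bytes(rebuilt, blk)
    return rebuilt

if __name__ == "__main__":
    reference = [bytes(random.randrange(256) for _ in range(8)) for _ in range(4)]
    stored = protect(reference)        # what would be embedded as the watermark
    lost = 2                           # block destroyed by tampering (an erasure)
    assert recover(stored, lost) == reference[lost]
    print("recovered block", lost)
```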

  1. New features and applications of PRESTO, a computer code for the performance of regenerative, superheated steam turbine cycles

    NASA Technical Reports Server (NTRS)

    Choo, Y. K.; Staiger, P. J.

    1982-01-01

    The code was designed to analyze performance at valves-wide-open design flow. The code can model conventional steam cycles as well as cycles that include such special features as process steam extraction and induction and feedwater heating by external heat sources. Convenience features and extensions to the special features were incorporated into the PRESTO code. The features are described, and detailed examples illustrating the use of both the original and the special features are given.

  2. Effect of the diffusion parameters on the observed γ-ray spectrum of sources and their contribution to the local all-electron spectrum: The EDGE code

    NASA Astrophysics Data System (ADS)

    López-Coto, R.; Hahn, J.; BenZvi, S.; Dingus, B.; Hinton, J.; Nisa, M. U.; Parsons, R. D.; Greus, F. Salesa; Zhang, H.; Zhou, H.

    2018-11-01

    The positron excess measured by PAMELA and AMS can only be explained if there are one or several sources injecting positrons. Moreover, at the highest energies, it requires the presence of nearby (∼ hundreds of parsecs) and middle-aged (maximum of ∼ hundreds of kyr) sources. Pulsars, as factories of electrons and positrons, are one of the proposed candidates to explain the origin of this excess. To calculate the contribution of these sources to the electron and positron flux at the Earth, we developed EDGE (Electron Diffusion and Gamma rays to the Earth), a code to treat the propagation of electrons and compute their diffusion from a central source with a flexible injection spectrum. Using this code, we can derive the source's gamma-ray spectrum, spatial extension, the all-electron density in space, the electron and positron flux reaching the Earth, and the positron fraction measured at the Earth. We present in this paper the foundations of the code and study how different parameters affect the gamma-ray spectrum of a source and the electron flux measured at the Earth. We also study the effect of several approximations usually performed in these studies. This code has been used to derive the results on the positron flux measured at the Earth in [1].
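
    For orientation, the transport problem such a code solves is the standard isotropic diffusion-loss equation (the EDGE paper's exact conventions may differ):

```latex
\frac{\partial N(E,\mathbf{r},t)}{\partial t}
 \;=\; D(E)\,\nabla^{2} N
 \;+\; \frac{\partial}{\partial E}\!\left[\, b(E)\, N \,\right]
 \;+\; Q(E,\mathbf{r},t),
\qquad
D(E) = D_0 \!\left(\frac{E}{E_0}\right)^{\!\delta},
\qquad
b(E) \equiv -\frac{dE}{dt} \propto E^{2},
```

    with D(E) the energy-dependent diffusion coefficient, b(E) the radiative (synchrotron plus inverse-Compton) energy-loss rate, and Q the injection spectrum of the central source.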

  3. The Lewis heat pipe code with application to SP-100 GES heat pipes

    NASA Astrophysics Data System (ADS)

    Baker, Karl W.; Tower, Leonard K.

    The NASA Lewis Research Center has a thermal management program supporting SP-100 goals, which includes heat pipe radiator development. As a part of the program Lewis has elected to prepare an in-house heat pipe code tailored to the needs of its SP-100 staff to supplement codes from other sources. The latter, designed to meet the needs of the originating organizations, were deemed not entirely appropriate for use at Lewis. However, a review of their features proved most beneficial in the design of the Lewis code.

  4. Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture

    NASA Astrophysics Data System (ADS)

    Meng, Chunfang

    2017-03-01

    We present Defmod, an open source (linear) finite element code that enables us to efficiently model the crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for the (quasi-)static problem and an explicit solver for the dynamic problem. The fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static state and the dynamic state. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against some established results.

  5. Resurrecting Legacy Code Using Ontosoft Knowledge-Sharing and Digital Object Management to Revitalize and Reproduce Software for Groundwater Management Research

    NASA Astrophysics Data System (ADS)

    Kwon, N.; Gentle, J.; Pierce, S. A.

    2015-12-01

    Software code developed for research is often used for a relatively short period of time before it is abandoned, lost, or becomes outdated. This unintentional abandonment of code is a valid problem in the 21st century scientific process, hindering widespread reusability and increasing the effort needed to develop research software. Potentially important assets, these legacy codes may be resurrected and documented digitally for long-term reuse, often with modest effort. Furthermore, the revived code may be openly accessible in a public repository for researchers to reuse or improve. For this study, the research team has begun to revive the codebase for Groundwater Decision Support System (GWDSS), originally developed for participatory decision making to aid urban planning and groundwater management, though it may serve multiple use cases beyond those originally envisioned. GWDSS was designed as a java-based wrapper with loosely federated commercial and open source components. If successfully revitalized, GWDSS will be useful for both practical applications as a teaching tool and case study for groundwater management, as well as informing theoretical research. Using the knowledge-sharing approaches documented by the NSF-funded Ontosoft project, digital documentation of GWDSS is underway, from conception to development, deployment, characterization, integration, composition, and dissemination through open source communities and geosciences modeling frameworks. Information assets, documentation, and examples are shared using open platforms for data sharing and assigned digital object identifiers. Two instances of GWDSS version 3.0 are being created: 1) a virtual machine instance for the original case study to serve as a live demonstration of the decision support tool, assuring the original version is usable, and 2) an open version of the codebase, executable installation files, and developer guide available via an open repository, assuring the source for the application is accessible with version control and potential for new branch developments. Finally, metadata about the software has been completed within the OntoSoft portal to provide descriptive curation, make GWDSS searchable, and complete documentation of the scientific software lifecycle.

  6. Modern Teaching Methods in Physics with the Aid of Original Computer Codes and Graphical Representations

    ERIC Educational Resources Information Center

    Ivanov, Anisoara; Neacsu, Andrei

    2011-01-01

    This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and further more to…

  7. Method for coding low entropy data

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1995-01-01

    A method of lossless data compression for efficient coding of an electronic signal from information sources of very low information rate is disclosed. In this method, S represents a non-negative source symbol set (s_0, s_1, s_2, ..., s_{N-1}) of N symbols with s_i = i. The difference between binary digital data is mapped into the symbol set S. Consecutive symbols in symbol set S are then paired into a new symbol set Gamma, which defines a non-negative symbol set containing the symbols gamma_m obtained as the extension of the original symbol set S. These pairs are then mapped into a comma code, which is defined as a coding scheme in which every codeword is terminated with the same comma pattern, such as a 1. This allows direct coding and decoding of the n-bit positive integer digital data differences without the use of codebooks.
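
    A minimal Python sketch of the comma-code idea (every codeword ends with the same comma pattern, here a single 1), applied directly to small non-negative integers; the patented method additionally maps data differences into the symbol set S and pairs consecutive symbols before coding.

```python
def comma_encode(values):
    """Each non-negative integer n becomes n zeros followed by the comma '1'."""
    return "".join("0" * n + "1" for n in values)

def comma_decode(bitstring):
    """Codewords are self-delimiting: count zeros until the comma appears."""
    values, run = [], 0
    for bit in bitstring:
        if bit == "0":
            run += 1
        else:                      # the comma terminates the codeword
            values.append(run)
            run = 0
    return values

if __name__ == "__main__":
    deltas = [0, 2, 1, 0, 3]              # e.g. mapped data differences
    bits = comma_encode(deltas)           # '1' + '001' + '01' + '1' + '0001'
    assert comma_decode(bits) == deltas
```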

  8. QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials.

    PubMed

    Giannozzi, Paolo; Baroni, Stefano; Bonini, Nicola; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Chiarotti, Guido L; Cococcioni, Matteo; Dabo, Ismaila; Dal Corso, Andrea; de Gironcoli, Stefano; Fabris, Stefano; Fratesi, Guido; Gebauer, Ralph; Gerstmann, Uwe; Gougoussis, Christos; Kokalj, Anton; Lazzeri, Michele; Martin-Samos, Layla; Marzari, Nicola; Mauri, Francesco; Mazzarello, Riccardo; Paolini, Stefano; Pasquarello, Alfredo; Paulatto, Lorenzo; Sbraccia, Carlo; Scandolo, Sandro; Sclauzero, Gabriele; Seitsonen, Ari P; Smogunov, Alexander; Umari, Paolo; Wentzcovitch, Renata M

    2009-09-30

    QUANTUM ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials (norm-conserving, ultrasoft, and projector-augmented wave). The acronym ESPRESSO stands for opEn Source Package for Research in Electronic Structure, Simulation, and Optimization. It is freely available to researchers around the world under the terms of the GNU General Public License. QUANTUM ESPRESSO builds upon newly-restructured electronic-structure codes that have been developed and tested by some of the original authors of novel electronic-structure algorithms and applied in the last twenty years by some of the leading materials modeling groups worldwide. Innovation and efficiency are still its main focus, with special attention paid to massively parallel architectures, and a great effort being devoted to user friendliness. QUANTUM ESPRESSO is evolving towards a distribution of independent and interoperable codes in the spirit of an open-source project, where researchers active in the field of electronic-structure calculations are encouraged to participate in the project by contributing their own codes or by implementing their own ideas into existing codes.

  9. Bayesian Atmospheric Radiative Transfer (BART)Thermochemical Equilibrium Abundance (TEA) Code and Application to WASP-43b

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Harrington, Joseph; Bowman, Matthew O.; Cubillos, Patricio E.; Stemm, Madison; Foster, Andrew

    2014-11-01

    We present a new, open-source, Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. TEA uses the Gibbs-free-energy minimization method with an iterative Lagrangian optimization scheme. It initializes the radiative-transfer calculation in our Bayesian Atmospheric Radiative Transfer (BART) code. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. The code is tested against the original method developed by White et al. (1958), the analytic method developed by Burrows and Sharp (1999), and the Newton-Raphson method implemented in the open-source Chemical Equilibrium with Applications (CEA) code. TEA is written in Python and is available to the community via the open-source development site GitHub.com. We also present BART applied to eclipse depths of the exoplanet WASP-43b, constraining atmospheric thermal and chemical parameters. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
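
    For reference, the ideal-gas Gibbs-energy minimization that TEA's iterative Lagrangian scheme addresses has the standard White et al. (1958) form (notation generic, not taken from the TEA paper):

```latex
\min_{n_i \,\ge\, 0}\;
\frac{G(\mathbf{n})}{RT}
 \;=\; \sum_{i} n_i \left[ \frac{\Delta G^{\circ}_{f,i}}{RT}
       + \ln P + \ln \frac{n_i}{\sum_{j} n_j} \right]
\qquad \text{subject to} \qquad
\sum_{i} a_{ij}\, n_i \;=\; b_j \quad \text{for every element } j,
```

    where n_i are the molar abundances of the gaseous species, a_ij the number of atoms of element j in species i, and b_j the total elemental abundances.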

  10. Computer Simulation of the VASIMR Engine

    NASA Technical Reports Server (NTRS)

    Garrison, David

    2005-01-01

    The goal of this project is to develop a magneto-hydrodynamic (MHD) computer code for simulation of the VASIMR engine. This code is designed to be easy to modify and use. We achieve this using the Cactus framework, a system originally developed for research in numerical relativity. Since its release, Cactus has become an extremely powerful and flexible open source framework. The development of the code will be done in stages, starting with a basic fluid dynamic simulation and working towards a more complex MHD code. Once developed, this code can be used by students and researchers in order to further test and improve the VASIMR engine.

  11. Methods for Coding Tobacco-Related Twitter Data: A Systematic Review

    PubMed Central

    Unger, Jennifer B; Cruz, Tess Boley; Chu, Kar-Hai

    2017-01-01

    Background As Twitter has grown in popularity to 313 million monthly active users, researchers have increasingly been using it as a data source for tobacco-related research. Objective The objective of this systematic review was to assess the methodological approaches of categorically coded tobacco Twitter data and make recommendations for future studies. Methods Data sources included PsycINFO, Web of Science, PubMed, ABI/INFORM, Communication Source, and Tobacco Regulatory Science. Searches were limited to peer-reviewed journals and conference proceedings in English from January 2006 to July 2016. The initial search identified 274 articles using a Twitter keyword and a tobacco keyword. One coder reviewed all abstracts and identified 27 articles that met the following inclusion criteria: (1) original research, (2) focused on tobacco or a tobacco product, (3) analyzed Twitter data, and (4) coded Twitter data categorically. One coder extracted data collection and coding methods. Results E-cigarettes were the most common type of Twitter data analyzed, followed by specific tobacco campaigns. The most prevalent data sources were Gnip and Twitter’s Streaming application programming interface (API). The primary methods of coding were hand-coding and machine learning. The studies predominantly coded for relevance, sentiment, theme, user or account, and location of user. Conclusions Standards for data collection and coding should be developed to be able to more easily compare and replicate tobacco-related Twitter results. Additional recommendations include the following: sample Twitter’s databases multiple times, make a distinction between message attitude and emotional tone for sentiment, code images and URLs, and analyze user profiles. Being relatively novel and widely used among adolescents and black and Hispanic individuals, Twitter could provide a rich source of tobacco surveillance data among vulnerable populations. PMID:28363883

  12. A plug-in to Eclipse for VHDL source codes: functionalities

    NASA Astrophysics Data System (ADS)

    Niton, B.; Poźniak, K. T.; Romaniuk, R. S.

    The paper presents an original application, written by the authors, which supports writing and editing of source code in the VHDL language. It is a step towards fully automatic, augmented code writing for photonic and electronic systems, including systems based on FPGA and/or DSP processors. An implementation based on VEditor, a free-license program, is described; the work presented in this paper thus supplements and extends that freely licensed tool. The introduction briefly characterizes the tools available on the market for aiding the design of electronic systems in VHDL, with particular attention paid to plug-ins for the Eclipse environment and the Emacs program. The detailed properties of the written plug-in are then presented, including the programming extension concept and the results of the formatter, re-factorizer, code hider, and other new additions to the VEditor program.

  13. An algorithm to identify rheumatoid arthritis in primary care: a Clinical Practice Research Datalink study

    PubMed Central

    Muller, Sara; Hider, Samantha L; Raza, Karim; Stack, Rebecca J; Hayward, Richard A; Mallen, Christian D

    2015-01-01

    Objective Rheumatoid arthritis (RA) is a multisystem, inflammatory disorder associated with increased levels of morbidity and mortality. While much research into the condition is conducted in the secondary care setting, routinely collected primary care databases provide an important source of research data. This study aimed to update an algorithm to define RA that was previously developed and validated in the General Practice Research Database (GPRD). Methods The original algorithm consisted of two criteria. Individuals meeting at least one were considered to have RA. Criterion 1: ≥1 RA Read code and a disease modifying antirheumatic drug (DMARD) without an alternative indication. Criterion 2: ≥2 RA Read codes, with at least one ‘strong’ code and no alternative diagnoses. Lists of codes for consultations and prescriptions were obtained from the authors of the original algorithm where these were available, or compiled based on the original description and clinical knowledge. 4161 people with a first Read code for RA between 1 January 2010 and 31 December 2012 were selected from the Clinical Practice Research Datalink (CPRD, successor to the GPRD), and the criteria applied. Results Code lists were updated for the introduction of new Read codes and biological DMARDs. 3577/4161 (86%) of people met the updated algorithm for RA, compared to 61% in the original development study. 62.8% of people fulfilled both Criterion 1 and Criterion 2. Conclusions Those wishing to define RA in the CPRD, should consider using this updated algorithm, rather than a single RA code, if they wish to identify only those who are most likely to have RA. PMID:26700281
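
    A schematic Python sketch of how the two criteria combine at the patient level; all code, drug, and diagnosis lists below are hypothetical placeholders, since the real lists come from the algorithm's published appendices.

```python
# Hypothetical code sets; the real lists come from the algorithm's appendices.
RA_READ_CODES = {"N040.", "N040000", "N042."}      # illustrative only
STRONG_RA_CODES = {"N040."}
DMARDS = {"methotrexate", "sulfasalazine", "etanercept"}
DMARD_ALT_INDICATIONS = {"psoriasis", "crohns disease"}
ALT_DIAGNOSES = {"psoriatic arthritis", "ankylosing spondylitis"}

def meets_criterion_1(read_codes, prescriptions, conditions):
    """>=1 RA Read code plus a DMARD with no alternative indication."""
    has_ra_code = bool(read_codes & RA_READ_CODES)
    has_dmard = bool(prescriptions & DMARDS)
    alt_indication = bool(conditions & DMARD_ALT_INDICATIONS)
    return has_ra_code and has_dmard and not alt_indication

def meets_criterion_2(read_codes, conditions):
    """>=2 RA Read codes, at least one 'strong' code, and no alternative diagnosis."""
    ra_codes = read_codes & RA_READ_CODES
    return (len(ra_codes) >= 2
            and bool(ra_codes & STRONG_RA_CODES)
            and not (conditions & ALT_DIAGNOSES))

def classify_ra(read_codes, prescriptions, conditions):
    """A patient meeting at least one criterion is considered to have RA."""
    return (meets_criterion_1(read_codes, prescriptions, conditions)
            or meets_criterion_2(read_codes, conditions))

if __name__ == "__main__":
    print(classify_ra({"N040."}, {"methotrexate"}, set()))          # True (criterion 1)
    print(classify_ra({"N040.", "N042."}, set(), set()))            # True (criterion 2)
    print(classify_ra({"N042."}, set(), {"psoriatic arthritis"}))   # False
```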

  14. Fac-Back-OPAC: An Open Source Interface to Your Library System

    ERIC Educational Resources Information Center

    Beccaria, Mike; Scott, Dan

    2007-01-01

    The new Fac-Back-OPAC (a faceted backup OPAC) is built on code that was originally developed by Casey Durfee in February 2007. It represents the convergence of two prominent trends in library tools: the decoupling of discovery tools from the traditional integrated library system (ILS) and the use of readily available open source components to…

  15. The reliability of information on work-related injuries available from hospitalisation data in Australia.

    PubMed

    McKenzie, Kirsten; Mitchell, Rebecca; Scott, Deborah Anne; Harrison, James Edward; McClure, Roderick John

    2009-08-01

    To examine the reliability of work-related activity coding for injury-related hospitalisations in Australia. A random sample of 4,373 injury-related hospital separations from 1 July 2002 to 30 June 2004 were obtained from a stratified random sample of 50 hospitals across four states in Australia. From this sample, cases were identified as work-related if they contained an ICD-10-AM work-related activity code (U73) allocated by either: (i) the original coder; (ii) an independent auditor, blinded to the original code; or (iii) a research assistant, blinded to both the original and auditor codes, who reviewed narrative text extracted from the medical record. The concordance of activity coding and number of cases identified as work-related using each method were compared. Of the 4,373 cases sampled, 318 cases were identified as being work-related using any of the three methods for identification. The original coder identified 217 and the auditor identified 266 work-related cases (68.2% and 83.6% of the total cases identified, respectively). Around 10% of cases were only identified through the text description review. The original coder and auditor agreed on the assignment of work-relatedness for 68.9% of cases. The best estimates of the frequency of hospital admissions for occupational injury underestimate the burden by around 32%. This is a substantial underestimate that has major implications for public policy, and highlights the need for further work on improving the quality and completeness of routine, administrative data sources for a more complete identification of work-related injuries.

  16. The Sensitivity of Adverse Event Cost Estimates to Diagnostic Coding Error

    PubMed Central

    Wardle, Gavin; Wodchis, Walter P; Laporte, Audrey; Anderson, Geoffrey M; Baker, Ross G

    2012-01-01

    Objective To examine the impact of diagnostic coding error on estimates of hospital costs attributable to adverse events. Data Sources Original and reabstracted medical records of 9,670 complex medical and surgical admissions at 11 hospital corporations in Ontario from 2002 to 2004. Patient specific costs, not including physician payments, were retrieved from the Ontario Case Costing Initiative database. Study Design Adverse events were identified among the original and reabstracted records using ICD10-CA (Canadian adaptation of ICD10) codes flagged as postadmission complications. Propensity score matching and multivariate regression analysis were used to estimate the cost of the adverse events and to determine the sensitivity of cost estimates to diagnostic coding error. Principal Findings Estimates of the cost of the adverse events ranged from $16,008 (metabolic derangement) to $30,176 (upper gastrointestinal bleeding). Coding errors caused the total cost attributable to the adverse events to be underestimated by 16 percent. The impact of coding error on adverse event cost estimates was highly variable at the organizational level. Conclusions Estimates of adverse event costs are highly sensitive to coding error. Adverse event costs may be significantly underestimated if the likelihood of error is ignored. PMID:22091908

  17. Effects of Voice Coding and Speech Rate on a Synthetic Speech Display in a Telephone Information System

    DTIC Science & Technology

    1988-05-01

    Figure 2. Original limited-capacity channel model (From Broadbent, 1958). Figure 3. Experimental... unlimited variety of human voices for digital recording sources. Synthesis by analysis: analysis-synthesis methods electronically model the human voice

  18. Common Pitfalls in F77 Code Conversion

    DTIC Science & Technology

    2003-02-01

    implementation versus another are the source of these errors rather than typography. It is well to use the practice of commenting-out original source file lines...identifier), every I in the format field must be replaced with f followed by an appropriate floating point format designator. Floating point numeric...helps even more. Finally, libraries are a major source of non-portablility[sic], with graphics libraries one of the chief culprits. We in Fusion

  19. A Flexible and Non-intrusive Approach for Computing Complex Structural Coverage Metrics

    NASA Technical Reports Server (NTRS)

    Whalen, Michael W.; Person, Suzette J.; Rungta, Neha; Staats, Matt; Grijincu, Daniela

    2015-01-01

    Software analysis tools and techniques often leverage structural code coverage information to reason about the dynamic behavior of software. Existing techniques instrument the code with the required structural obligations and then monitor the execution of the compiled code to report coverage. Instrumentation based approaches often incur considerable runtime overhead for complex structural coverage metrics such as Modified Condition/Decision Coverage (MC/DC). Code instrumentation, in general, has to be approached with great care to ensure it does not modify the behavior of the original code. Furthermore, instrumented code cannot be used in conjunction with other analyses that reason about the structure and semantics of the code under test. In this work, we introduce a non-intrusive preprocessing approach for computing structural coverage information. It uses a static partial evaluation of the decisions in the source code and a source-to-bytecode mapping to generate the information necessary to efficiently track structural coverage metrics during execution. Our technique is flexible; the results of the preprocessing can be used by a variety of coverage-driven software analysis tasks, including automated analyses that are not possible for instrumented code. Experimental results in the context of symbolic execution show the efficiency and flexibility of our non-intrusive approach for computing code coverage information.

  20. Reliability of routinely collected hospital data for child maltreatment surveillance.

    PubMed

    McKenzie, Kirsten; Scott, Debbie A; Waller, Garry S; Campbell, Margaret

    2011-01-05

    Internationally, research on child maltreatment-related injuries has been hampered by a lack of available routinely collected health data to identify cases, examine causes, identify risk factors and explore health outcomes. Routinely collected hospital separation data coded using the International Classification of Diseases and Related Health Problems (ICD) system provide an internationally standardised data source for classifying and aggregating diseases, injuries, causes of injuries and related health conditions for statistical purposes. However, there has been limited research to examine the reliability of these data for child maltreatment surveillance purposes. This study examined the reliability of coding of child maltreatment in Queensland, Australia. A retrospective medical record review and recoding methodology was used to assess the reliability of coding of child maltreatment. A stratified sample of hospitals across Queensland was selected for this study, and a stratified random sample of cases was selected from within those hospitals. In 3.6% of cases the coders disagreed on whether any maltreatment code could be assigned (definite or possible) versus no maltreatment being assigned (unintentional injury), giving a sensitivity of 0.982 and specificity of 0.948. The review of these cases where discrepancies existed revealed that all cases had some indications of risk documented in the records. 15.5% of cases originally assigned a definite or possible maltreatment code, were recoded to a more or less definite strata. In terms of the number and type of maltreatment codes assigned, the auditor assigned a greater number of maltreatment types based on the medical documentation than the original coder assigned (22% of the auditor coded cases had more than one maltreatment type assigned compared to only 6% of the original coded data). The maltreatment types which were the most 'under-coded' by the original coder were psychological abuse and neglect. Cases coded with a sexual abuse code showed the highest level of reliability. Given the increasing international attention being given to improving the uniformity of reporting of child-maltreatment related injuries and the emphasis on the better utilisation of routinely collected health data, this study provides an estimate of the reliability of maltreatment-specific ICD-10-AM codes assigned in an inpatient setting.

  1. Reliability of Routinely Collected Hospital Data for Child Maltreatment Surveillance

    PubMed Central

    2011-01-01

    Background Internationally, research on child maltreatment-related injuries has been hampered by a lack of available routinely collected health data to identify cases, examine causes, identify risk factors and explore health outcomes. Routinely collected hospital separation data coded using the International Classification of Diseases and Related Health Problems (ICD) system provide an internationally standardised data source for classifying and aggregating diseases, injuries, causes of injuries and related health conditions for statistical purposes. However, there has been limited research to examine the reliability of these data for child maltreatment surveillance purposes. This study examined the reliability of coding of child maltreatment in Queensland, Australia. Methods A retrospective medical record review and recoding methodology was used to assess the reliability of coding of child maltreatment. A stratified sample of hospitals across Queensland was selected for this study, and a stratified random sample of cases was selected from within those hospitals. Results In 3.6% of cases the coders disagreed on whether any maltreatment code could be assigned (definite or possible) versus no maltreatment being assigned (unintentional injury), giving a sensitivity of 0.982 and specificity of 0.948. The review of these cases where discrepancies existed revealed that all cases had some indications of risk documented in the records. 15.5% of cases originally assigned a definite or possible maltreatment code, were recoded to a more or less definite strata. In terms of the number and type of maltreatment codes assigned, the auditor assigned a greater number of maltreatment types based on the medical documentation than the original coder assigned (22% of the auditor coded cases had more than one maltreatment type assigned compared to only 6% of the original coded data). The maltreatment types which were the most 'under-coded' by the original coder were psychological abuse and neglect. Cases coded with a sexual abuse code showed the highest level of reliability. Conclusion Given the increasing international attention being given to improving the uniformity of reporting of child-maltreatment related injuries and the emphasis on the better utilisation of routinely collected health data, this study provides an estimate of the reliability of maltreatment-specific ICD-10-AM codes assigned in an inpatient setting. PMID:21208411

  2. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
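
    One of the listed techniques, replacing a linear table search with a binary one, is easy to illustrate outside FORTRAN; a Python sketch of an energy-grid interval lookup follows (the grid and the equivalence test are illustrative, not taken from ITS):

```python
import bisect
import random

energy_grid = sorted(random.uniform(0.01, 20.0) for _ in range(10_000))

def interval_linear(grid, e):
    """Original style: O(n) scan for the bracketing interval."""
    for i in range(len(grid) - 1):
        if grid[i] <= e < grid[i + 1]:
            return i
    return len(grid) - 2

def interval_binary(grid, e):
    """Accelerated style: O(log n) bisection, same result for in-range energies."""
    i = bisect.bisect_right(grid, e) - 1
    return min(max(i, 0), len(grid) - 2)

if __name__ == "__main__":
    for _ in range(1000):
        e = random.uniform(energy_grid[0], energy_grid[-1])
        assert interval_linear(energy_grid, e) == interval_binary(energy_grid, e)
    print("binary and linear lookups agree")
```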

  3. Photoplus: auxiliary information for printed images based on distributed source coding

    NASA Astrophysics Data System (ADS)

    Samadani, Ramin; Mukherjee, Debargha

    2008-01-01

    A printed photograph is difficult to reuse because the digital information that generated the print may no longer be available. This paper describes a mechanism for approximating the original digital image by combining a scan of the printed photograph with small amounts of digital auxiliary information kept together with the print. The auxiliary information consists of a small amount of digital data to enable accurate registration and color-reproduction, followed by a larger amount of digital data to recover residual errors and lost frequencies by distributed Wyner-Ziv coding techniques. Approximating the original digital image enables many uses, including making good quality reprints from the original print, even when they are faded many years later. In essence, the print itself becomes the currency for archiving and repurposing digital images, without requiring computer infrastructure.

  4. Origins of Genes: "Big Bang" or Continuous Creation?

    NASA Astrophysics Data System (ADS)

    Keese, Paul K.; Gibbs, Adrian

    1992-10-01

    Many protein families are common to all cellular organisms, indicating that many genes have ancient origins. Genetic variation is mostly attributed to processes such as mutation, duplication, and rearrangement of ancient modules. Thus it is widely assumed that much of present-day genetic diversity can be traced by common ancestry to a molecular "big bang." A rarely considered alternative is that proteins may arise continuously de novo. One mechanism of generating different coding sequences is by "overprinting," in which an existing nucleotide sequence is translated de novo in a different reading frame or from noncoding open reading frames. The clearest evidence for overprinting is provided when the original gene function is retained, as in overlapping genes. Analysis of their phylogenies indicates which are the original genes and which are their informationally novel partners. We report here the phylogenetic relationships of overlapping coding sequences from steroid-related receptor genes and from tymovirus, luteovirus, and lentivirus genomes. For each pair of overlapping coding sequences, one is confined to a single lineage, whereas the other is more widespread. This suggests that the phylogenetically restricted coding sequence arose only in the progenitor of that lineage by translating an out-of-frame sequence to yield the new polypeptide. The production of novel exons by alternative splicing in thyroid receptor and lentivirus genes suggests that introns can be a valuable evolutionary source for overprinting. New genes and their products may drive major evolutionary changes.

  5. Evaluation of Diagnostic Codes in Morbidity and Mortality Data Sources for Heat-Related Illness Surveillance

    PubMed Central

    Watkins, Sharon

    2017-01-01

    Objectives: The primary objective of this study was to identify patients with heat-related illness (HRI) using codes for heat-related injury diagnosis and external cause of injury in 3 administrative data sets: emergency department (ED) visit records, hospital discharge records, and death certificates. Methods: We obtained data on ED visits, hospitalizations, and deaths for Florida residents for May 1 through October 31, 2005-2012. To identify patients with HRI, we used codes from the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) to search data on ED visits and hospitalizations and codes from the International Classification of Diseases, Tenth Revision (ICD-10) to search data on deaths. We stratified the results by data source and whether the HRI was work related. Results: We identified 23 981 ED visits, 4816 hospitalizations, and 140 deaths in patients with non–work-related HRI and 2979 ED visits, 415 hospitalizations, and 23 deaths in patients with work-related HRI. The most common diagnosis codes among patients were for severe HRI (heat exhaustion or heatstroke). The proportion of patients with a severe HRI diagnosis increased with data source severity. If ICD-9-CM code E900.1 and ICD-10 code W92 (excessive heat of man-made origin) were used as exclusion criteria for HRI, 5.0% of patients with non–work-related deaths, 3.0% of patients with work-related ED visits, and 1.7% of patients with work-related hospitalizations would have been removed. Conclusions: Using multiple data sources and all diagnosis fields may improve the sensitivity of HRI surveillance. Future studies should evaluate the impact of converting ICD-9-CM to ICD-10-CM codes on HRI surveillance of ED visits and hospitalizations. PMID:28379784
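
    As a rough illustration of the case-finding step described above, the Python sketch below flags records whose diagnosis fields contain heat-related codes; the record layout and the two example codes (ICD-9-CM 992.x, effects of heat, and E900.1, excessive heat of man-made origin) are illustrative assumptions rather than the study's full code list.

        HRI_PREFIXES = ("992",)        # ICD-9-CM 992.x: effects of heat and light (example only)
        MANMADE_HEAT = ("E9001",)      # E900.1: excessive heat of man-made origin

        def is_hri(record, exclude_manmade=False):
            """Return True if any diagnosis field of the record carries an HRI code."""
            codes = [c.replace(".", "").upper() for c in record.get("dx_codes", []) if c]
            hit = any(c.startswith(HRI_PREFIXES) for c in codes)
            if hit and exclude_manmade and any(c.startswith(MANMADE_HEAT) for c in codes):
                return False
            return hit

        ed_visit = {"dx_codes": ["992.5", "276.51"]}   # heat exhaustion plus dehydration
        print(is_hri(ed_visit))                        # True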

  6. Open source clustering software.

    PubMed

    de Hoon, M J L; Imoto, S; Nolan, J; Miyano, S

    2004-06-12

    We have implemented k-means clustering, hierarchical clustering and self-organizing maps in a single multipurpose open-source library of C routines, callable from other C and C++ programs. Using this library, we have created an improved version of Michael Eisen's well-known Cluster program for Windows, Mac OS X and Linux/Unix. In addition, we generated a Python and a Perl interface to the C Clustering Library, thereby combining the flexibility of a scripting language with the speed of C. The C Clustering Library and the corresponding Python C extension module Pycluster were released under the Python License, while the Perl module Algorithm::Cluster was released under the Artistic License. The GUI code Cluster 3.0 for Windows, Macintosh and Linux/Unix, as well as the corresponding command-line program, were released under the same license as the original Cluster code. The complete source code is available at http://bonsai.ims.u-tokyo.ac.jp/mdehoon/software/cluster. Alternatively, Algorithm::Cluster can be downloaded from CPAN, while Pycluster is also available as part of the Biopython distribution.
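
    For readers unfamiliar with the algorithms the library bundles, the following is a minimal NumPy sketch of Lloyd's k-means iteration; it is a generic illustration only and does not reproduce the C Clustering Library or Pycluster API.

        import numpy as np

        def kmeans(X, k, n_iter=100, seed=0):
            """Minimal Lloyd's algorithm: returns (labels, centroids)."""
            rng = np.random.default_rng(seed)
            centroids = X[rng.choice(len(X), size=k, replace=False)]
            for _ in range(n_iter):
                # assign every point to its nearest centroid
                d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
                labels = d.argmin(axis=1)
                # recompute centroids, keeping the old one if a cluster empties
                new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centroids[j] for j in range(k)])
                if np.allclose(new, centroids):
                    break
                centroids = new
            return labels, centroids

        X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
        labels, centroids = kmeans(X, k=2)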

  7. From The Ground Up I: Light Pollution Sources in Flagstaff, Arizona

    DTIC Science & Technology

    2009-02-01

    Lighting Codes: The 1989 lighting codes for the first time anywhere established overall limits on the amount of outdoor lighting. Originally, four ... fiducial time. It is important to note at least one category of lighting that we did not include or study, which may be a substantial contributor to the ...

  8. JSPAM: A restricted three-body code for simulating interacting galaxies

    NASA Astrophysics Data System (ADS)

    Wallin, J. F.; Holincheck, A. J.; Harvey, A.

    2016-07-01

    Restricted three-body codes have a proven ability to recreate much of the disturbed morphology of actual interacting galaxies. As more sophisticated n-body models were developed and computer speed increased, restricted three-body codes fell out of favor. However, their supporting role for performing wide searches of parameter space when fitting orbits to real systems demonstrates a continuing need for their use. Here we present the model and algorithm used in the JSPAM code. A precursor of this code was originally described in 1990, and was called SPAM. We have recently updated the software with an alternate potential and a treatment of dynamical friction to more closely mimic the results from n-body tree codes. The code is released publicly for use under the terms of the Academic Free License ("AFL") v. 3.0 and has been added to the Astrophysics Source Code Library.
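
    The restricted three-body idea is compact enough to sketch: massless test particles representing the discs respond to two (here fixed) softened point masses representing the galaxy centres, but exert no force back. The Python toy below illustrates that scheme with a leapfrog integrator; it is not the JSPAM algorithm, which additionally moves the centres and includes an alternate potential and dynamical friction.

        import numpy as np

        G, EPS = 1.0, 0.3                      # gravitational constant and softening, code units

        def accel(pos, centres, masses):
            """Softened point-mass acceleration on test particles from the galaxy centres."""
            a = np.zeros_like(pos)
            for c, m in zip(centres, masses):
                d = c - pos
                r2 = (d * d).sum(axis=1) + EPS**2
                a += G * m * d / r2[:, None]**1.5
            return a

        # toy setup: a ring of test particles around galaxy 1, with a perturber nearby
        theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
        pos = np.column_stack([np.cos(theta), np.sin(theta)])
        vel = np.column_stack([-np.sin(theta), np.cos(theta)])       # roughly circular motion
        centres = np.array([[0.0, 0.0], [8.0, 4.0]])
        masses = np.array([1.0, 0.5])

        dt = 0.01
        for _ in range(1000):                  # leapfrog (kick-drift-kick)
            vel += 0.5 * dt * accel(pos, centres, masses)
            pos += dt * vel
            vel += 0.5 * dt * accel(pos, centres, masses)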

  9. SPRAI: coupling of radiative feedback and primordial chemistry in moving mesh hydrodynamics

    NASA Astrophysics Data System (ADS)

    Jaura, O.; Glover, S. C. O.; Klessen, R. S.; Paardekooper, J.-P.

    2018-04-01

    In this paper, we introduce a new radiative transfer code SPRAI (Simplex Photon Radiation in the Arepo Implementation) based on the SIMPLEX radiation transfer method. This method, originally used only for post-processing, is now directly integrated into the AREPO code and takes advantage of its adaptive unstructured mesh. Radiated photons are transferred from the sources through the series of Voronoi gas cells within a specific solid angle. From the photon attenuation, we derive corresponding photon fluxes and ionization rates and feed them to a primordial chemistry module. This gives us a self-consistent method for studying dynamical and chemical processes caused by ionizing sources in primordial gas. Since the computational cost of the SIMPLEX method does not scale directly with the number of sources, it is convenient for studying systems such as primordial star-forming haloes that may form multiple ionizing sources.

  10. Reload of an industrial cylindrical cobalt source rack

    NASA Astrophysics Data System (ADS)

    Gharbi, F.; Kadri, O.; Trabelsi, A.

    2006-10-01

    This work presents a Monte Carlo study of the cylindrical cobalt source rack geometry of the Tunisian gamma irradiation facility, using the GEANT code developed at CERN. The study investigates the question of the reload of the source rack. The studied configurations consist of housing four new cobalt pencils, two in the upper and two in the lower cylinder of the source rack. Global dose rate uniformity inside a "dummy" product for the case of routine and nonroutine irradiation, and as a function of the product bulk density, was calculated for eight hypothetical configurations. The same calculation was also performed for both the original and the ideal (but not practical) configurations. It was shown that hypothetical cases produced dose uniformity variations, according to product density, that were statistically no different from the original and the ideal configurations and that the reload procedure cannot improve the irradiation quality inside the facilities using cylindrical cobalt source racks.

  11. CUTEX: CUrvature Thresholding EXtractor

    NASA Astrophysics Data System (ADS)

    Molinari, S.; Schisano, E.; Faustini, F.; Pestalozzi, M.; di Giorgio, A. M.; Liu, S.

    2017-08-01

    CuTEx analyzes images in the infrared bands and extracts sources from complex backgrounds, particularly star-forming regions that offer the challenges of crowding, a highly spatially variable background, and non-PSF profiles such as protostars in their accreting phase. The code is composed of two main algorithms, the first for source detection and the second for flux extraction. The code was originally written in IDL and has been exported to the license-free GDL language. CuTEx can be used in other bands or in scientific cases different from the native one. This software is also available as an on-line tool from the Multi-Mission Interactive Archive web pages dedicated to the Herschel Observatory.
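
    The curvature-thresholding idea can be illustrated in a few lines: take second derivatives of the image and keep pixels whose curvature exceeds a threshold along both axes, which suppresses a smooth, spatially variable background while keeping compact bumps. The NumPy sketch below is a schematic stand-in, not the CuTEx detection algorithm.

        import numpy as np

        def curvature_mask(img, thresh):
            """Flag pixels where the negative second derivative along both axes exceeds thresh."""
            d2x = np.gradient(np.gradient(img, axis=1), axis=1)
            d2y = np.gradient(np.gradient(img, axis=0), axis=0)
            return (-d2x > thresh) & (-d2y > thresh)

        # toy image: a smooth ramp background plus one compact source
        y, x = np.mgrid[0:64, 0:64]
        img = 0.05 * x + np.exp(-((x - 32.0)**2 + (y - 32.0)**2) / 4.0)
        mask = curvature_mask(img, thresh=0.05)
        print(mask.sum(), "candidate pixels around", np.argwhere(mask).mean(axis=0))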

  12. 22 CFR 228.52 - Suppliers of commodities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Suppliers of commodities. 228.52 Section 228.52 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT RULES ON SOURCE, ORIGIN AND NATIONALITY FOR COMMODITIES AND SERVICES FINANCED BY USAID Waivers § 228.52 Suppliers of commodities. Geographic code changes...

  13. Origins of genes: "big bang" or continuous creation?

    PubMed Central

    Keese, P K; Gibbs, A

    1992-01-01

    Many protein families are common to all cellular organisms, indicating that many genes have ancient origins. Genetic variation is mostly attributed to processes such as mutation, duplication, and rearrangement of ancient modules. Thus it is widely assumed that much of present-day genetic diversity can be traced by common ancestry to a molecular "big bang." A rarely considered alternative is that proteins may arise continuously de novo. One mechanism of generating different coding sequences is by "overprinting," in which an existing nucleotide sequence is translated de novo in a different reading frame or from noncoding open reading frames. The clearest evidence for overprinting is provided when the original gene function is retained, as in overlapping genes. Analysis of their phylogenies indicates which are the original genes and which are their informationally novel partners. We report here the phylogenetic relationships of overlapping coding sequences from steroid-related receptor genes and from tymovirus, luteovirus, and lentivirus genomes. For each pair of overlapping coding sequences, one is confined to a single lineage, whereas the other is more widespread. This suggests that the phylogenetically restricted coding sequence arose only in the progenitor of that lineage by translating an out-of-frame sequence to yield the new polypeptide. The production of novel exons by alternative splicing in thyroid receptor and lentivirus genes suggests that introns can be a valuable evolutionary source for overprinting. New genes and their products may drive major evolutionary changes. PMID:1329098

  14. Reply to Comment on ‘egs_brachy: a versatile and fast Monte Carlo code for brachytherapy’

    NASA Astrophysics Data System (ADS)

    Thomson, Rowan M.; Taylor, Randle E. P.; Chamberland, Marc J. P.; Rogers, D. W. O.

    2018-02-01

    We respond to the comments by Dr Yegin by identifying the source of an error in a fit in our original paper but arguing that the lack of a fit does not affect the conclusion based on the raw data.

  15. Technical Support Document for Version 3.4.0 of the COMcheck Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan

    2007-09-14

    COMcheck provides an optional way to demonstrate compliance with commercial and high-rise residential building energy codes. Commercial buildings include all use groups except single family and multifamily not over three stories in height. COMcheck was originally based on ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989) requirements and is intended for use with various codes based on Standard 90.1, including the Codification of ASHRAE/IES Standard 90.1-1989 (90.1-1989 Code) (ASHRAE 1989a, 1993b) and ASHRAE/IESNA Standard 90.1-1999 (Standard 90.1-1999). This includes jurisdictions that have adopted the 90.1-1989 Code, Standard 90.1-1989, Standard 90.1-1999, or their own code based on one of these. We view Standard 90.1-1989 and the 90.1-1989 Code as having equivalent technical content and have used both as source documents in developing COMcheck. This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards.

  16. Injecting Artificial Memory Errors Into a Running Computer Program

    NASA Technical Reports Server (NTRS)

    Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.

    2008-01-01

    Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of effect of the SEU on a floating-point value or other program variable.
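
    The injection mechanism itself is easy to mimic outside Valgrind. The Python sketch below flips a random bit in each byte of a buffer with a given fault probability and logs what was hit, loosely analogous to the probabilistic injection described above; it is an illustration, not BITFLIPS.

        import random

        def inject_seus(buf: bytearray, p_fault: float, rng=random):
            """Flip one random bit per byte with probability p_fault; return a log of hits."""
            log = []
            for i in range(len(buf)):
                if rng.random() < p_fault:
                    bit = rng.randrange(8)
                    buf[i] ^= 1 << bit
                    log.append((i, bit))
            return log

        data = bytearray(b"scientific payload data")
        hits = inject_seus(data, p_fault=0.05)
        print(hits, bytes(data))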

  17. Assessing the Viability of Social Media for Disseminating Evidence-Based Nutrition Practice Guideline Through Content Analysis of Twitter Messages and Health Professional Interviews: An Observational Study.

    PubMed

    Hand, Rosa K; Kenne, Deric; Wolfram, Taylor M; Abram, Jenica K; Fleming, Michael

    2016-11-15

    Given the high penetration of social media use, social media has been proposed as a method for the dissemination of information to health professionals and patients. This study explored the potential for social media dissemination of the Academy of Nutrition and Dietetics Evidence-Based Nutrition Practice Guideline (EBNPG) for Heart Failure (HF). The objectives were to (1) describe the existing social media content on HF, including message content, source, and target audience, and (2) describe the attitude of physicians and registered dietitian nutritionists (RDNs) who care for outpatient HF patients toward the use of social media as a method to obtain information for themselves and to share this information with patients. The methods were divided into 2 parts. Part 1 involved conducting a content analysis of tweets related to HF, which were downloaded from Twitonomy and assigned codes for message content (19 codes), source (9 codes), and target audience (9 codes); code frequency was described. A comparison in the popularity of tweets (those marked as favorites or retweeted) based on applied codes was made using t tests. Part 2 involved conducting phone interviews with RDNs and physicians to describe health professionals' attitude toward the use of social media to communicate general health information and information specifically related to the HF EBNPG. Interviews were transcribed and coded; exemplar quotes representing frequent themes are presented. The sample included 294 original tweets with the hashtag "#heartfailure." The most frequent message content codes were "HF awareness" (166/294, 56.5%) and "patient support" (97/294, 33.0%). The most frequent source codes were "professional, government, patient advocacy organization, or charity" (112/277, 40.4%) and "patient or family" (105/277, 37.9%). The most frequent target audience codes were "unable to identify" (111/277, 40.1%) and "other" (55/277, 19.9%). Significant differences were found in the popularity of tweets with (mean 1, SD 1.3 favorites) or without (mean 0.7, SD 1.3 favorites), the content code being "HF research" (P=.049). Tweets with the source code "professional, government, patient advocacy organizations, or charities" were significantly more likely to be marked as a favorite and retweeted than those without this source code (mean 1.2, SD 1.4 vs mean 0.8, SD 1.2, P=.03) and (mean 1.5, SD 1.8 vs mean 0.9, SD 2.0, P=.03). Interview participants believed that social media was a useful way to gather professional information. They did not believe that social media was useful for communicating with patients due to privacy concerns and the fact that the information had to be kept general rather than be tailored for a specific patient and the belief that their patients did not use social media or technology. Existing Twitter content related to HF comes from a combination of patients and evidence-based organizations; however, there is little nutrition content. That gap may present an opportunity for EBNPG dissemination. Health professionals use social media to gather information for themselves but are skeptical of its value when communicating with patients, particularly due to privacy concerns and misconceptions about the characteristics of social media users. ©Rosa K Hand, Deric Kenne, Taylor M Wolfram, Jenica K Abram, Michael Fleming. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 15.11.2016.

  18. TANDEM: matching proteins with tandem mass spectra.

    PubMed

    Craig, Robertson; Beavis, Ronald C

    2004-06-12

    Tandem mass spectra obtained from fragmenting peptide ions contain some peptide sequence specific information, but often there is not enough information to sequence the original peptide completely. Several proprietary software applications have been developed to attempt to match the spectra with a list of protein sequences that may contain the sequence of the peptide. The application TANDEM was written to provide the proteomics research community with a set of components that can be used to test new methods and algorithms for performing this type of sequence-to-data matching. The source code and binaries for this software are available at http://www.proteome.ca/opensource.html, for Windows, Linux and Macintosh OSX. The source code is made available under the Artistic License, from the authors.

  19. Parallel processing a three-dimensional free-lagrange code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandell, D.A.; Trease, H.E.

    1989-01-01

    A three-dimensional, time-dependent free-Lagrange hydrodynamics code has been multitasked and autotasked on a CRAY X-MP/416. The multitasking was done by using the Los Alamos Multitasking Control Library, which is a superset of the CRAY multitasking library. Autotasking is done by using constructs which are only comment cards if the source code is not run through a preprocessor. The three-dimensional algorithm has presented a number of problems that simpler algorithms, such as those for one-dimensional hydrodynamics, did not exhibit. Problems in converting the serial code, originally written for a CRAY-1, to a multitasking code are discussed. Autotasking of a rewritten version of the code is discussed. Timing results for subroutines and hot spots in the serial code are presented and suggestions for additional tools and debugging aids are given. Theoretical speedup results obtained from Amdahl's law and actual speedup results obtained on a dedicated machine are presented. Suggestions for designing large parallel codes are given.
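
    Both this entry and the companion entry that follows quote theoretical speedups from Amdahl's law, S(N) = 1 / ((1 - p) + p/N) for a parallel fraction p on N processors; a throwaway check of the formula is sketched below.

        def amdahl_speedup(p, n):
            """Amdahl's law: speedup on n processors when a fraction p of the work parallelizes."""
            return 1.0 / ((1.0 - p) + p / n)

        # a code that is 90% parallel reaches only about 3.1x on 4 processors
        for n in (1, 2, 4, 8):
            print(n, round(amdahl_speedup(0.90, n), 2))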

  20. Parallel processing a real code: A case history

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandell, D.A.; Trease, H.E.

    1988-01-01

    A three-dimensional, time-dependent Free-Lagrange hydrodynamics code has been multitasked and autotasked on a Cray X-MP/416. The multitasking was done by using the Los Alamos Multitasking Control Library, which is a superset of the Cray multitasking library. Autotasking is done by using constructs which are only comment cards if the source code is not run through a preprocessor. The 3-D algorithm has presented a number of problems that simpler algorithms, such as 1-D hydrodynamics, did not exhibit. Problems in converting the serial code, originally written for a Cray 1, to a multitasking code are discussed. Autotasking of a rewritten version of the code is discussed. Timing results for subroutines and hot spots in the serial code are presented and suggestions for additional tools and debugging aids are given. Theoretical speedup results obtained from Amdahl's law and actual speedup results obtained on a dedicated machine are presented. Suggestions for designing large parallel codes are given. 8 refs., 13 figs.

  1. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.

  2. The inverse of winnowing: a FORTRAN subroutine and discussion of unwinnowing discrete data

    USGS Publications Warehouse

    Bracken, Robert E.

    2004-01-01

    This report describes an unwinnowing algorithm that utilizes a discrete Fourier transform, and a resulting Fortran subroutine that winnows or unwinnows a 1-dimensional stream of discrete data; the source code is included. The unwinnowing algorithm effectively increases (by integral factors) the number of available data points while maintaining the original frequency spectrum of a data stream. This is useful when an increased data density is required together with higher-order derivatives that honor the original data.
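
    One standard way to realize this kind of unwinnowing is Fourier-domain zero padding: transform the series, append zeros at the unused high frequencies, and inverse-transform to an integer multiple of the original length, which interpolates new samples without changing the original spectrum. The NumPy sketch below illustrates that generic approach; it is not the report's Fortran subroutine.

        import numpy as np

        def unwinnow(x, factor):
            """Interpolate a real series to factor*len(x) points by zero-padding its FFT."""
            n = len(x)
            X = np.fft.rfft(x)
            X_pad = np.zeros(factor * n // 2 + 1, dtype=complex)
            X_pad[: len(X)] = X
            return np.fft.irfft(X_pad, n=factor * n) * factor   # rescale to keep amplitudes

        t = np.linspace(0.0, 1.0, 16, endpoint=False)
        x = np.sin(2.0 * np.pi * 3.0 * t)
        y = unwinnow(x, 4)                       # 64 points tracing the same 3-cycle sinusoid
        print(len(y), np.allclose(y[::4], x))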

  3. Splitting of 2-Digit SIC Codes at 3M's Chemolite Plant

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  4. Perceptually controlled doping for audio source separation

    NASA Astrophysics Data System (ADS)

    Mahé, Gaël; Nadalin, Everton Z.; Suyama, Ricardo; Romano, João MT

    2014-12-01

    The separation of an underdetermined audio mixture can be performed through sparse component analysis (SCA), which however relies on the strong hypothesis that source signals are sparse in some domain. To overcome this difficulty in the case where the original sources are available before the mixing process, informed source separation (ISS) embeds in the mixture a watermark whose information can help a later separation. Though powerful, this technique is generally specific to a particular mixing setup and may be compromised by an additional bitrate compression stage. Thus, instead of watermarking, we propose a `doping' method that makes the time-frequency representation of each source more sparse while preserving its audio quality. This method is based on an iterative decrease of the distance between the distribution of the signal and a target sparse distribution, under a perceptual constraint. We aim to show that the proposed approach is robust to audio coding and that the use of the sparsified signals improves the source separation in comparison with the original sources. In this work, the analysis is restricted to instantaneous mixtures and focuses on voice sources.

  5. Spatial application of WEPS for estimating wind erosion in the Pacific Northwest

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) is used to simulate soil erosion on croplands and was originally designed to run field scale simulations. This research is an extension of the WEPS model to run on multiple fields (grids) covering a larger region. We modified the WEPS source code to allow it...

  6. Space communication system for compressed data with a concatenated Reed-Solomon-Viterbi coding channel

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E. E. (Inventor)

    1976-01-01

    A space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.

  7. Technical Support Document for Version 3.9.0 of the COMcheck Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan

    2011-09-01

    COMcheck provides an optional way to demonstrate compliance with commercial and high-rise residential building energy codes. Commercial buildings include all use groups except single family and multifamily not over three stories in height. COMcheck was originally based on ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989) requirements and is intended for use with various codes based on Standard 90.1, including the Codification of ASHRAE/IES Standard 90.1-1989 (90.1-1989 Code) (ASHRAE 1989a, 1993b) and ASHRAE/IESNA Standard 90.1-1999 (Standard 90.1-1999). This includes jurisdictions that have adopted the 90.1-1989 Code, Standard 90.1-1989, Standard 90.1-1999, or their own code based on one of these. We view Standard 90.1-1989 and the 90.1-1989 Code as having equivalent technical content and have used both as source documents in developing COMcheck. This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards. Beginning with COMcheck version 3.8.0, support for 90.1-1989, 90.1-1999, and the 1998 IECC is no longer included, but those sections remain in this document for reference purposes.

  8. Technical Support Document for Version 3.9.1 of the COMcheck Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan

    2012-09-01

    COMcheck provides an optional way to demonstrate compliance with commercial and high-rise residential building energy codes. Commercial buildings include all use groups except single family and multifamily not over three stories in height. COMcheck was originally based on ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989) requirements and is intended for use with various codes based on Standard 90.1, including the Codification of ASHRAE/IES Standard 90.1-1989 (90.1-1989 Code) (ASHRAE 1989a, 1993b) and ASHRAE/IESNA Standard 90.1-1999 (Standard 90.1-1999). This includes jurisdictions that have adopted the 90.1-1989 Code, Standard 90.1-1989, Standard 90.1-1999, or their own code based on one of these. We view Standard 90.1-1989 and the 90.1-1989 Code as having equivalent technical content and have used both as source documents in developing COMcheck. This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards. Beginning with COMcheck version 3.8.0, support for 90.1-1989, 90.1-1999, and the 1998 IECC, and beginning with version 3.9.0, support for the 2000 and 2001 IECC, is no longer included, but those sections remain in this document for reference purposes.

  9. CTF Theory Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avramova, Maria N.; Salko, Robert K.

    Coolant-Boiling in Rod Arrays-Two Fluids (COBRA-TF) is a thermal/hydraulic (T/H) simulation code designed for light water reactor (LWR) vessel analysis. It uses a two-fluid, three-field (i.e. fluid film, fluid drops, and vapor) modeling approach. Both sub-channel and 3D Cartesian forms of 9 conservation equations are available for LWR modeling. The code was originally developed by Pacific Northwest Laboratory in 1980 and had been used and modified by several institutions over the last few decades. COBRA-TF also found use at the Pennsylvania State University (PSU) by the Reactor Dynamics and Fuel Management Group (RDFMG) and has been improved, updated, and subsequently re-branded as CTF. As part of the improvement process, it was necessary to generate sufficient documentation for the open-source code, which had lacked such material upon being adopted by RDFMG. This document serves mainly as a theory manual for CTF, detailing the many two-phase heat transfer, drag, and important accident scenario models contained in the code as well as the numerical solution process utilized. Coding of the models is also discussed, all with consideration for updates that have been made when transitioning from COBRA-TF to CTF. Further documentation outside of this manual is also available at RDFMG, focusing on code input deck generation and source code global variable and module listings.

  10. The Clawpack Community of Codes

    NASA Astrophysics Data System (ADS)

    Mandli, K. T.; LeVeque, R. J.; Ketcheson, D.; Ahmadia, A. J.

    2014-12-01

    Clawpack, the Conservation Laws Package, has long been one of the standards for solving hyperbolic conservation laws but over the years has extended well beyond this role. Today a community of open-source codes have been developed that address a multitude of different needs including non-conservative balance laws, high-order accurate methods, and parallelism while remaining extensible and easy to use, largely by the judicious use of Python and the original Fortran codes that it wraps. This talk will present some of the recent developments in projects under the Clawpack umbrella, notably the GeoClaw and PyClaw projects. GeoClaw was originally developed as a tool for simulating tsunamis using adaptive mesh refinement but has since encompassed a large number of other geophysically relevant flows including storm surge and debris-flows. PyClaw originated as a Python version of the original Clawpack algorithms but has since been both a testing ground for new algorithmic advances in the Clawpack framework but also an easily extensible framework for solving hyperbolic balance laws. Some of these extensions include the addition of WENO high-order methods, massively parallel capabilities, and adaptive mesh refinement technologies, made possible largely by the flexibility of the Python language and community libraries such as NumPy and PETSc. Because of the tight integration with Python tecnologies, both packages have benefited also from the focus on reproducibility in the Python community, notably IPython notebooks.

  11. BOREAS Elevation Contours over the NSA and SSA in ARC/INFO Generate Format

    NASA Technical Reports Server (NTRS)

    Knapp, David; Nickeson, Jaime; Hall, Forrest G. (Editor)

    2000-01-01

    This data set was prepared by BORIS Staff by reformatting the original data into the ARC/INFO Generate format. The original data were received in SIF at a scale of 1:50,000. BORIS staff could not find a format document or commercial software for reading SIF; the BOREAS HYD-08 team provided some C source code that could read some of the SIF files. The data cover the BOREAS NSA and SSA. The original data were compiled from information available in the 1970s and 1980s. The data are available in ARC/INFO Generate format files.

  12. Software to model AXAF image quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees

    1993-01-01

    This draft final report describes the work performed under this delivery order from May 1992 through June 1993. The purpose of this contract was to enhance and develop an integrated optical performance modeling software for complex x-ray optical systems such as AXAF. The GRAZTRACE program developed by the MSFC Optical Systems Branch for modeling VETA-I was used as the starting baseline program. The original program was a large single file program and, therefore, could not be modified very efficiently. The original source code has been reorganized, and a 'Make Utility' has been written to update the original program. The new version of the source code consists of 36 small source files to make it easier for the code developer to manage and modify the program. A user library has also been built and a 'Makelib' utility has been furnished to update the library. With the user library, the users can easily access the GRAZTRACE source files and build a custom library. A user manual for the new version of GRAZTRACE has been compiled. The plotting capability for the 3-D point spread functions and contour plots has been provided in the GRAZTRACE using the graphics package DISPLAY. The Graphics emulator over the network has been set up for programming the graphics routine. The point spread function and the contour plot routines have also been modified to display the plot centroid, and to allow the user to specify the plot range, and the viewing angle options. A Command Mode version of GRAZTRACE has also been developed. More than 60 commands have been implemented in a Code-V like format. The functions covered in this version include data manipulation, performance evaluation, and inquiry and setting of internal parameters. The user manual for these commands has been formatted as in Code-V, showing the command syntax, synopsis, and options. An interactive on-line help system for the command mode has also been accomplished to allow the user to find valid commands, command syntax, and command function. A translation program has been written to convert FEA output from structural analysis to GRAZTRACE surface deformation file (.dfm file). The program can accept standard output files and list files from COSMOS/M and NASTRAN finite analysis programs. Some interactive options are also provided, such as Cartesian or cylindrical coordinate transformation, coordinate shift and scale, and axial length change. A computerized database for technical documents relating to the AXAF project has been established. Over 5000 technical documents have been entered into the master database. A user can now rapidly retrieve the desired documents relating to the AXAF project. The summary of the work performed under this contract is shown.

  13. Using a source-to-source transformation to introduce multi-threading into the AliRoot framework for a parallel event reconstruction

    NASA Astrophysics Data System (ADS)

    Lohn, Stefan B.; Dong, Xin; Carminati, Federico

    2012-12-01

    Chip-Multiprocessors are going to support massive parallelism by many additional physical and logical cores. Performance improvements can no longer be obtained by increasing clock frequency because the technical limits are almost reached. Instead, parallel execution must be used to gain performance. Resources like main memory, the cache hierarchy, bandwidth of the memory bus or links between cores and sockets are not going to be improved as fast. Hence, parallelism can only result in performance gains if the memory usage is optimized and the communication between threads is minimized. Besides, concurrent programming has become a domain for experts. Implementing multi-threading is error-prone and labor-intensive. A full reimplementation of the whole AliRoot source-code is unaffordable. This paper describes the effort to evaluate the adaptation of AliRoot to the needs of multi-threading and to provide the capability of parallel processing by using a semi-automatic source-to-source transformation to address the problems as described before and to provide a straightforward way of parallelization with almost no interference between threads. This makes the approach simple and reduces the required manual changes in the code. In a first step, unconditional thread-safety will be introduced to bring the original sequential and thread-unaware source-code into the position of utilizing multi-threading. Afterwards, further investigation is needed to identify candidate classes that are worth sharing amongst threads. Then in a second step, the transformation has to change the code to share these classes and finally to verify if there are any more invalid interferences between threads.

  14. Clarifications on Secondary Emissions as Defined in the Code of Federal Regulations (CFR)

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  15. Encrypted optical storage with wavelength-key and random phase codes.

    PubMed

    Matoba, O; Javidi, B

    1999-11-10

    An encrypted optical memory system that uses a wavelength code as well as input and Fourier-plane random phase codes is proposed. Original data are illuminated by a coherent light source with a specified wavelength and are then encrypted with two random phase codes before being stored holographically in a photorefractive material. Successful decryption requires the use of a readout beam with the same wavelength as that used in the recording, in addition to the correct phase key in the Fourier plane. The wavelength selectivity of the proposed system is evaluated numerically. We show that the number of available wavelength keys depends on the correlation length of the phase key in the Fourier plane. Preliminary experiments of encryption and decryption of optical memory in a LiNbO(3):Fe photorefractive crystal are demonstrated.
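
    Setting aside the wavelength key and the photorefractive recording, the double-random-phase part of such a scheme can be mimicked numerically: multiply the data by one random phase, Fourier transform, multiply by a second random phase, and inverse transform; decryption undoes the Fourier-plane phase. The NumPy sketch below is only a textbook toy of that principle.

        import numpy as np

        rng = np.random.default_rng(1)
        img = rng.random((64, 64))                           # stand-in for the original data page

        phi1 = np.exp(2j * np.pi * rng.random(img.shape))    # input-plane random phase code
        phi2 = np.exp(2j * np.pi * rng.random(img.shape))    # Fourier-plane random phase key

        encrypted = np.fft.ifft2(np.fft.fft2(img * phi1) * phi2)

        # decrypting with the correct Fourier-plane key recovers the image amplitude
        decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(phi2))
        print(np.allclose(np.abs(decrypted), img))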

  16. CMCpy: Genetic Code-Message Coevolution Models in Python

    PubMed Central

    Becich, Peter J.; Stark, Brian P.; Bhat, Harish S.; Ardell, David H.

    2013-01-01

    Code-message coevolution (CMC) models represent coevolution of a genetic code and a population of protein-coding genes (“messages”). Formally, CMC models are sets of quasispecies coupled together for fitness through a shared genetic code. Although CMC models display plausible explanations for the origin of multiple genetic code traits by natural selection, useful modern implementations of CMC models are not currently available. To meet this need we present CMCpy, an object-oriented Python API and command-line executable front-end that can reproduce all published results of CMC models. CMCpy implements multiple solvers for leading eigenpairs of quasispecies models. We also present novel analytical results that extend and generalize applications of perturbation theory to quasispecies models and pioneer the application of a homotopy method for quasispecies with non-unique maximally fit genotypes. Our results therefore facilitate the computational and analytical study of a variety of evolutionary systems. CMCpy is free open-source software available from http://pypi.python.org/pypi/CMCpy/. PMID:23532367

  17. The Visualization Toolkit (VTK): Rewriting the rendering code for modern graphics cards

    NASA Astrophysics Data System (ADS)

    Hanwell, Marcus D.; Martin, Kenneth M.; Chaudhary, Aashish; Avila, Lisa S.

    2015-09-01

    The Visualization Toolkit (VTK) is an open source, permissively licensed, cross-platform toolkit for scientific data processing, visualization, and data analysis. It is over two decades old, originally developed for a very different graphics card architecture. Modern graphics cards feature fully programmable, highly parallelized architectures with large core counts. VTK's rendering code was rewritten to take advantage of modern graphics cards, maintaining most of the toolkit's programming interfaces. This offers the opportunity to compare the performance of old and new rendering code on the same systems/cards. Significant improvements in rendering speeds and memory footprints mean that scientific data can be visualized in greater detail than ever before. The widespread use of VTK means that these improvements will reap significant benefits.

  18. Two Perspectives on the Origin of the Standard Genetic Code

    NASA Astrophysics Data System (ADS)

    Sengupta, Supratim; Aggarwal, Neha; Bandhu, Ashutosh Vishwa

    2014-12-01

    The origin of a genetic code made it possible to create ordered sequences of amino acids. In this article we provide two perspectives on code origin by carrying out simulations of code-sequence coevolution in finite populations with the aim of examining how the standard genetic code may have evolved from more primitive code(s) encoding a small number of amino acids. We determine the efficacy of the physico-chemical hypothesis of code origin in the absence and presence of horizontal gene transfer (HGT) by allowing a diverse collection of code-sequence sets to compete with each other. We find that in the absence of horizontal gene transfer, natural selection between competing codes distinguished by differences in the degree of physico-chemical optimization is unable to explain the structure of the standard genetic code. However, for certain probabilities of the horizontal transfer events, a universal code emerges having a structure that is consistent with the standard genetic code.

  19. Regular Topologies for Gigabit Wide-Area Networks: Congestion Avoidance Testbed Experiments. Volume 3

    NASA Technical Reports Server (NTRS)

    Denny, Barbara A.; McKenney, Paul E., Sr.; Lee, Danny

    1994-01-01

    This document is Volume 3 of the final technical report on the work performed by SRI International (SRI) on SRI Project 8600. The document includes source listings for all software developed by SRI under this effort. Since some of our work involved the use of ST-II and the Sun Microsystems, Inc. (Sun) High-Speed Serial Interface (HSI/S) driver, we have included some of the source developed by LBL and BBN as well. In most cases, our decision to include source developed by other contractors depended on whether it was necessary to modify the original code. If we have modified the software in any way, it is included in this document. In the case of the Traffic Generator (TG), however, we have included all the ST-II software, even though BBN performed the integration, because the ST-II software is part of the standard TG release. It is important to note that all the code developed by other contractors is in the public domain, so that all software developed under this effort can be re-created from the source included here.

  20. Application of grammar-based codes for lossless compression of digital mammograms

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; Krishnan, Srithar; Ma, Ngok-Wah

    2006-01-01

    A newly developed grammar-based lossless source coding theory and its implementation was proposed in 1999 and 2000, respectively, by Yang and Kieffer. The code first transforms the original data sequence into an irreducible context-free grammar, which is then compressed using arithmetic coding. In the study of grammar-based coding for mammography applications, we encountered two issues: processing time and limited number of single-character grammar G variables. For the first issue, we discover a feature that can simplify the matching subsequence search in the irreducible grammar transform process. Using this discovery, an extended grammar code technique is proposed and the processing time of the grammar code can be significantly reduced. For the second issue, we propose to use double-character symbols to increase the number of grammar variables. Under the condition that all the G variables have the same probability of being used, our analysis shows that the double- and single-character approaches have the same compression rates. By using the methods proposed, we show that the grammar code can outperform three other schemes: Lempel-Ziv-Welch (LZW), arithmetic, and Huffman on compression ratio, and has similar error tolerance capabilities as LZW coding under similar circumstances.

  1. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
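
    A tiny worked instance of the idea, assuming the (7,4) Hamming code as the error-correcting code: a sparse 7-bit source block is compressed to its 3-bit syndrome s = Hx (mod 2) and decompressed by picking the lowest-weight pattern with that syndrome, which is exact whenever the block contains at most one 1. The Python sketch below is illustrative only.

        import numpy as np

        # Parity-check matrix of the (7,4) Hamming code: column j is the binary expansion of j+1.
        H = np.array([[int(b) for b in format(c, "03b")] for c in range(1, 8)]).T   # shape (3, 7)

        def compress(x):
            """Syndrome of the source block: 3 bits for a 7-bit block."""
            return H @ x % 2

        def decompress(s):
            """Lowest-weight block with syndrome s: all zeros, or a single 1."""
            x = np.zeros(7, dtype=int)
            col = int("".join(map(str, s)), 2)               # read the syndrome as a column index
            if col:
                x[col - 1] = 1
            return x

        x = np.array([0, 0, 0, 0, 1, 0, 0])                  # sparse "error pattern" source block
        s = compress(x)
        print(s, np.array_equal(decompress(s), x))           # True: blocks of weight <= 1 round-trip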

  2. Numerical ‘health check’ for scientific codes: the CADNA approach

    NASA Astrophysics Data System (ADS)

    Scott, N. S.; Jézéquel, F.; Denis, C.; Chesneaux, J.-M.

    2007-04-01

    Scientific computation has unavoidable approximations built into its very fabric. One important source of error that is difficult to detect and control is round-off error propagation which originates from the use of finite precision arithmetic. We propose that there is a need to perform regular numerical 'health checks' on scientific codes in order to detect the cancerous effect of round-off error propagation. This is particularly important in scientific codes that are built on legacy software. We advocate the use of the CADNA library as a suitable numerical screening tool. We present a case study to illustrate the practical use of CADNA in scientific codes that are of interest to the Computer Physics Communications readership. In doing so we hope to stimulate a greater awareness of round-off error propagation and present a practical means by which it can be analyzed and managed.
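
    A one-line illustration of the silent damage such a health check is meant to catch, a classic catastrophic cancellation in IEEE double precision, is given below; CADNA itself instruments codes with stochastic arithmetic and is far more systematic than this.

        # Both expressions equal 1.0 in exact arithmetic, but the first loses everything:
        a, b = 1.0e16, 1.0
        print((a + b) - a)   # 0.0 -- b is absorbed into a and then cancelled away
        print(b + (a - a))   # 1.0 -- same numbers, different rounding history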

  3. Underestimated prevalence of heart failure in hospital inpatients: a comparison of ICD codes and discharge letter information.

    PubMed

    Kaspar, Mathias; Fette, Georg; Güder, Gülmisal; Seidlmayer, Lea; Ertl, Maximilian; Dietrich, Georg; Greger, Helmut; Puppe, Frank; Störk, Stefan

    2018-04-17

    Heart failure is the predominant cause of hospitalization and amongst the leading causes of death in Germany. However, accurate estimates of prevalence and incidence are lacking. Reported figures originating from different information sources are compromised by factors like economic reasons or documentation quality. We implemented a clinical data warehouse that integrates various information sources (structured parameters, plain text, data extracted by natural language processing) and enables reliable approximations to the real number of heart failure patients. Performance of ICD-based diagnosis in detecting heart failure was compared across the years 2000-2015 with (a) advanced definitions based on algorithms that integrate various sources of the hospital information system, and (b) a physician-based reference standard. Applying these methods for detecting heart failure in inpatients revealed that relying on ICD codes resulted in a marked underestimation of the true prevalence of heart failure, ranging from 44% in the validation dataset to 55% (single year) and 31% (all years) in the overall analysis. Percentages changed over the years, indicating secular changes in coding practice and efficiency. Performance was markedly improved using search and permutation algorithms from the initial expert-specified query (F1 score of 81%) to the computer-optimized query (F1 score of 86%) or, alternatively, optimizing precision or sensitivity depending on the search objective. Estimating prevalence of heart failure using ICD codes as the sole data source yielded unreliable results. Diagnostic accuracy was markedly improved using dedicated search algorithms. Our approach may be transferred to other hospital information systems.
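
    For reference, the F1 scores quoted above are the harmonic mean of precision and recall; the sketch below computes one from hypothetical confusion counts.

        def f1(tp, fp, fn):
            """F1 = harmonic mean of precision and recall from raw confusion counts."""
            precision = tp / (tp + fp)
            recall = tp / (tp + fn)
            return 2 * precision * recall / (precision + recall)

        # hypothetical counts for an ICD-only query against a physician reference standard
        print(round(f1(tp=450, fp=50, fn=550), 2))   # low recall pulls F1 down to 0.6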

  4. Robust video transmission with distributed source coded auxiliary channel.

    PubMed

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.

  5. SKYSINE-II procedure: calculation of the effects of structure design on neutron, primary gamma-ray and secondary gamma-ray dose rates in air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lampley, C.M.

    1979-01-01

    An updated version of the SKYSHINE Monte Carlo procedure has been developed. The new computer code, SKYSHINE-II, provides a substantial increase in versatility in that the program possesses the ability to address three types of point-isotropic radiation sources: (1) primary gamma rays, (2) neutrons, and (3) secondary gamma rays. In addition, the emitted radiation may now be characterized by an energy emission spectrum, and the calculations make use of a new energy-dependent atmospheric transmission data base developed by Radiation Research Associates, Inc. for each of the three source types described above. Most of the computational options present in the original program have been retained in the new version. Hence, the SKYSHINE-II computer code provides a versatile and viable tool for the analysis of the radiation environment in the vicinity of a building structure containing radiation sources, situated within the confines of a nuclear power plant. This report describes many of the calculational methods employed within the SKYSHINE-II program. A brief description of the new data base is included. Utilization instructions for the program are provided for operation of the SKYSHINE-II code on the Brookhaven National Laboratory Central Scientific Computing Facility. A listing of the source decks, block data routines, and the new atmospheric transmission data base are provided in the appendices of the report.

  6. Magnetic resonance image compression using scalar-vector quantization

    NASA Astrophysics Data System (ADS)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.

  7. Parallel tiled Nussinov RNA folding loop nest generated using both dependence graph transitive closure and loop skewing.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2017-06-02

    RNA secondary structure prediction is a compute-intensive task that lies at the core of several search algorithms in bioinformatics. Fortunately, the RNA folding approaches, such as the Nussinov base pair maximization, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. Polyhedral compilation techniques have proven to be a powerful tool for optimization of dense array codes. However, classical affine loop nest transformations used with these techniques do not effectively optimize the dynamic programming codes used for RNA structure prediction. The purpose of this paper is to present a novel approach allowing for generation of a parallel tiled Nussinov RNA loop nest exposing significantly higher performance than that of known related code. This effect is achieved by improving code locality and parallelizing the calculation. In order to improve code locality, we apply our previously published technique of automatic loop nest tiling to all three loops of the Nussinov loop nest. This approach first forms original rectangular 3D tiles and then corrects them to establish their validity by means of applying the transitive closure of a dependence graph. To produce parallel code, we apply the loop skewing technique to a tiled Nussinov loop nest. The technique is implemented as a part of the publicly available polyhedral source-to-source TRACO compiler. The generated code was run on modern Intel multi-core processors and coprocessors. We present the speed-up factor of the generated Nussinov RNA parallel code and demonstrate that it is considerably faster than related codes in which only the two outer loops of the Nussinov loop nest are tiled.
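
    For context, the loop nest being tiled implements the Nussinov base-pair-maximization recurrence, N(i,j) = max(N(i+1,j-1) + pair(i,j), max over k of N(i,k) + N(k+1,j)); the plain, untiled Python version below only shows the triple loop and the dependence pattern the paper's polyhedral transformations must respect.

        def nussinov(seq):
            """Untiled O(n^3) Nussinov recurrence: maximum number of base pairs."""
            pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
            n = len(seq)
            N = [[0] * n for _ in range(n)]
            for j in range(1, n):                    # increasing span
                for i in range(j - 1, -1, -1):
                    best = N[i + 1][j - 1] + ((seq[i], seq[j]) in pairs)
                    for k in range(i, j):            # bifurcation term
                        best = max(best, N[i][k] + N[k + 1][j])
                    N[i][j] = best
            return N[0][n - 1]

        print(nussinov("GGGAAAUCC"))                 # counts Watson-Crick and G-U pairs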

  8. Investigating the origin of ultrahigh-energy cosmic rays with CRPropa

    NASA Astrophysics Data System (ADS)

    Bouchachi, Dallel; Attallah, Reda

    2016-07-01

    Ultrahigh-energy cosmic rays are the most energetic of any subatomic particles ever observed in nature. Yet, their sources and acceleration mechanisms are still unknown. To better understand the origin of these particles, we carried out extensive numerical simulations of their propagation in extragalactic space. We used the public CRPropa code, which considers all relevant particle interactions and magnetic deflections. We examined the energy spectrum, the mass composition, and the distribution of arrival directions under different scenarios. Such a study makes it possible, in particular, to properly interpret the data of modern experiments like "The Pierre Auger Observatory" and "The Telescope Array".

  9. Telemetry Standards, RCC Standard 106-17. Chapter 26. TmNSDataMessage Transfer Protocol

    DTIC Science & Technology

    2017-07-01

    Channel (RTSPDataChannel); 26.4.3 Reliability Critical (RC) Delivery Protocol ... error status code specified in RFC 2326 for "Request-URI Too Large" ... 26.4.1.5 Request Types: RTSPDataSources shall return valid ... to the following requirements. Valid TmNSDataMessages shall be delivered containing the original Packages matching the requested ...

  10. Mod3DMT and EMTF: Free Software for MT Data Processing and Inversion

    NASA Astrophysics Data System (ADS)

    Egbert, G. D.; Kelbert, A.; Meqbel, N. M.

    2017-12-01

    "ModEM" was developed at Oregon State University as a modular system for inversion of electromagnetic (EM) geophysical data (Egbert and Kelbert, 2012; Kelbert et al., 2014). Although designed for more general (frequency domain) EM applications, and originally intended as a testbed for exploring inversion search and regularization strategies, our own initial uses of ModEM were for 3-D imaging of the deep crust and upper mantle at large scales. Since 2013 we have offered a version of the source code suitable for 3D magnetotelluric (MT) inversion on an "as is, user beware" basis for free for non-commercial applications. This version, which we refer to as Mod3DMT, has since been widely used by the international MT community. Over 250 users have registered to download the source code, and at least 50 MT studies in the refereed literature, covering locations around the globe at a range of spatial scales, cite use of ModEM for 3D inversion. For over 30 years I have also made MT processing software available for free use. In this presentation, I will discuss my experience with these freely available (but perhaps not truly open-source) computer codes. Although users are allowed to make modifications to the codes (on conditions that they provide a copy of the modified version) only a handful of users have tried to make any modification, and only rarely are modifications even reported, much less provided back to the developers.

  11. Syndrome source coding and its universal generalization

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1975-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly-effective, distortionless, coding of source ensembles.
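    A minimal concrete illustration of the idea, assuming the (7,4) Hamming code as the error-correcting code: each 7-bit source block is treated as an error pattern, its 3-bit syndrome is the compressed output, and the decoder reconstructs the minimum-weight (coset-leader) pattern with that syndrome, which is exact whenever the block contains at most one 1 (likely for a sparse binary source).

```python
import numpy as np
from itertools import product

# Parity-check matrix of the (7,4) Hamming code (one standard form).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

def compress(block):
    """Treat the 7-bit source block as an 'error pattern'; its 3-bit
    syndrome is the compressed representation."""
    return (H @ block) % 2

# Decoder table: for each syndrome, the minimum-weight pattern producing it
# (the coset leader). Reconstruction is exact for blocks with <= 1 one;
# heavier blocks incur distortion.
leaders = {}
for bits in product([0, 1], repeat=7):
    e = np.array(bits)
    s = tuple((H @ e) % 2)
    if s not in leaders or e.sum() < leaders[s].sum():
        leaders[s] = e

def decompress(syndrome):
    return leaders[tuple(syndrome)]

x = np.array([0, 0, 0, 0, 1, 0, 0])      # sparse 7-bit source block
print(compress(x), decompress(compress(x)))
```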

  12. Tests of Exoplanet Atmospheric Radiative Transfer Codes

    NASA Astrophysics Data System (ADS)

    Harrington, Joseph; Challener, Ryan; DeLarme, Emerson; Cubillos, Patricio; Blecic, Jasmina; Foster, Austin; Garland, Justin

    2016-10-01

    Atmospheric radiative transfer codes are used both to predict planetary spectra and in retrieval algorithms to interpret data. Observational plans, theoretical models, and scientific results thus depend on the correctness of these calculations. Yet, the calculations are complex and the codes implementing them are often written without modern software-verification techniques. In the process of writing our own code, we became aware of several others with artifacts of unknown origin and even outright errors in their spectra. We present a series of tests to verify atmospheric radiative-transfer codes. These include: simple, single-line line lists that, when combined with delta-function abundance profiles, should produce a broadened line that can be verified easily; isothermal atmospheres that should produce analytically-verifiable blackbody spectra at the input temperatures; and model atmospheres with a range of complexities that can be compared to the output of other codes. We apply the tests to our own code, Bayesian Atmospheric Radiative Transfer (BART) and to several other codes. The test suite is open-source software. We propose this test suite as a standard for verifying current and future radiative transfer codes, analogous to the Held-Suarez test for general circulation models. This work was supported by NASA Planetary Atmospheres grant NX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.
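    The isothermal-atmosphere test mentioned above reduces to comparing a code's emergent spectrum against the analytic Planck function at the input temperature. A hedged sketch of such a check follows; `spectrum_fn` stands for whatever wrapper one writes around the radiative-transfer code under test and is not part of BART's actual interface.

```python
import numpy as np

# Physical constants (SI)
h, c, k_B = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck_wavelength(wl_m, T):
    """Blackbody spectral radiance B_lambda(T) in W m^-2 sr^-1 m^-1."""
    return (2.0 * h * c**2 / wl_m**5) / np.expm1(h * c / (wl_m * k_B * T))

def isothermal_test(spectrum_fn, T=1500.0, rtol=1e-3):
    """Pass if the code's emergent spectrum for an isothermal atmosphere at
    temperature T matches the analytic Planck function to within rtol."""
    wl = np.linspace(0.5e-6, 5.0e-6, 200)     # 0.5-5 micron grid
    return bool(np.allclose(spectrum_fn(wl), planck_wavelength(wl, T), rtol=rtol))

# Trivial check: the test passes when the "code" is the Planck function itself.
print(isothermal_test(lambda wl: planck_wavelength(wl, 1500.0)))
```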

  13. Crowd-Sourced Help with Emergent Knowledge for Optimized Formal Verification (CHEKOFV)

    DTIC Science & Technology

    2016-03-01

    up game Binary Fission, which was deployed during Phase Two of CHEKOFV. Xylem: The Code of Plants is a casual game for players using mobile ...there are the design and engineering challenges of building a game infrastructure that integrates verification technology with crowd participation...the backend processes that annotate the originating software. Allowing players to construct their own equations opened up the flexibility to receive

  14. PyPanda: a Python package for gene regulatory network reconstruction

    PubMed Central

    van IJzendoorn, David G.P.; Glass, Kimberly; Quackenbush, John; Kuijjer, Marieke L.

    2016-01-01

    Summary: PANDA (Passing Attributes between Networks for Data Assimilation) is a gene regulatory network inference method that uses message-passing to integrate multiple sources of ‘omics data. PANDA was originally coded in C++. In this application note we describe PyPanda, the Python version of PANDA. PyPanda runs considerably faster than the C++ version and includes additional features for network analysis. Availability and implementation: The open source PyPanda Python package is freely available at http://github.com/davidvi/pypanda. Contact: mkuijjer@jimmy.harvard.edu or d.g.p.van_ijzendoorn@lumc.nl PMID:27402905

  15. PyPanda: a Python package for gene regulatory network reconstruction.

    PubMed

    van IJzendoorn, David G P; Glass, Kimberly; Quackenbush, John; Kuijjer, Marieke L

    2016-11-01

    PANDA (Passing Attributes between Networks for Data Assimilation) is a gene regulatory network inference method that uses message-passing to integrate multiple sources of 'omics data. PANDA was originally coded in C++. In this application note we describe PyPanda, the Python version of PANDA. PyPanda runs considerably faster than the C++ version and includes additional features for network analysis. The open source PyPanda Python package is freely available at http://github.com/davidvi/pypanda. CONTACT: mkuijjer@jimmy.harvard.edu or d.g.p.van_ijzendoorn@lumc.nl. © The Author 2016. Published by Oxford University Press.

  16. Fragment emission from the mass-symmetric reactions 58Fe,58Ni +58Fe,58Ni at Ebeam=30 MeV/nucleon

    NASA Astrophysics Data System (ADS)

    Ramakrishnan, E.; Johnston, H.; Gimeno-Nogues, F.; Rowland, D. J.; Laforest, R.; Lui, Y.-W.; Ferro, S.; Vasal, S.; Yennello, S. J.

    1998-04-01

    The mass-symmetric reactions 58Fe,58Ni +58Fe,58Ni were studied at a beam energy of Ebeam=30 MeV/nucleon in order to investigate the isospin dependence of fragment emission. Ratios of inclusive yields of isotopic fragments from hydrogen through nitrogen were extracted as a function of laboratory angle. A moving source analysis of the data indicates that at laboratory angles around 40° the yield of intermediate mass fragments (IMF's) beyond Z=3 is predominantly from a midrapidity source. The angular dependence of the relative yields of isotopes beyond Z=3 indicates that the IMF's at more central angles originate from a source which is more neutron deficient than the source responsible for fragments emitted at forward angles. The charge distributions and kinetic energy spectra of the IMF's at various laboratory angles were well reproduced by calculations employing a quantum molecular-dynamics code followed by a statistical multifragmentation model for generating fragments. The calculations indicate that the measured IMF's originate mainly from a single source. The isotopic composition of the emitted fragments is, however, not reproduced by the same calculation. The measured isotopic and isobaric ratios indicate an emitting source that is more neutron rich in comparison to the source predicted by model calculations.

  17. Mal-Xtract: Hidden Code Extraction using Memory Analysis

    NASA Astrophysics Data System (ADS)

    Lim, Charles; Syailendra Kotualubun, Yohanes; Suryadi; Ramli, Kalamullah

    2017-01-01

    Software packers have been used effectively to hide the original code inside a binary executable, making it more difficult for existing signature-based anti-malware software to detect malicious code inside the executable. A new method based on written and rewritten memory sections is introduced to detect the exact end time of the unpacking routine and to extract the original code from a packed binary executable using memory analysis in a software-emulated environment. Our experimental results show that at least 97% of the original code could be extracted from binary executables packed with different software packers. The proposed method has also successfully extracted hidden code from recent malware family samples.

  18. The IRAS Minor Planet Survey

    NASA Technical Reports Server (NTRS)

    Tedesco, Edward F.; Veeder, Glenn J.; Fowler, John W.; Chillemi, Joseph R.

    1992-01-01

    This report documents the program and data used to identify known asteroids observed by the Infrared Astronomical Satellite (IRAS) and to compute albedos and diameters from their IRAS fluxes. It also presents listings of the results obtained. These results supplant those in the IRAS Asteroid and Comet Survey, 1986. The present version used new and improved asteroid orbital elements for 4679 numbered asteroids and 2632 additional asteroids for which at least two-opposition elements were available as of mid-1991. It employed asteroid absolute magnitudes on the International Astronomical Union system adopted in 1991. In addition, the code was modified to increase the reliability of associating asteroids with IRAS sources and rectify several shortcomings in the final data products released in 1986. Association reliability was improved by decreasing the position difference between an IRAS source and a predicted asteroid position required for an association. The shortcomings addressed included the problem of flux overestimation for low SNR sources and the systematic difference in albedos and diameters among the three wavelength bands (12, 25, and 60 micrometers). Several minor bugs in the original code were also corrected.

  19. The First ASME Code Stamped Cryomodule at SNS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howell, M P; Crofford, M T; Douglas, D L

    The first spare cryomodule for the Spallation Neutron Source (SNS) has been designed, fabricated, and tested by SNS personnel. The approach to design for this cryomodule was to hold critical design features identical to the original design such as bayonet positions, coupler positions, cold mass assembly, and overall footprint. However, this is the first SNS cryomodule that meets the pressure requirements put forth in the 10 CFR 851: Worker Safety and Health Program. The most significant difference is that Section VIII of the ASME Boiler and Pressure Vessel Code was applied to the vacuum vessel of this cryomodule. Applying the pressure code to the helium vessels within the cryomodule was considered. However, it was determined to be schedule prohibitive because it required a code case for materials that are not currently covered by the code. Good engineering practice was applied to the internal components to verify the quality and integrity of the entire cryomodule. The design of the cryomodule, fabrication effort, and cryogenic test results will be reported in this paper.

  20. Sandia National Laboratories environmental fluid dynamics code. Marine Hydrokinetic Module User's Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James, Scott Carlton; Roberts, Jesse D.

    2014-03-01

    This document describes the marine hydrokinetic (MHK) input file and subroutines for the Sandia National Laboratories Environmental Fluid Dynamics Code (SNL-EFDC), which is a combined hydrodynamic, sediment transport, and water quality model based on the Environmental Fluid Dynamics Code (EFDC) developed by John Hamrick [1], formerly sponsored by the U.S. Environmental Protection Agency, and now maintained by Tetra Tech, Inc. SNL-EFDC has been previously enhanced with the incorporation of the SEDZLJ sediment dynamics model developed by Ziegler, Lick, and Jones [2-4]. SNL-EFDC has also been upgraded to more accurately simulate algae growth with specific application to optimizing biomass in an open-channel raceway for biofuels production [5]. A detailed description of the input file containing data describing the MHK device/array is provided, along with a description of the MHK FORTRAN routine. Both a theoretical description of the MHK dynamics as incorporated into SNL-EFDC and an explanation of the source code are provided. This user manual is meant to be used in conjunction with the original EFDC [6] and sediment dynamics SNL-EFDC manuals [7]. Through this document, the authors provide information for users who wish to model the effects of an MHK device (or array of devices) on a flow system with EFDC and who also seek a clear understanding of the source code, which is available from staff in the Water Power Technologies Department at Sandia National Laboratories, Albuquerque, New Mexico.

  1. Observations and Thermochemical Calculations for Hot-Jupiter Atmospheres

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver; Cubillos, Patricio; Stemm, Madison

    2015-01-01

    I present Spitzer eclipse observations for WASP-14b and WASP-43b, an open source tool for thermochemical equilibrium calculations, and components of an open source tool for atmospheric parameter retrieval from spectroscopic data. WASP-14b is a planet that receives high irradiation from its host star, yet, although theory does not predict it, the planet hosts a thermal inversion. The WASP-43b eclipses have signal-to-noise ratios of ~25, one of the largest among exoplanets. To assess these planets' atmospheric composition and thermal structure, we developed an open-source Bayesian Atmospheric Radiative Transfer (BART) code. My dissertation tasks included developing a Thermochemical Equilibrium Abundances (TEA) code, implementing the eclipse geometry calculation in BART's radiative transfer module, and generating parameterized pressure and temperature profiles so the radiative-transfer module can be driven by the statistical module. To initialize the radiative-transfer calculation in BART, TEA calculates the equilibrium abundances of gaseous molecular species at a given temperature and pressure. It uses the Gibbs-free-energy minimization method with an iterative Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. The code is tested against the original method developed by White et al. (1958), the analytic method developed by Burrows and Sharp (1999), and the Newton-Raphson method implemented in the open-source Chemical Equilibrium with Applications (CEA) code. TEA, written in Python, is modular, documented, and available to the community via the open-source development site GitHub.com. Support for this work was provided by NASA Headquarters under the NASA Earth and Space Science Fellowship Program, grant NNX12AL83H, by NASA through an award issued by JPL/Caltech, and through the Science Mission Directorate's Planetary Atmospheres Program, grant NNX12AI69G.
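    The Gibbs-free-energy minimization that TEA performs can be sketched with a generic constrained optimizer rather than TEA's iterative Lagrangian scheme. The toy example below minimizes G/RT for a three-species H/O system subject to element-balance constraints using SciPy's SLSQP; the reference chemical potentials are placeholders, not real thermochemical data.

```python
import numpy as np
from scipy.optimize import minimize

# Species: H2, O2, H2O. Rows of A count atoms of H and O in each species;
# b holds the total moles of each element. g0_RT are standard-state chemical
# potentials divided by RT (placeholder values for illustration only).
A = np.array([[2.0, 0.0, 2.0],
              [0.0, 2.0, 1.0]])
b = np.array([2.0, 1.0])
g0_RT = np.array([0.0, 0.0, -10.0])
lnP = 0.0                                    # ln(P / 1 bar)

def gibbs(n):
    """Dimensionless total Gibbs energy G/RT of an ideal-gas mixture."""
    ntot = n.sum()
    return float(np.sum(n * (g0_RT + lnP + np.log(n / ntot))))

n0 = np.full(3, 0.3)                         # initial guess for mole numbers
res = minimize(gibbs, n0, method="SLSQP",
               bounds=[(1e-10, None)] * 3,
               constraints=[{"type": "eq", "fun": lambda n: A @ n - b}])
print(res.x)                                 # equilibrium mole numbers (mostly H2O here)
```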

  2. Using National Drug Codes and drug knowledge bases to organize prescription records from multiple sources.

    PubMed

    Simonaitis, Linas; McDonald, Clement J

    2009-10-01

    The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the in-patient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.
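    The two summary statistics reported above (volume-weighted coverage of product codes by a drug knowledge base, and the small fraction of distinct codes that accounts for half of the prescription volume) are simple to compute once prescriptions are tallied by code. A sketch with hypothetical data:

```python
from collections import Counter

def coverage_and_skew(prescriptions, dkb_codes):
    """prescriptions: iterable of product codes, one per dispensed record.
    dkb_codes: set of product codes present in a drug knowledge base.
    Returns (volume-weighted coverage, fraction of distinct codes needed
    to reach 50% of total prescription volume)."""
    volume = Counter(prescriptions)
    total = sum(volume.values())
    covered = sum(v for code, v in volume.items() if code in dkb_codes)

    running, n_codes = 0, 0
    for _, v in volume.most_common():        # codes in descending volume
        running += v
        n_codes += 1
        if running >= 0.5 * total:
            break
    return covered / total, n_codes / len(volume)

rx = ["A"] * 50 + ["B"] * 30 + ["C"] * 15 + ["D"] * 5   # hypothetical records
print(coverage_and_skew(rx, dkb_codes={"A", "B", "C"}))  # (0.95, 0.25)
```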

  3. Coding of adverse events of suicidality in clinical study reports of duloxetine for the treatment of major depressive disorder: descriptive study

    PubMed Central

    Tendal, Britta; Hróbjartsson, Asbjørn; Lundh, Andreas; Gøtzsche, Peter C

    2014-01-01

    Objective To assess the effects of coding and coding conventions on summaries and tabulations of adverse events data on suicidality within clinical study reports. Design Systematic electronic search for adverse events of suicidality in tables, narratives, and listings of adverse events in individual patients within clinical study reports. Where possible, for each event we extracted the original term reported by the investigator, the term as coded by the medical coding dictionary, medical coding dictionary used, and the patient’s trial identification number. Using the patient’s trial identification number, we attempted to reconcile data on the same event between the different formats for presenting data on adverse events within the clinical study report. Setting 9 randomised placebo controlled trials of duloxetine for major depressive disorder submitted to the European Medicines Agency for marketing approval. Data sources Clinical study reports obtained from the EMA in 2011. Results Six trials used the medical coding dictionary COSTART (Coding Symbols for a Thesaurus of Adverse Reaction Terms) and three used MedDRA (Medical Dictionary for Regulatory Activities). Suicides were clearly identifiable in all formats of adverse event data in clinical study reports. Suicide attempts presented in tables included both definitive and provisional diagnoses. Suicidal ideation and preparatory behaviour were obscured in some tables owing to the lack of specificity of the medical coding dictionary, especially COSTART. Furthermore, we found one event of suicidal ideation described in narrative text that was absent from tables and adverse event listings of individual patients. The reason for this is unclear, but may be due to the coding conventions used. Conclusion Data on adverse events in tables in clinical study reports may not accurately represent the underlying patient data because of the medical dictionaries and coding conventions used. In clinical study reports, the listings of adverse events for individual patients and narratives of adverse events can provide additional information, including original investigator reported adverse event terms, which can enable a more accurate estimate of harms. PMID:24899651

  4. Practices in Code Discoverability: Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, A.; Teuben, P.; Nemiroff, R. J.; Shamir, L.

    2012-09-01

    Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysics source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now has over 340 codes in it and continues to grow. In 2011, the ASCL has on average added 19 codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This paper provides the history and description of the ASCL. It lists the requirements for including codes, examines the advantages of the ASCL, and outlines some of its future plans.

  5. Big Software for SmallSats: Adapting CFS to CubeSat Missions

    NASA Technical Reports Server (NTRS)

    Cudmore, Alan P.; Crum, Gary; Sheikh, Salman; Marshall, James

    2015-01-01

    Expanding capabilities and mission objectives for SmallSats and CubeSats is driving the need for reliable, reusable, and robust flight software. While missions are becoming more complicated and the scientific goals more ambitious, the level of acceptable risk has decreased. Design challenges are further compounded by budget and schedule constraints that have not kept pace. NASA's Core Flight Software System (cFS) is an open source solution which enables teams to build flagship satellite level flight software within a CubeSat schedule and budget. NASA originally developed cFS to reduce mission and schedule risk for flagship satellite missions by increasing code reuse and reliability. The Lunar Reconnaissance Orbiter, which launched in 2009, was the first of a growing list of Class B rated missions to use cFS. Large parts of cFS are now open source, which has spurred adoption outside of NASA. This paper reports on the experiences of two teams using cFS for current CubeSat missions. The performance overheads of cFS are quantified, and the reusability of code between missions is discussed. The analysis shows that cFS is well suited to use on CubeSats and demonstrates the portability and modularity of cFS code.

  6. De Novo Origin of Human Protein-Coding Genes

    PubMed Central

    Wu, Dong-Dong; Irwin, David M.; Zhang, Ya-Ping

    2011-01-01

    The de novo origin of a new protein-coding gene from non-coding DNA is considered to be a very rare occurrence in genomes. Here we identify 60 new protein-coding genes that originated de novo on the human lineage since divergence from the chimpanzee. The functionality of these genes is supported by both transcriptional and proteomic evidence. RNA–seq data indicate that these genes have their highest expression levels in the cerebral cortex and testes, which might suggest that these genes contribute to phenotypic traits that are unique to humans, such as improved cognitive ability. Our results are inconsistent with the traditional view that the de novo origin of new genes is very rare, thus there should be greater appreciation of the importance of the de novo origination of genes. PMID:22102831

  7. Facilitating Internet-Scale Code Retrieval

    ERIC Educational Resources Information Center

    Bajracharya, Sushil Krishna

    2010-01-01

    Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…

  8. SORTA: a system for ontology-based re-coding and technical annotation of biomedical phenotype data.

    PubMed

    Pang, Chao; Sollie, Annet; Sijtsma, Anna; Hendriksen, Dennis; Charbon, Bart; de Haan, Mark; de Boer, Tommy; Kelpin, Fleur; Jetten, Jonathan; van der Velde, Joeri K; Smidt, Nynke; Sijmons, Rolf; Hillege, Hans; Swertz, Morris A

    2015-01-01

    There is an urgent need to standardize the semantics of biomedical data values, such as phenotypes, to enable comparative and integrative analyses. However, it is unlikely that all studies will use the same data collection protocols. As a result, retrospective standardization is often required, which involves matching of original (unstructured or locally coded) data to widely used coding or ontology systems such as SNOMED CT (clinical terms), ICD-10 (International Classification of Disease) and HPO (Human Phenotype Ontology). This data curation process is usually a time-consuming process performed by a human expert. To help mechanize this process, we have developed SORTA, a computer-aided system for rapidly encoding free text or locally coded values to a formal coding system or ontology. SORTA matches original data values (uploaded in semicolon delimited format) to a target coding system (uploaded in Excel spreadsheet, OWL ontology web language or OBO open biomedical ontologies format). It then semi-automatically shortlists candidate codes for each data value using Lucene and n-gram based matching algorithms, and can also learn from matches chosen by human experts. We evaluated SORTA's applicability in two use cases. For the LifeLines biobank, we used SORTA to recode 90 000 free text values (including 5211 unique values) about physical exercise to MET (Metabolic Equivalent of Task) codes. For the CINEAS clinical symptom coding system, we used SORTA to map to HPO, enriching HPO when necessary (315 terms matched so far). Out of the shortlists at rank 1, we found a precision/recall of 0.97/0.98 in LifeLines and of 0.58/0.45 in CINEAS. More importantly, users found the tool both a major time saver and a quality improvement because SORTA reduced the chances of human mistakes. Thus, SORTA can dramatically ease data (re)coding tasks and we believe it will prove useful for many more projects. Database URL: http://molgenis.org/sorta or as an open source download from http://www.molgenis.org/wiki/SORTA. © The Author(s) 2015. Published by Oxford University Press.
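    The n-gram side of the matching can be illustrated with a Dice coefficient over character bigrams; this is only a stand-in for SORTA's combined Lucene and n-gram scoring, and the ontology labels below are hypothetical.

```python
def bigrams(text):
    t = text.lower()
    return {t[i:i + 2] for i in range(len(t) - 1)}

def dice(a, b):
    """Dice coefficient over character bigram sets: a simple proxy for the
    n-gram component of the matching."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 0.0
    return 2.0 * len(ba & bb) / (len(ba) + len(bb))

def shortlist(value, ontology_terms, k=5):
    """Rank candidate terms for one free-text data value."""
    return sorted(ontology_terms, key=lambda t: dice(value, t), reverse=True)[:k]

print(shortlist("jogging 30 min", ["Jogging", "Rowing", "Walking, slow pace"]))
```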

  9. SORTA: a system for ontology-based re-coding and technical annotation of biomedical phenotype data

    PubMed Central

    Pang, Chao; Sollie, Annet; Sijtsma, Anna; Hendriksen, Dennis; Charbon, Bart; de Haan, Mark; de Boer, Tommy; Kelpin, Fleur; Jetten, Jonathan; van der Velde, Joeri K.; Smidt, Nynke; Sijmons, Rolf; Hillege, Hans; Swertz, Morris A.

    2015-01-01

    There is an urgent need to standardize the semantics of biomedical data values, such as phenotypes, to enable comparative and integrative analyses. However, it is unlikely that all studies will use the same data collection protocols. As a result, retrospective standardization is often required, which involves matching of original (unstructured or locally coded) data to widely used coding or ontology systems such as SNOMED CT (clinical terms), ICD-10 (International Classification of Disease) and HPO (Human Phenotype Ontology). This data curation process is usually a time-consuming process performed by a human expert. To help mechanize this process, we have developed SORTA, a computer-aided system for rapidly encoding free text or locally coded values to a formal coding system or ontology. SORTA matches original data values (uploaded in semicolon delimited format) to a target coding system (uploaded in Excel spreadsheet, OWL ontology web language or OBO open biomedical ontologies format). It then semi-automatically shortlists candidate codes for each data value using Lucene and n-gram based matching algorithms, and can also learn from matches chosen by human experts. We evaluated SORTA’s applicability in two use cases. For the LifeLines biobank, we used SORTA to recode 90 000 free text values (including 5211 unique values) about physical exercise to MET (Metabolic Equivalent of Task) codes. For the CINEAS clinical symptom coding system, we used SORTA to map to HPO, enriching HPO when necessary (315 terms matched so far). Out of the shortlists at rank 1, we found a precision/recall of 0.97/0.98 in LifeLines and of 0.58/0.45 in CINEAS. More importantly, users found the tool both a major time saver and a quality improvement because SORTA reduced the chances of human mistakes. Thus, SORTA can dramatically ease data (re)coding tasks and we believe it will prove useful for many more projects. Database URL: http://molgenis.org/sorta or as an open source download from http://www.molgenis.org/wiki/SORTA PMID:26385205

  10. Joint source-channel coding for motion-compensated DCT-based SNR scalable video.

    PubMed

    Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K

    2002-01-01

    In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
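    The rate-allocation step can be sketched as an exhaustive search over per-layer (source rate, channel rate) options that minimizes total expected distortion under a total bit-rate budget. The option tables below are placeholders, not the measured universal rate-distortion characteristics used in the paper.

```python
from itertools import product

# Hypothetical per-layer options: (source_rate_kbps, channel_rate_kbps, expected_distortion).
layer_options = [
    [(64, 32, 40.0), (96, 48, 30.0), (128, 64, 24.0)],   # base layer
    [(32, 16, 12.0), (64, 32, 7.0),  (96, 48, 5.0)],     # enhancement layer
]

def allocate(budget_kbps):
    """Pick one (Rs, Rc) option per layer minimizing total expected
    distortion subject to a total (source + channel) rate budget."""
    best = None
    for combo in product(*layer_options):
        rate = sum(rs + rc for rs, rc, _ in combo)
        dist = sum(d for _, _, d in combo)
        if rate <= budget_kbps and (best is None or dist < best[0]):
            best = (dist, combo)
    return best

print(allocate(budget_kbps=256))   # lowest-distortion feasible combination
```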

  11. Numerical Relativity for Space-Based Gravitational Wave Astronomy

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2011-01-01

    In the next decade, gravitational wave instruments in space may provide high-precision measurements of gravitational-wave signals from strong sources, such as black holes. Currently variations on the original Laser Interferometer Space Antenna mission concepts are under study in the hope of reducing costs. Even the observations of a reduced instrument may place strong demands on numerical relativity capabilities. Possible advances in the coming years may fuel a new generation of codes ready to confront these challenges.

  12. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation.

    PubMed

    Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J

    2013-04-21

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan-scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.
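    The table-movement idea (isocenter coordinates advancing with cumulative gantry angle) reduces to a one-line relation: the table travels pitch times the total collimation per 360-degree rotation. A simplified sketch, not the modified DOSXYZnrc source:

```python
import numpy as np

def isocenter_z(beam_angle_deg, z_start, pitch, n_rows, slice_thickness_mm):
    """Isocenter (table) position versus cumulative gantry angle for a
    spiral scan: the table advances pitch * total collimation per rotation."""
    travel_per_rotation = pitch * n_rows * slice_thickness_mm
    return z_start + travel_per_rotation * (beam_angle_deg / 360.0)

angles = np.arange(0.0, 1080.0, 10.0)     # three rotations of beam angles
z = isocenter_z(angles, z_start=0.0, pitch=1.0, n_rows=16, slice_thickness_mm=1.25)
print(z[0], z[-1])                        # 0 mm at the start, ~59 mm near the end
```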

  13. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation

    NASA Astrophysics Data System (ADS)

    Kim, Sangroh; Yoshizumi, Terry T.; Yin, Fang-Fang; Chetty, Indrin J.

    2013-04-01

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan—scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the ‘ISource = 8: Phase-Space Source Incident from Multiple Directions’ in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.

  14. Toward an architecture of attachment disorganization: John Bowlby's published and unpublished reflections.

    PubMed

    Solomon, Judith; Duschinsky, Robbie; Bakkum, Lianne; Schuengel, Carlo

    2017-10-01

    This article examines the construct of disorganized attachment originally proposed by Main and Solomon, developing some new conjectures based on inspiration from a largely unknown source: John Bowlby's unpublished texts, housed at the Wellcome Trust Library Archive in London (with permission from the Bowlby family). We explore Bowlby's discussions of disorganized attachment, which he understood from the perspective of ethological theories of conflict behavior. Bowlby's reflections regarding differences among the behaviors used to code disorganized attachment will be used to explore distinctions that may underlie the structure of the current coding system. The article closes with an emphasis on the importance Bowlby placed on Popper's distinction between the context of discovery and the context of justification in developmental science.

  15. Validation of a modified table to map the 1998 Abbreviated Injury Scale to the 2008 scale and the use of adjusted severities.

    PubMed

    Tohira, Hideo; Jacobs, Ian; Mountain, David; Gibson, Nick; Yeo, Allen; Ueno, Masato; Watanabe, Hiroaki

    2011-12-01

    The Abbreviated Injury Scale 2008 (AIS 2008) is the most recent injury coding system. A mapping table from the previous version, AIS 98, to AIS 2008 is available. However, AIS 98 codes that are unmappable to AIS 2008 codes exist in this table. Furthermore, some AIS 98 codes can be mapped to multiple candidate AIS 2008 codes with different severities. We aimed to modify the original table to adjust the severities and to validate these changes. We modified the original table by adding links from unmappable AIS 98 codes to AIS 2008 codes. We applied the original table and our modified table to AIS 98 codes for major trauma patients. For AIS 98 codes that mapped to multiple candidate AIS 2008 codes with different severities, we assigned the weighted average of the candidates' severities as an adjusted severity. The proportions of cases whose injury severity scores (ISSs) could be computed were compared. We also compared the agreement of the ISS and New ISS (NISS) between manually determined AIS 2008 codes (MAN) and mapped codes by using our table (MAP) with unadjusted or adjusted severities. All and 72.3% of cases had their ISSs computed by our modified table and the original table, respectively. The agreement between MAN and MAP with respect to the ISS and NISS was substantial (intraclass correlation coefficient = 0.939 for ISS and 0.943 for NISS). Using adjusted severities, the agreements of the ISS and NISS improved to 0.953 (p = 0.11) and 0.963 (p = 0.007), respectively. Our modified mapping table seems to allow more ISSs to be computed than the original table. Severity scores exhibited substantial agreement between MAN and MAP. The use of adjusted severities improved these agreements further.
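    Two pieces of the procedure are easy to make concrete: the adjusted severity assigned when one AIS 98 code maps to several AIS 2008 candidates, and the ISS itself (sum of squares of the highest AIS severity in each of the three most severely injured body regions, with ISS set to 75 when any injury is AIS 6). The weights used in the paper's weighted average are not given here, so the sketch defaults to equal weights.

```python
def adjusted_severity(candidate_severities, weights=None):
    """Weighted average of the severities of multiple candidate AIS 2008
    codes for one AIS 98 code (equal weights used as a stand-in)."""
    if weights is None:
        weights = [1.0] * len(candidate_severities)
    return sum(w * s for w, s in zip(weights, candidate_severities)) / sum(weights)

def iss(injuries):
    """Injury Severity Score from (body_region, AIS severity) pairs."""
    if any(sev == 6 for _, sev in injuries):
        return 75
    worst = {}
    for region, sev in injuries:
        worst[region] = max(worst.get(region, 0), sev)
    top3 = sorted(worst.values(), reverse=True)[:3]
    return sum(s * s for s in top3)

print(adjusted_severity([2, 3]))                                        # 2.5
print(iss([("head", 4), ("chest", 3), ("chest", 2), ("abdomen", 2)]))   # 16 + 9 + 4 = 29
```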

  16. Exploration of ICD-9-CM Coding of Chronic Disease within the Elixhauser Comorbidity Measure in Patients with Chronic Heart Failure

    PubMed Central

    Garvin, Jennifer Hornung; Redd, Andrew; Bolton, Dan; Graham, Pauline; Roche, Dominic; Groeneveld, Peter; Leecaster, Molly; Shen, Shuying; Weiner, Mark G.

    2013-01-01

    Introduction International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes capture comorbidities that can be used to risk adjust nonrandom patient groups. We explored the accuracy of capturing comorbidities associated with one risk adjustment method, the Elixhauser Comorbidity Measure (ECM), in patients with chronic heart failure (CHF) at one Veterans Affairs (VA) medical center. We explored potential reasons for the differences found between the original codes assigned and conditions found through retrospective review. Methods This descriptive, retrospective study used a cohort of patients discharged with a principal diagnosis coded as CHF from one VA medical center in 2003. One admission per patient was used in the study; with multiple admissions, only the first admission was analyzed. We compared the assignment of original codes assigned to conditions found in a retrospective, manual review of the medical record conducted by an investigator with coding expertise as well as by physicians. Members of the team experienced with assigning ICD-9-CM codes and VA coding processes developed themes related to systemic reasons why chronic conditions were not coded in VA records using applied thematic techniques. Results In the 181-patient cohort, 388 comorbid conditions were identified; 305 of these were chronic conditions, originally coded at the time of discharge with an average of 1.7 comorbidities related to the ECM per patient. The review by an investigator with coding expertise revealed a total of 937 comorbidities resulting in 618 chronic comorbid conditions with an average of 3.4 per patient; physician review found 872 total comorbidities with 562 chronic conditions (average 3.1 per patient). The agreement between the original and the retrospective coding review was 88 percent. The kappa statistic for the original and the retrospective coding review was 0.375 with a 95 percent confidence interval (CI) of 0.352 to 0.398. The kappa statistic for the retrospective coding review and physician review was 0.849 (CI, 0.823–0.875). The kappa statistic for the original coding and the physician review was 0.340 (CI, 0.316–0.364). Several systemic factors were identified, including familiarity with inpatient VA and non-VA guidelines, the quality of documentation, and operational requirements to complete the coding process within short time frames and to identify the reasons for movement within a given facility. Conclusion Comorbidities within the ECM representing chronic conditions were significantly underrepresented in the original code assignment. Contributing factors potentially include prioritization of codes related to acute conditions over chronic conditions; coders’ professional training, educational level, and experience; and the limited number of codes allowed in initial coding software. This study highlights the need to evaluate systemic causes of underrepresentation of chronic conditions to improve the accuracy of risk adjustment used for health services research, resource allocation, and performance measurement. PMID:24159270
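    The kappa statistics quoted above measure chance-corrected agreement between two coders: kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is agreement expected by chance. A small sketch with made-up present/absent coding vectors for a single comorbidity:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two coders (e.g. original coding vs. retrospective
    review of whether a comorbidity is present)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (p_observed - p_expected) / (1.0 - p_expected)

original = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # hypothetical original coding
review   = [1, 1, 0, 1, 1, 0, 0, 1, 1, 0]   # hypothetical retrospective review
print(round(cohens_kappa(original, review), 3))
```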

  17. The Astrophysics Source Code Library: An Update

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Nemiroff, R. J.; Shamir, L.; Teuben, P. J.

    2012-01-01

    The Astrophysics Source Code Library (ASCL), founded in 1999, takes an active approach to sharing astrophysical source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL moved to a new location in 2010, and has over 300 codes in it and continues to grow. In 2011, the ASCL (http://asterisk.apod.com/viewforum.php?f=35) has on average added 19 new codes per month; we encourage scientists to submit their codes for inclusion. An advisory committee has been established to provide input and guide the development and expansion of its new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This presentation covers the history of the ASCL and examines the current state and benefits of the ASCL, the means of and requirements for including codes, and outlines its future plans.

  18. Python-Assisted MODFLOW Application and Code Development

    NASA Astrophysics Data System (ADS)

    Langevin, C.

    2013-12-01

    The U.S. Geological Survey (USGS) has a long history of developing and maintaining free, open-source software for hydrological investigations. The MODFLOW program is one of the most popular hydrologic simulation programs released by the USGS, and it is considered to be the most widely used groundwater flow simulation code. MODFLOW was written using a modular design and a procedural FORTRAN style, which resulted in code that could be understood, modified, and enhanced by many hydrologists. The code is fast, and because it uses standard FORTRAN it can be run on most operating systems. Most MODFLOW users rely on proprietary graphical user interfaces for constructing models and viewing model results. Some recent efforts, however, have focused on construction of MODFLOW models using open-source Python scripts. Customizable Python packages, such as FloPy (https://code.google.com/p/flopy), can be used to generate input files, read simulation results, and visualize results in two and three dimensions. Automating this sequence of steps leads to models that can be reproduced directly from original data and rediscretized in space and time. Python is also being used in the development and testing of new MODFLOW functionality. New packages and numerical formulations can be quickly prototyped and tested first with Python programs before implementation in MODFLOW. This is made possible by the flexible object-oriented design capabilities available in Python, the ability to call FORTRAN code from Python, and the ease with which linear systems of equations can be solved using SciPy, for example. Once new features are added to MODFLOW, Python can then be used to automate comprehensive regression testing and ensure reliability and accuracy of new versions prior to release.
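    A minimal FloPy-style script gives the flavor of scripted model construction described above. The sketch below uses the classic flopy.modflow interface and assumes a MODFLOW-2005 executable named mf2005 is on the PATH; file names follow FloPy defaults, and the grid, properties, and well are invented for illustration.

```python
import numpy as np
import flopy

# Build a tiny steady-state model entirely from script.
mf = flopy.modflow.Modflow("demo", exe_name="mf2005", model_ws="demo_ws")
dis = flopy.modflow.ModflowDis(mf, nlay=1, nrow=10, ncol=10,
                               delr=100.0, delc=100.0, top=10.0, botm=0.0)

ibound = np.ones((1, 10, 10), dtype=int)
ibound[:, :, 0] = -1                                   # constant heads along one edge
bas = flopy.modflow.ModflowBas(mf, ibound=ibound, strt=10.0)
lpf = flopy.modflow.ModflowLpf(mf, hk=10.0)            # uniform hydraulic conductivity
wel = flopy.modflow.ModflowWel(mf, stress_period_data={0: [[0, 5, 5, -100.0]]})
pcg = flopy.modflow.ModflowPcg(mf)
oc = flopy.modflow.ModflowOc(mf, stress_period_data={(0, 0): ["save head"]})

mf.write_input()                                       # reproducible input files from the script
success, _ = mf.run_model()                            # runs the external MODFLOW executable
if success:
    heads = flopy.utils.HeadFile("demo_ws/demo.hds").get_data()
    print(heads.shape)                                 # (nlay, nrow, ncol)
```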

  19. Authorship Attribution of Source Code

    ERIC Educational Resources Information Center

    Tennyson, Matthew F.

    2013-01-01

    Authorship attribution of source code is the task of deciding who wrote a program, given its source code. Applications include software forensics, plagiarism detection, and determining software ownership. A number of methods for the authorship attribution of source code have been presented in the past. A review of those existing methods is…

  20. Some pungent arguments against the physico-chemical theories of the origin of the genetic code and corroborating the coevolution theory.

    PubMed

    Di Giulio, Massimo

    2017-02-07

    Whereas it is extremely easy to prove that "if the biosynthetic relationships between amino acids were fundamental in the structuring of the genetic code, then their physico-chemical properties might also be revealed in the genetic code table"; it is, on the contrary, impossible to prove that "if the physico-chemical properties of amino acids were fundamental in the structuring of the genetic code, then the presence of the biosynthetic relationships between amino acids should not be revealed in the genetic code". And, given that in the genetic code table are mirrored both the biosynthetic relationships between amino acids and their physico-chemical properties, all this would be a test that would falsify the physico-chemical theories of the origin of the genetic code. That is to say, if the physico-chemical properties of amino acids had a fundamental role in organizing the genetic code, then we would not have duly revealed the presence - in the genetic code - of the biosynthetic relationships between amino acids, and on the contrary this has been observed. Therefore, this falsifies the physico-chemical theories of genetic code origin. Whereas, the coevolution theory of the origin of the genetic code would be corroborated by this analysis, because it would be able to give a description of evolution of the genetic code more coherent with the indisputable empirical observations that link both the biosynthetic relationships of amino acids and their physico-chemical properties to the evolutionary organization of the genetic code. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Assessing the Viability of Social Media for Disseminating Evidence-Based Nutrition Practice Guideline Through Content Analysis of Twitter Messages and Health Professional Interviews: An Observational Study

    PubMed Central

    Kenne, Deric; Wolfram, Taylor M; Abram, Jenica K; Fleming, Michael

    2016-01-01

    Background Given the high penetration of social media use, social media has been proposed as a method for the dissemination of information to health professionals and patients. This study explored the potential for social media dissemination of the Academy of Nutrition and Dietetics Evidence-Based Nutrition Practice Guideline (EBNPG) for Heart Failure (HF). Objectives The objectives were to (1) describe the existing social media content on HF, including message content, source, and target audience, and (2) describe the attitude of physicians and registered dietitian nutritionists (RDNs) who care for outpatient HF patients toward the use of social media as a method to obtain information for themselves and to share this information with patients. Methods The methods were divided into 2 parts. Part 1 involved conducting a content analysis of tweets related to HF, which were downloaded from Twitonomy and assigned codes for message content (19 codes), source (9 codes), and target audience (9 codes); code frequency was described. A comparison in the popularity of tweets (those marked as favorites or retweeted) based on applied codes was made using t tests. Part 2 involved conducting phone interviews with RDNs and physicians to describe health professionals’ attitude toward the use of social media to communicate general health information and information specifically related to the HF EBNPG. Interviews were transcribed and coded; exemplar quotes representing frequent themes are presented. Results The sample included 294 original tweets with the hashtag “#heartfailure.” The most frequent message content codes were “HF awareness” (166/294, 56.5%) and “patient support” (97/294, 33.0%). The most frequent source codes were “professional, government, patient advocacy organization, or charity” (112/277, 40.4%) and “patient or family” (105/277, 37.9%). The most frequent target audience codes were “unable to identify” (111/277, 40.1%) and “other” (55/277, 19.9%). Significant differences were found in the popularity of tweets with (mean 1, SD 1.3 favorites) or without (mean 0.7, SD 1.3 favorites), the content code being “HF research” (P=.049). Tweets with the source code “professional, government, patient advocacy organizations, or charities” were significantly more likely to be marked as a favorite and retweeted than those without this source code (mean 1.2, SD 1.4 vs mean 0.8, SD 1.2, P=.03) and (mean 1.5, SD 1.8 vs mean 0.9, SD 2.0, P=.03). Interview participants believed that social media was a useful way to gather professional information. They did not believe that social media was useful for communicating with patients due to privacy concerns and the fact that the information had to be kept general rather than be tailored for a specific patient and the belief that their patients did not use social media or technology. Conclusions Existing Twitter content related to HF comes from a combination of patients and evidence-based organizations; however, there is little nutrition content. That gap may present an opportunity for EBNPG dissemination. Health professionals use social media to gather information for themselves but are skeptical of its value when communicating with patients, particularly due to privacy concerns and misconceptions about the characteristics of social media users. PMID:27847349
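    The popularity comparisons described above are two-sample t tests on the number of favorites (or retweets) for tweets with versus without a given code. A toy sketch with SciPy is shown below (Welch's variant; the numbers are invented, not the study data):

```python
from scipy import stats

# Favorites for tweets with vs. without a given content code (toy data).
with_code = [0, 1, 2, 0, 3, 1, 0, 2]
without_code = [0, 0, 1, 0, 1, 0, 2, 0]

t, p = stats.ttest_ind(with_code, without_code, equal_var=False)  # Welch's t test
print(round(t, 2), round(p, 3))
```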

  2. Schroedinger’s code: Source code availability and transparency in astrophysics

    NASA Astrophysics Data System (ADS)

    Ryan, PW; Allen, Alice; Teuben, Peter

    2018-01-01

    Astronomers use software for their research, but how many of the codes they use are available as source code? We examined a sample of 166 papers from 2015 for clearly identified software use, then searched for source code for the software packages mentioned in these research papers. We categorized the software to indicate whether source code is available for download and whether there are restrictions to accessing it, and if source code was not available, whether some other form of the software, such as a binary, was. Over 40% of the source code for the software used in our sample was not available for download. As URLs have often been used as proxy citations for software, we also extracted URLs from one journal’s 2015 research articles, removed those from certain long-term, reliable domains, and tested the remainder to determine what percentage of these URLs were still accessible in September and October, 2017.
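    The URL-accessibility check described above amounts to probing each extracted link and recording whether it still resolves. A rough sketch (not the authors' script), using the requests library with a HEAD-then-GET fallback:

```python
import requests

def is_accessible(url, timeout=10):
    """Rough availability probe: HEAD first, falling back to GET for
    servers that reject HEAD; any 4xx/5xx or network error counts as dead."""
    try:
        r = requests.head(url, allow_redirects=True, timeout=timeout)
        if r.status_code >= 400:
            r = requests.get(url, stream=True, allow_redirects=True, timeout=timeout)
        return r.status_code < 400
    except requests.RequestException:
        return False

urls = ["https://ascl.net", "http://example.invalid/broken"]
print({u: is_accessible(u) for u in urls})
```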

  3. Summary of evidence for an anticodonic basis for the origin of the genetic code

    NASA Technical Reports Server (NTRS)

    Lacey, J. C., Jr.; Mullins, D. W., Jr.

    1981-01-01

    This article summarizes data supporting the hypothesis that the genetic code origin was based on relationships (probably affinities) between amino acids and their anticodon nucleotides. Selective activation seems to follow from selective affinity and consequently, incorporation of amino acids into peptides can also be selective. It is suggested that these selectivities in affinity and activation, coupled with the base pairing specificities, allowed the origin of the code and the process of translation.

  4. Algorithm Science to Operations for the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Visible/Infrared Imager/Radiometer Suite (VIIRS)

    NASA Technical Reports Server (NTRS)

    Duda, James L.; Barth, Suzanna C

    2005-01-01

    The VIIRS sensor provides measurements for 22 Environmental Data Records (EDRs) addressing the atmosphere, ocean surface temperature, ocean color, land parameters, aerosols, imaging for clouds and ice, and more. That is, the VIIRS collects visible and infrared radiometric data of the Earth's atmosphere, ocean, and land surfaces. Data types include atmospheric, clouds, Earth radiation budget, land/water and sea surface temperature, ocean color, and low light imagery. This wide scope of measurements calls for the preparation of a multiplicity of Algorithm Theoretical Basis Documents (ATBDs), and, additionally, for intermediate products such as cloud mask, et al. Furthermore, the VIIRS interacts with three or more other sensors. This paper addresses selected and crucial elements of the process being used to convert and test an immense volume of a maturing and changing science code to the initial operational source code in preparation for launch of NPP. The integrity of the original science code is maintained and enhanced via baseline comparisons when re-hosted, in addition to multiple planned code performance reviews.

  5. Towards a Property-Based Testing Environment With Applications to Security-Critical Software

    DTIC Science & Technology

    1994-01-01

    4 is a slice of the MINIX [Tan87] login program with respect to the setuid system call. The original program contains 337 lines, the slice only 20...demonstrating the effectiveness of slicing in this case. The mapping of the abstract concept of authentication to source code in the MINIX login...Slice of MINIX login with respect to setuid(). occurs. If no incorrect execution occurs, slices of the program are examined for their data flow coverage

  6. A&M. TAN607 first floor plan for cold assembly area. Shows ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    A&M. TAN-607 first floor plan for cold assembly area. Shows special source vaults, X-ray room, instrument shops, and positions of large machines in component test laboratory. This drawing was re-drawn to show conditions in 1994. Ralph M. Parsons 902-3-ANP-607-A 100. Date of original: December 1952. Approved by INEEL Classification Office for public release. INEEL index code no. 034-060-00-693-106752 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  7. New AGU scientific integrity and professional ethics policy available for review

    USGS Publications Warehouse

    Gundersen, Linda C.

    2012-01-01

    The AGU Task Force on Scientific Ethics welcomes your review and comments on AGU's new Scientific Integrity and Professional Ethics Policy. The policy has at its heart a code of conduct adopted from the internationally accepted "Singapore Statement," originally created by the Second World Conference on Research Integrity (http://www.singaporestatement.org/), held in 2010. The new policy also encompasses professional and publishing ethics, providing a single source of guidance to AGU members, officers, authors, and editors

  8. Calculated Research on the Energetic Characteristic of TN in Composite Solid Propellant,

    DTIC Science & Technology

    1986-01-06

    FTD-ID(RS)T-0905-85, Foreign Technology Division. English pages: 12. Source: Yuhang Xuebao, Nr. 2, 1984, pp. 84-90. Country of origin: China. Translated by: SCITRAN, F33657-84-D-0165. Requester: FTD/TQTA. The document was reproduced from the best quality copy available.

  9. A practical approach to portability and performance problems on massively parallel supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beazley, D.M.; Lomdahl, P.S.

    1994-12-08

    We present an overview of the tactics we have used to achieve a high level of performance while improving portability for a large-scale molecular dynamics code SPaSM. SPaSM was originally implemented in ANSI C with message passing for the Connection Machine 5 (CM-5). In 1993, SPaSM was selected as one of the winners in the IEEE Gordon Bell Prize competition for sustaining 50 Gflops on the 1024 node CM-5 at Los Alamos National Laboratory. Achieving this performance on the CM-5 required rewriting critical sections of code in CDPEAC assembler language. In addition, the code made extensive use of CM-5 parallel I/O and the CMMD message passing library. Given this highly specialized implementation, we describe how we have ported the code to the Cray T3D and high performance workstations. In addition we will describe how it has been possible to do this using a single version of source code that runs on all three platforms without sacrificing any performance. Sound too good to be true? We hope to demonstrate that one can realize both code performance and portability without relying on the latest and greatest prepackaged tool or parallelizing compiler.

  10. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective: To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting: The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods: We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings: Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions: By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  11. The aminoacyl-tRNA synthetases had only a marginal role in the origin of the organization of the genetic code: Evidence in favor of the coevolution theory.

    PubMed

    Di Giulio, Massimo

    2017-11-07

    The coevolution theory of the origin of the genetic code suggests that the organization of the genetic code coevolved with the biosynthetic relationships between amino acids. The mechanism that allowed this coevolution was based on tRNA-like molecules on which, this theory postulates, the biosynthetic transformations between amino acids occurred. This mechanism makes a prediction about the role the aminoacyl-tRNA synthetases (ARSs) should have played in the origin of the genetic code. Indeed, if the biosynthetic transformations between amino acids occurred on tRNA-like molecules, then there was no need to link amino acids to these molecules, because amino acids were already charged on tRNA-like molecules, as the coevolution theory suggests. Even though the ARSs are responsible for the first interaction between a component of nucleic acids and a component of proteins, for the coevolution theory the role of ARSs in the origin of the genetic code should have been entirely marginal. Therefore, I have conducted a further analysis of the distribution of the two classes of ARSs, and of their subclasses, in the genetic code table, in order to perform a falsification test of the coevolution theory. Indeed, if the distribution of ARSs within the genetic code were highly significant, then the coevolution theory would be falsified, since the mechanism on which it is based does not predict a fundamental role of ARSs in the origin of the genetic code. I found that the statistical significance of the distribution of the two classes of ARSs in the table of the genetic code is low or marginal, whereas that of the subclasses of ARSs is statistically significant. However, this is in perfect agreement with the postulates of the coevolution theory. Indeed, the only case of statistical significance regarding the classes of ARSs is appreciable for the CAG code, whereas for its complement, the UNN/NUN code, only a marginal significance is measurable. These two codes codify roughly for the two ARS classes: in particular, the CAG code for the class II and the UNN/NUN code for the class I. Furthermore, the subclasses of ARSs show a statistically significant distribution in the genetic code table. Nevertheless, the more sensible explanation for these observations would be the following. The observation that would link the two classes of ARSs to the CAG and UNN/NUN codes, and the statistical significance of the distribution of the subclasses of ARSs in the genetic code table, would be only a secondary effect due to the highly significant distribution of the polarity of amino acids and their biosynthetic relationships in the genetic code. That is to say, the polarity of amino acids and their biosynthetic relationships would have conditioned the evolution of ARSs so that their presence in the genetic code would have been detectable, even though the ARSs would not, on their own, have directly influenced the evolutionary organization of the genetic code. In other words, the role that ARSs had in the origin of the genetic code would have been entirely marginal. This conclusion would be in perfect accord with the predictions of the coevolution theory. Conversely, this conclusion would be in contrast, at least partially, with the physicochemical theories of the origin of the genetic code, because they would predict a much more active role of ARSs in the origin of the organization of the genetic code. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. A discriminative test among the different theories proposed to explain the origin of the genetic code: the coevolution theory finds additional support.

    PubMed

    Giulio, Massimo Di

    2018-05-19

    A discriminative statistical test among the different theories proposed to explain the origin of the genetic code is presented. The amino acids are gathered into polarity classes and biosynthetic classes, the former being an expression of the physicochemical theory of the origin of the genetic code and the latter of the coevolution theory, and these classes are used in Fisher's exact test to establish their significance within the genetic code table. By linking to the rows and columns of the genetic code the probabilities that express the statistical significance of these classes, I was finally able to calculate a χ value for the physicochemical theory and for the coevolution theory, expressing the level of corroboration of each theory. The comparison between these two χ values showed that the coevolution theory is able to explain, in this strictly empirical analysis, the origin of the genetic code better than the physicochemical theory. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Operational rate-distortion performance for joint source and channel coding of images.

    PubMed

    Ruf, M J; Modestino, J W

    1999-01-01

    This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology applied to different schemes results in operational rate-distortion performance which closely approaches these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.
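
    As an illustration of the kind of information-theoretic limit referred to above (a textbook benchmark, not the specific bounds derived in the paper): for a memoryless Gaussian source of variance \sigma^2 sent over an AWGN channel with signal-to-noise ratio P/N, using \rho channel uses per source sample, the distortion-rate function, the channel capacity, and the separation-theorem limit on mean-squared distortion are

      D(R) = \sigma^2 \, 2^{-2R}, \qquad
      C = \tfrac{1}{2}\,\log_2\!\left(1 + \tfrac{P}{N}\right), \qquad
      D_{\min} = \sigma^2 \, 2^{-2\rho C} = \sigma^2\left(1 + \tfrac{P}{N}\right)^{-\rho},

    and an operational rate-distortion curve for a concrete source-channel scheme can be compared against D_{\min}.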

  14. Toward an architecture of attachment disorganization: John Bowlby’s published and unpublished reflections

    PubMed Central

    Solomon, Judith; Duschinsky, Robbie; Bakkum, Lianne; Schuengel, Carlo

    2017-01-01

    This article examines the construct of disorganized attachment originally proposed by Main and Solomon, developing some new conjectures based on inspiration from a largely unknown source: John Bowlby’s unpublished texts, housed at the Wellcome Trust Library Archive in London (with permission from the Bowlby family). We explore Bowlby’s discussions of disorganized attachment, which he understood from the perspective of ethological theories of conflict behavior. Bowlby’s reflections regarding differences among the behaviors used to code disorganized attachment will be used to explore distinctions that may underlie the structure of the current coding system. The article closes with an emphasis on the importance Bowlby placed on Popper’s distinction between the context of discovery and the context of justification in developmental science. PMID:28791871

  15. Biometrics encryption combining palmprint with two-layer error correction codes

    NASA Astrophysics Data System (ADS)

    Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang

    2017-07-01

    To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometrics encryption method based on combining palmprint features with two-layer error correction codes is proposed. First, the randomly generated original keys are encoded by convolutional and cyclic two-layer coding; the first layer uses a convolutional code to correct burst errors, and the second layer uses a cyclic code to correct random errors. Then, the palmprint features are extracted from the palmprint images. Next, the encoded keys and the palmprint features are fused by an XOR operation, and the resulting information is stored in a smart card. Finally, to extract the original keys, the information in the smart card is XORed with the user's palmprint features and then decoded with the convolutional and cyclic two-layer code. The experimental results and security analysis show that the method can recover the original keys completely. The proposed method is more secure than a single password factor and has higher accuracy than a single biometric factor.
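
    The enrollment/recovery flow described above follows a fuzzy-commitment pattern: ECC-encode a random key, XOR it with the biometric feature bits, store only that helper data, and later XOR with a fresh feature vector and decode away the residual noise. The Python sketch below is a simplified stand-in, not the paper's implementation: a single repetition code replaces the convolutional/cyclic two-layer code, random bit vectors replace real palmprint features, and all function names are ours.

      import secrets

      def repetition_encode(bits, n=5):
          """Toy ECC stand-in: repeat each key bit n times."""
          return [b for b in bits for _ in range(n)]

      def repetition_decode(bits, n=5):
          """Majority vote over each group of n bits."""
          return [int(sum(bits[i:i + n]) > n // 2) for i in range(0, len(bits), n)]

      def xor(a, b):
          return [x ^ y for x, y in zip(a, b)]

      def enroll(key_bits, palm_features):
          """Bind the key to the biometric; only this 'helper' data is stored."""
          return xor(repetition_encode(key_bits), palm_features)

      def recover(helper, palm_features_probe):
          """Unbind with a (possibly noisy) probe and correct residual errors."""
          return repetition_decode(xor(helper, palm_features_probe))

      if __name__ == "__main__":
          n = 5
          key = [secrets.randbelow(2) for _ in range(16)]
          palm = [secrets.randbelow(2) for _ in range(16 * n)]  # placeholder features
          helper = enroll(key, palm)
          # The probe differs from enrollment in a few positions (intra-class noise).
          probe = palm.copy()
          probe[3] ^= 1
          probe[40] ^= 1
          assert recover(helper, probe) == key
          print("key recovered:", recover(helper, probe))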

  16. Plant identification credibility in ethnobotany: a closer look at Polish ethnographic studies

    PubMed Central

    2010-01-01

    Background This paper is an attempt to estimate the percentage of erroneously identified taxa in ethnographic studies concerning the use of plants and to propose a code for recording credibility of identification in historical ethnobotany publications. Methods A sample of Polish-language ethnobotanical literature (45 published sources from 1874-2005) and four collections of voucher specimens (from 1894-1975) were analyzed. Errors were detected in the publications by comparing the data with existing knowledge on the distribution of plant names and species ranges. The voucher specimens were re-examined. A one-letter code was invented for quick identification of the credibility of data published in lists of species compiled from historical or ethnographic sources, according to the source of identification: voucher specimen, Latin binominal, botanical expert, obvious widespread name, folk name, mode of use, range, physical description or photograph. To test the use of the code an up-to-date list of wild food plants used in Poland was made. Results A significant difference between the ratio of mistakes in the voucher specimen collections and the ratio of detectable mistakes in the studies without herbarium documentation was found. At least 2.3% of taxa in the publications were identified erroneously (mean rate was 6.2% per publication), and in half of these mistakes even the genus was not correct. As many as 10.0% of voucher specimens (on average 9.2% per collection) were originally erroneously identified, but three quarters of the identification mistakes remained within-genus. The species of the genera Thymus, Rumex and Rubus were most often confused within the genus. Not all of the invented credibility codes were used in the list of wild food plants, but they may be useful for other researchers. The most often used codes were the ones signifying identification by: voucher specimen, botanical expert and by a common name used throughout the country. Conclusions The results of this study support the rigorous use of voucher specimens in ethnobotany, although they also reveal a relatively high percentage of misidentified taxa in the specimens studied. The invented credibility coding system may become a useful tool for communication between historical ethnobotanists, particularly in creating larger databases. PMID:21167056

  17. Continuation of research into language concepts for the mission support environment: Source code

    NASA Technical Reports Server (NTRS)

    Barton, Timothy J.; Ratner, Jeremiah M.

    1991-01-01

    Research into language concepts for the Mission Control Center is presented, along with the accompanying source code. The file contains the routines which allow source code files to be created and compiled. The build process assumes that all elements and the COMP exist in the current directory, and it places as much code generation as possible on the preprocessor. A summary is given of the source files as used and/or manipulated by the build routine.

  18. A nomenclator of extant and fossil taxa of the Melanopsidae (Gastropoda, Cerithioidea)

    PubMed Central

    Neubauer, Thomas A.

    2016-01-01

    This nomenclator provides details on all published names in the family-, genus-, and species-group, as well as for a few infrasubspecific names introduced for, or attributed to, the family Melanopsidae. It includes nomenclaturally valid names, as well as junior homonyms, junior objective synonyms, nomina nuda, common incorrect subsequent spellings, and as far as possible discussion on the current status in taxonomy. The catalogue encompasses three family-group names, 79 genus-group names, and 1381 species-group names. All of them are given in their original combination and spelling (except mandatory corrections requested by the Code), along with their original source. For each family- and genus-group name, the original classification and the type genus and type species, respectively, are given. Data provided for species-group taxa are type locality, type horizon (for fossil taxa), and type specimens, as far as available. PMID:27551193

  19. Open-Source as a strategy for operational software - the case of Enki

    NASA Astrophysics Data System (ADS)

    Kolberg, Sjur; Bruland, Oddbjørn

    2014-05-01

    Since 2002, SINTEF Energy has been developing what is now known as the Enki modelling system. This development has been financed by Norway's largest hydropower producer Statkraft, motivated by a desire for distributed hydrological models in operational use. As the owner of the source code, Statkraft has recently decided on Open Source as a strategy for further development, and for migration from an R&D context to operational use. A cooperation project is currently being carried out between SINTEF Energy, 7 large Norwegian hydropower producers including Statkraft, three universities and one software company. Of course, the most immediate task is that of software maturing. A more important challenge, however, is one of gaining experience within the operational hydropower industry. A transition from lumped to distributed models is likely to also require revision of measurement programs, calibration strategies, and the use of GIS and modern data sources like weather radar and satellite imagery. On the other hand, map based visualisations enable a richer information exchange between hydrologic forecasters and power market traders. The operating context of a distributed hydrology model within hydropower planning is far from settled. Being both a modelling framework and a library of plugin-routines to build models from, Enki supports the flexibility needed in this situation. Recent development has separated the core from the user interface, paving the way for a scripting API, cross-platform compilation, and front-end programs serving different degrees of flexibility, robustness and security. The open source strategy invites anyone to use Enki and to develop and contribute new modules. Once tested, the same modules are available for the operational versions of the program. A core challenge is to offer rigid testing procedures and mechanisms to reject routines in an operational setting, without limiting the experimentation with new modules. The Open Source strategy also has implications for building and maintaining competence around the source code and the advanced hydrological and statistical routines in Enki. Originally developed by hydrologists, the Enki code is now approaching a state where maintenance requires a background in professional software development. Without the advantage of proprietary source code, both hydrologic improvements and software maintenance depend on donations or development support on a case-to-case basis, a situation well known within the open source community. It remains to be seen whether these mechanisms suffice to keep Enki at the maintenance level required by the hydropower sector. ENKI is available from www.opensource-enki.org.

  20. Measuring diagnoses: ICD code accuracy.

    PubMed

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-10-01

    To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.

  1. Computerized Workstation for Tsunami Hazard Monitoring

    NASA Astrophysics Data System (ADS)

    Lavrentiev-Jr, Mikhail; Marchuk, Andrey; Romanenko, Alexey; Simonov, Konstantin; Titov, Vasiliy

    2010-05-01

    We present the general structure and functionality of the proposed Computerized Workstation for Tsunami Hazard Monitoring (CWTHM). The tool allows interactive monitoring of hazard, tsunami risk assessment, and mitigation at all stages, from the period of strong tsunamigenic earthquake preparation to inundation of the defended coastal areas. CWTHM is a software-hardware complex with a set of software applications, optimized to achieve best performance on the hardware platforms in use. The complex is calibrated for selected tsunami source zone(s) and coastal zone(s) to be defended. The number of zones (both source and coastal) is determined, or restricted, by available hardware resources. The presented complex performs monitoring of the selected tsunami source zone via the Internet. The authors developed original algorithms which automatically detect the preparation zone of a strong underwater earthquake. For the so-determined zone, the event time, magnitude and spatial location of the tsunami source are evaluated by analyzing the energy of the seismic precursors (foreshocks). All the above parameters are updated after each foreshock. Once a preparing event is detected, several scenarios are forecast for wave amplitude parameters as well as the inundation zone. Estimations include the lowest and the highest wave amplitudes and the smallest and largest inundation zones. In addition to that, the most probable case is calculated. In the case of multiple defended coastal zones, forecasts and estimates can be done in parallel. Each time the simulated model wave reaches deep ocean buoys or a tidal gauge, expected values of wave parameters and inundation zones are updated with historical event information and pre-calculated scenarios. The Method of Splitting Tsunami (MOST) software package is used for mathematical simulation. The authors suggest code acceleration for deep water wave propagation. As a result, performance is 15 times faster compared to the original version of MOST. The performance gain is achieved by compiler options, use of optimized libraries, and advantages of OpenMP parallel technology. Moreover, it is possible to achieve a 100-fold code acceleration by using modern Graphics Processing Units (GPUs). Parallel evaluation of inundation zones for multiple coastal zones is also available. All computer codes can be easily assembled under MS Windows and the Unix OS family. Although the software is virtually platform independent, the greatest performance gain is achieved while using the recommended hardware components. When the seismic event occurs, all valuable parameters are updated with seismic data and wave propagation monitoring is enabled. As soon as the wave passes each deep ocean tsunameter, parameters of the initial displacement at the source are updated from direct calculations based on original algorithms. For better source reconstruction, a combination of two methods is used: optimal unit source linear combination from a preliminarily calculated database and direct numerical inversion along the wave ray between the real source and particular measurement buoys. A specific dissipation parameter along the wave ray is also taken into account. During the entire wave propagation process the expected wave parameters and inundation zone(s) characteristics are updated with all available information. If the recommended hardware components are used, monitoring results are available in real time.
The suggested version of CWTHM has been tested by analyzing the seismic precursors (foreshocks) and the measured tsunami waves in the North Pacific for the Central Kuril Islands tsunamigenic earthquake of November 15, 2006.

  2. The discounting model selector: Statistical software for delay discounting applications.

    PubMed

    Gilroy, Shawn P; Franck, Christopher T; Hantula, Donald A

    2017-05-01

    Original, open-source computer software was developed and validated against established delay discounting methods in the literature. The software executed approximate Bayesian model selection methods from user-supplied temporal discounting data and computed the effective delay 50 (ED50) from the best performing model. Software was custom-designed to enable behavior analysts to conveniently apply recent statistical methods to temporal discounting data with the aid of a graphical user interface (GUI). The results of independent validation of the approximate Bayesian model selection methods indicated that the program provided results identical to those of the original source paper and its methods. Monte Carlo simulation (n = 50,000) confirmed that the true model was selected most often in each setting. Simulation code and data for this study were posted to an online repository for use by other researchers. The model selection approach was applied to three existing delay discounting data sets from the literature in addition to the data from the source paper. Comparisons of model-selected ED50 were consistent with traditional indices of discounting. Conceptual issues related to the development and use of computer software by behavior analysts and the opportunities afforded by free and open-source software are discussed, and a review of possible expansions of this software is provided. © 2017 Society for the Experimental Analysis of Behavior.
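
    For readers unfamiliar with the ED50 measure mentioned above: under Mazur's hyperbolic model V = A / (1 + kD), the delay at which a reward loses half its value is ED50 = 1/k. The sketch below is not the Discounting Model Selector itself and uses made-up indifference points; it simply fits k by least squares and reports ED50, whereas the actual tool performs approximate Bayesian model selection across several candidate models.

      import numpy as np
      from scipy.optimize import curve_fit

      def hyperbolic(delay, k):
          """Mazur's hyperbolic discounting model for a reward of nominal value 1."""
          return 1.0 / (1.0 + k * delay)

      # Hypothetical indifference points: subjective value of a delayed reward,
      # expressed as a proportion of its immediate value.
      delays = np.array([1, 7, 30, 90, 180, 365], dtype=float)   # days
      values = np.array([0.95, 0.80, 0.55, 0.35, 0.22, 0.15])

      (k_hat,), _ = curve_fit(hyperbolic, delays, values, p0=[0.01])
      ed50 = 1.0 / k_hat
      print(f"fitted k = {k_hat:.4f} per day, ED50 = {ed50:.1f} days")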

  3. Matlab Based LOCO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Portmann, Greg; /LBL, Berkeley; Safranek, James

    The LOCO algorithm has been used by many accelerators around the world. Although the uses for LOCO vary, the most common use has been to find calibration errors and correct the optics functions. The light source community in particular has made extensive use of the LOCO algorithms to tightly control the beta function and coupling. Maintaining high quality beam parameters requires constant attention, so a relatively large effort was put into software development for the LOCO application. The LOCO code was originally written in FORTRAN. This code worked fine but it was somewhat awkward to use. For instance, the FORTRAN code itself did not calculate the model response matrix: it required a separate modeling code such as MAD to calculate the model matrix, and the data then had to be loaded manually into the LOCO code. As the number of people interested in LOCO grew, it became necessary to make it easier to use. The decision to port LOCO to Matlab was relatively easy: it is best to use a matrix programming language with good graphics capability; Matlab was also being used for high level machine control; and the accelerator modeling code AT, [5], was already developed for Matlab. Since LOCO requires collecting and processing a relatively large amount of data, it is very helpful to have the LOCO code compatible with the high level machine control, [3]. A number of new features were added while porting the code from FORTRAN and new methods continue to evolve, [7][9]. Although Matlab LOCO was written with AT as the underlying tracking code, a mechanism to connect to other modeling codes has been provided.
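
    At its core, a LOCO-style analysis adjusts model parameters (quadrupole gradients, BPM and corrector gains, and so on) until the model orbit response matrix reproduces the measured one, a nonlinear least-squares fit. The numpy sketch below is a generic, heavily simplified illustration of that Gauss-Newton iteration, not the Matlab LOCO code; model_response is a toy placeholder for the real optics model (e.g., AT) and the parameters are invented.

      import numpy as np

      def model_response(params, n_bpm=8, n_cor=4):
          """Toy stand-in for a lattice model's orbit response matrix.

          Element (i, j): orbit shift at BPM i per unit kick from corrector j,
          scaled by a fit parameter per corrector.  A real LOCO fit would call
          an optics code here instead.
          """
          i = np.arange(n_bpm)[:, None]
          j = np.arange(n_cor)[None, :]
          return params[None, :] / (1.0 + np.abs(i - j))

      def loco_fit(measured, p0, iters=20):
          """Gauss-Newton fit of model parameters to a measured response matrix."""
          p = p0.astype(float).copy()
          for _ in range(iters):
              r = (measured - model_response(p)).ravel()      # residual vector
              # Finite-difference Jacobian of the flattened response matrix.
              jac = np.empty((r.size, p.size))
              for k in range(p.size):
                  dp = np.zeros_like(p)
                  dp[k] = 1e-6
                  jac[:, k] = ((model_response(p + dp) - model_response(p)).ravel()
                               / 1e-6)
              p += np.linalg.lstsq(jac, r, rcond=None)[0]
          return p

      rng = np.random.default_rng(0)
      true_params = np.array([1.02, 0.97, 1.05, 0.99])        # "calibration errors"
      measured = model_response(true_params) + 1e-4 * rng.standard_normal((8, 4))
      fit = loco_fit(measured, p0=np.ones(4))
      print("fitted parameters:", np.round(fit, 4))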

  4. Simulation of Charge Collection in Diamond Detectors Irradiated with Deuteron-Triton Neutron Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milocco, Alberto; Trkov, Andrej; Pillon, Mario

    2011-12-13

    Diamond-based neutron spectrometers exhibit outstanding properties such as radiation hardness, low sensitivity to gamma rays, fast response and high-energy resolution. They represent a very promising application of diamonds for plasma diagnostics in fusion devices. The measured pulse height spectrum is obtained from the collection of helium and beryllium ions produced by the reactions on 12C. An original code is developed to simulate the production and the transport of charged particles inside the diamond detector. The ion transport methodology is based on the well-known TRIM code. The reactions of interest are triggered using the ENDF/B-VII.0 nuclear data for the neutron interactions on carbon. The model is implemented in the TALLYX subroutine of the MCNP5 and MCNPX codes. Measurements with diamond detectors in a ~14 MeV neutron field have been performed at the FNG (Rome, Italy) and IRMM (Geel, Belgium) facilities. The comparison of the experimental data with the simulations validates the proposed model.

  5. Tracking financial flows for immunization in Honduras.

    PubMed

    Valdés, Werner; Janusz, Cara Bess; Molina Aguilera, Ida Berenice; Mendoza, Lourdes; Díaz, Iris Yolanda; Resch, Stephen

    2015-05-07

    In Honduras, until 2008, vaccine and injection supplies were financed with domestic resources. With the introduction of rotavirus vaccine in 2009 and pneumococcal conjugate in 2011, the country's Expanded Program on Immunization required an influx of resources to support not only vaccine procurement but also investments in cold chain infrastructure and programmatic strategies. This paper examines the origin, allocation, and use of resources for immunization in 2011 in Honduras, with the aim of identifying gaps in financing. An adaptation of the System of Health Accounts (2011) codes was used to specifically track resources for immunization services in Honduras for 2011. All financial flows were entered into an Excel database, and each transfer of resources was coded with a financing source and a financing agent. These coded financing sources were then distributed by provider, health care function (activity), health care provision (line item or resource input), and beneficiary (geographic, population, and antigen). All costs were calculated in 2011 United States dollars. In 2011, financing for routine immunization in Honduras amounted to US$ 49.1 million, which is equal to 3.3% of the total health spending of US$ 1.49 billion and 0.29% of the GDP. Of the total financing, 64% originate from domestic sources. The other 36% is external financing, most importantly Gavi support for introducing new vaccines. This analysis identified potential financing gaps for many immunization-related activities besides procuring vaccines, such as expanding the cold chain, training, social mobilization, information systems, and research. The funding for Honduras' immunization program is a small share of total public spending on health. However, new vaccines recently added to the schedule with financial support from Gavi have increased the financing requirements by more than 30% in comparison to 2008. The Honduran government and its partners are developing sustainability plans to cover a financing gap that will occur when the country graduates from Gavi support in 2016. Access to lower vaccine prices will make the existing and future program, including the planned introduction of HPV vaccine to adolescent girls, more affordable. Copyright © 2015. Published by Elsevier Ltd.

  6. Calculation and plotting of retinal nerve fiber paths based on Jansonius et al. 2009/2012 with an R program.

    PubMed

    Bach, M; Hoffmann, M B

    2018-06-01

    The data presented in this article are related to the research article entitled "Retinal conduction speed analysis reveals different origins of the P50 and N95 components of the (multifocal) pattern electroretinogram" (Bach et al., 2018) [1]. That analysis required the individual length data of the retinal nerve fibers (from ganglion cell body to optic nerve head, depending on the position of the ganglion cell body). Jansonius et al. (2009, 2012) [2,3] mathematically modeled the path morphology of the human retinal nerve fibers. We here present a working implementation with source code (for the free and open-source programming environment "R") of Jansonius' formulas, including all errata. One file defines Jansonius et al.'s "phi" function. This function allows quantitative modelling of paths (and any measures derived from them) of the retinal nerve fibers. As a working demonstration, a second file contains a graph which plots samples of nerve fibers. The included R code runs in base R without the need for any additional packages.

  7. Schroedinger’s Code: A Preliminary Study on Research Source Code Availability and Link Persistence in Astrophysics

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Teuben, Peter J.; Ryan, P. Wesley

    2018-05-01

    We examined software usage in a sample set of astrophysics research articles published in 2015 and searched for the source codes for the software mentioned in these research papers. We categorized the software to indicate whether the source code is available for download and whether there are restrictions to accessing it, and if the source code is not available, whether some other form of the software, such as a binary, is. We also extracted hyperlinks from one journal’s 2015 research articles, as links in articles can serve as an acknowledgment of software use and lead to the data used in the research, and tested them to determine which of these URLs are still accessible. For our sample of 715 software instances in the 166 articles we examined, we were able to categorize 418 records according to whether source code was available and found that 285 unique codes were used, 58% of which offered the source code for download. Of the 2558 hyperlinks extracted from 1669 research articles, at best, 90% of them were available over our testing period.

  8. Automated Concurrent Blackboard System Generation in C++

    NASA Technical Reports Server (NTRS)

    Kaplan, J. A.; McManus, J. W.; Bynum, W. L.

    1999-01-01

    In his 1992 Ph.D. thesis, "Design and Analysis Techniques for Concurrent Blackboard Systems", John McManus defined several performance metrics for concurrent blackboard systems and developed a suite of tools for creating and analyzing such systems. These tools allow a user to analyze a concurrent blackboard system design and predict the performance of the system before any code is written. The design can be modified until simulated performance is satisfactory. Then, the code generator can be invoked to generate automatically all of the code required for the concurrent blackboard system except for the code implementing the functionality of each knowledge source. We have completed the port of the source code generator and a simulator for a concurrent blackboard system. The source code generator generates the necessary C++ source code to implement the concurrent blackboard system using Parallel Virtual Machine (PVM) running on a heterogeneous network of UNIX(trademark) workstations. The concurrent blackboard simulator uses the blackboard specification file to predict the performance of the concurrent blackboard design. The only part of the source code for the concurrent blackboard system that the user must supply is the code implementing the functionality of the knowledge sources.
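
    For readers unfamiliar with the architecture being generated: a blackboard system lets independent knowledge sources watch a shared data store and post partial results until a solution emerges. The Python sketch below is only a minimal, sequential illustration of that pattern; the generated system described above is concurrent C++ over PVM, and the knowledge sources here are invented.

      class Blackboard:
          """Shared store that knowledge sources read from and write to."""
          def __init__(self, data):
              self.data = dict(data)

          def post(self, key, value):
              self.data[key] = value

      class KnowledgeSource:
          """A rule that fires when its inputs are present and its output is not."""
          def __init__(self, name, needs, produces, func):
              self.name, self.needs, self.produces, self.func = name, needs, produces, func

          def applicable(self, bb):
              return all(k in bb.data for k in self.needs) and self.produces not in bb.data

          def fire(self, bb):
              bb.post(self.produces, self.func(*(bb.data[k] for k in self.needs)))

      def control_loop(bb, sources):
          """Repeatedly fire any applicable knowledge source until none remain."""
          fired = True
          while fired:
              fired = False
              for ks in sources:
                  if ks.applicable(bb):
                      ks.fire(bb)
                      print(f"{ks.name} -> {ks.produces} = {bb.data[ks.produces]}")
                      fired = True

      # Hypothetical knowledge sources cooperating on a trivial problem.
      bb = Blackboard({"raw": [3, 1, 2]})
      sources = [
          KnowledgeSource("sorter", ["raw"], "sorted", sorted),
          KnowledgeSource("summer", ["sorted"], "total", sum),
      ]
      control_loop(bb, sources)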

  9. A Grid Sourcing and Adaptation Study Using Unstructured Grids for Supersonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Carter, Melissa B.; Deere, Karen A.

    2008-01-01

    NASA created the Supersonics Project as part of the NASA Fundamental Aeronautics Program to advance technology that will make supersonic flight over land viable. Computational flow solvers have lacked the ability to accurately predict sonic boom from the near to far field. The focus of this investigation was to establish gridding and adaptation techniques to predict near-to-mid-field (<10 body lengths below the aircraft) boom signatures at supersonic speeds using the USM3D unstructured grid flow solver. The study began by examining sources along the body of the aircraft, far field sourcing and far field boundaries. The study then examined several techniques for grid adaptation. During the course of the study, volume sourcing was introduced as a new way to source grids using the grid generation code VGRID. Two different methods of using the volume sources were examined. The first method, based on manual insertion of the numerous volume sources, made great improvements in the prediction capability of USM3D for boom signatures. The second method (SSGRID), which uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid and pressure waves, showed similar results with a more automated approach. Due to SSGRID's results and ease of use, the rest of the study focused on developing a best practice using SSGRID. The best practice created by this study for boom predictions using the CFD code USM3D involved: 1) creating a small cylindrical outer boundary either 1 or 2 body lengths in diameter (depending on how far below the aircraft the boom prediction is required), 2) using a single volume source under the aircraft, and 3) using SSGRID to stretch and shear the grid to the desired length.

  10. Real-time realizations of the Bayesian Infrasonic Source Localization Method

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.

    2015-12-01

    The Bayesian Infrasonic Source Localization method (BISL), introduced by Mordak et al. (2010) and upgraded by Marcillo et al. (2014), is intended for accurate estimation of the origin of atmospheric events at local, regional and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimates, merged with prior knowledge of the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, executed as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.
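
    Schematically, and in notation chosen for this summary rather than taken from the cited works, the quantity evaluated at each trial source location \mathbf{x} and origin time t_0 combines per-station likelihoods for arrival time t_i and back-azimuth \theta_i with a prior \pi(c) on celerity:

      P(\mathbf{x}, t_0 \mid \{t_i, \theta_i\}) \;\propto\;
          \int \prod_{i=1}^{N} p\!\left(t_i \mid \mathbf{x}, t_0, c\right)\,
          p\!\left(\theta_i \mid \mathbf{x}\right)\, \pi(c)\, \mathrm{d}c .

    The real-time realization described above replaces numerical evaluation of this integral with closed-form expressions in standard functions.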

  11. Normalization of a collimated 14.7 MeV neutron source in a neutron spectrometry system for benchmark experiments

    NASA Astrophysics Data System (ADS)

    Ofek, R.; Tsechanski, A.; Shani, G.

    1988-05-01

    In the present study a method used to normalize a collimated 14.7 MeV neutron beam is introduced. It combined a measurement of the fast neutron scalar flux passing through the collimator, using a copper foil activation, with a neutron transport calculation of the foil activation per unit source neutron, carried out by the discrete-ordinates transport code DOT 4.2. The geometry of the collimated neutron beam is composed of a D-T neutron source positioned 30 cm in front of a 6 cm diameter collimator, through a 120 cm thick paraffin wall. The neutron flux emitted from the D-T source was counted by an NE-213 scintillator, simultaneously with the irradiation of the copper foil. Thus, the determination of the normalization factor of the D-T source is used for an absolute flux calibration of the NE-213 scintillator. The major contributions to the uncertainty in the determination of the normalization factor, and their origins, are discussed.
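
    In symbols of our own choosing (not the authors' notation), the normalization described above follows directly from combining the measured copper-foil activation A_{\mathrm{meas}} with the DOT 4.2 calculation of the activation a_{\mathrm{calc}} produced per source neutron:

      S \;=\; \frac{A_{\mathrm{meas}}}{a_{\mathrm{calc}}},

    where S is the inferred source strength (source neutrons emitted during the irradiation); this normalization factor is what the abstract describes as being used for the absolute flux calibration of the NE-213 scintillator.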

  12. You've Written a Cool Astronomy Code! Now What Do You Do with It?

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Accomazzi, A.; Berriman, G. B.; DuPrie, K.; Hanisch, R. J.; Mink, J. D.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P. J.; Wallin, J. F.

    2014-01-01

    Now that you've written a useful astronomy code for your soon-to-be-published research, you have to figure out what you want to do with it. Our suggestion? Share it! This presentation highlights the means and benefits of sharing your code. Make your code citable -- submit it to the Astrophysics Source Code Library and have it indexed by ADS! The Astrophysics Source Code Library (ASCL) is a free online registry of source codes of interest to astronomers and astrophysicists. With over 700 codes, it is continuing its rapid growth, with an average of 17 new codes a month. The editors seek out codes for inclusion; indexing by ADS improves the discoverability of codes and provides a way to cite codes as separate entries, especially codes without papers that describe them.

  13. Optimized scalar promotion with load and splat SIMD instructions

    DOEpatents

    Eichenberger, Alexander E; Gschwind, Michael K; Gunnels, John A

    2013-10-29

    Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.

  14. Optimized scalar promotion with load and splat SIMD instructions

    DOEpatents

    Eichenberger, Alexandre E [Chappaqua, NY; Gschwind, Michael K [Chappaqua, NY; Gunnels, John A [Yorktown Heights, NY

    2012-08-28

    Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.

  15. Identifying and acting on potentially inappropriate care? Inadequacy of current hospital coding for this task.

    PubMed

    Cooper, P David; Smart, David R

    2017-06-01

    Recent Australian attempts to facilitate disinvestment in healthcare, by identifying instances of 'inappropriate' care from large Government datasets, are subject to significant methodological flaws. Amongst other criticisms has been the fact that the Government datasets utilized for this purpose correlate poorly with datasets collected by relevant professional bodies. Government data derive from official hospital coding, collected retrospectively by clerical personnel, whilst professional body data derive from unit-specific databases, collected contemporaneously with care by clinical personnel. Assessment of accuracy of official hospital coding data for hyperbaric services in a tertiary referral hospital. All official hyperbaric-relevant coding data submitted to the relevant Australian Government agencies by the Royal Hobart Hospital, Tasmania, Australia for financial year 2010-2011 were reviewed and compared against actual hyperbaric unit activity as determined by reference to original source documents. Hospital coding data contained one or more errors in diagnoses and/or procedures in 70% of patients treated with hyperbaric oxygen that year. Multiple discrete error types were identified, including (but not limited to): missing patients; missing treatments; 'additional' treatments; 'additional' patients; incorrect procedure codes and incorrect diagnostic codes. Incidental observations of errors in surgical, anaesthetic and intensive care coding within this cohort suggest that the problems are not restricted to the specialty of hyperbaric medicine alone. Publications from other centres indicate that these problems are not unique to this institution or State. Current Government datasets are irretrievably compromised and not fit for purpose. Attempting to inform the healthcare policy debate by reference to these datasets is inappropriate. Urgent clinical engagement with hospital coding departments is warranted.

  16. Investigation of Finite Sources through Time Reversal

    NASA Astrophysics Data System (ADS)

    Kremers, S.; Brietzke, G.; Igel, H.; Larmat, C.; Fichtner, A.; Johnson, P. A.; Huang, L.

    2008-12-01

    Under certain conditions time reversal is a promising method to determine earthquake source characteristics without any a-priori information (except the earth model and the data). It consists of injecting flipped-in-time records from seismic stations within the model to create an approximate reverse movie of wave propagation from which the location of the source point and other information might be inferred. In this study, the backward propagation is performed numerically using a spectral element code. We investigate the potential of time reversal to recover finite source characteristics (e.g., size of ruptured area, location of asperities, rupture velocity etc.). We use synthetic data from the SPICE kinematic source inversion blind test initiated to investigate the performance of current kinematic source inversion approaches (http://www.spice-rtn.org/library/valid). The synthetic data set attempts to reproduce the 2000 Tottori earthquake with 33 records close to the fault. We discuss the influence of relaxing the ignorance to prior source information (e.g., origin time, hypocenter, fault location, etc.) on the results of the time reversal process.

  17. PACER -- A fast running computer code for the calculation of short-term containment/confinement loads following coolant boundary failure. Volume 2: User information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sienicki, J.J.

    A fast running and simple computer code has been developed to calculate pressure loadings inside light water reactor containments/confinements under loss-of-coolant accident conditions. PACER was originally developed to calculate containment/confinement pressure and temperature time histories for loss-of-coolant accidents in Soviet-designed VVER reactors and is relevant to the activities of the US International Nuclear Safety Center. The code employs a multicompartment representation of the containment volume and is focused upon application to early time containment phenomena during and immediately following blowdown. PACER has been developed for FORTRAN 77 and earlier versions of FORTRAN. The code has been successfully compiled and executed on SUN SPARC and Hewlett-Packard HP-735 workstations provided that appropriate compiler options are specified. The code incorporates both capabilities built around a hardwired default generic VVER-440 Model V230 design as well as fairly general user-defined input. However, array dimensions are hardwired and must be changed by modifying the source code if the number of compartments/cells differs from the default number of nine. Detailed input instructions are provided as well as a description of outputs. Input files and selected output are presented for two sample problems run on both HP-735 and SUN SPARC workstations.

  18. 94Mo(γ,n) and 90Zr(γ,n) cross-section measurements towards understanding the origin of p-nuclei

    NASA Astrophysics Data System (ADS)

    Meekins, E.; Banu, A.; Karwowski, H.; Silano, J.; Zimmerman, W.; Muller, J.; Rich, G.; Bhike, M.; Tornow, W.; McClesky, M.; Travaglio, C.

    2014-09-01

    The nucleosynthesis beyond iron of the rarest stable isotopes in the cosmos, the so-called p-nuclei, is one of the forefront topics in nuclear astrophysics. Recently, a stellar source was found that, for the first time, was able to produce both light and heavy p-nuclei almost at the same level as 56Fe, including the most debated 92,94Mo and 96,98Ru; it was also found that there is an important contribution from the p-process nucleosynthesis to the neutron magic nucleus 90Zr. We focus here on constraining the origin of p-nuclei through nuclear physics by studying two key astrophysical photoneutron reaction cross sections for 94Mo(γ,n) and 90Zr(γ,n). Their energy dependencies were measured using quasi-monochromatic photon beams from Duke University's High Intensity Gamma-ray Source facility at the respective neutron threshold energies up to 18 MeV. Preliminary results of these experimental cross sections will be presented along with their comparison to predictions by a statistical model based on the Hauser-Feshbach formalism implemented in codes like TALYS and SMARAGD. This research was supported by the Research Corporation for Science Advancement.

  19. AMIDE: a free software tool for multimodality medical image analysis.

    PubMed

    Loening, Andreas Markus; Gambhir, Sanjiv Sam

    2003-07-01

    Amide's a Medical Image Data Examiner (AMIDE) has been developed as a user-friendly, open-source software tool for displaying and analyzing multimodality volumetric medical images. Central to the package's abilities to simultaneously display multiple data sets (e.g., PET, CT, MRI) and regions of interest is the on-demand data reslicing implemented within the program. Data sets can be freely shifted, rotated, viewed, and analyzed with the program automatically handling interpolation as needed from the original data. Validation has been performed by comparing the output of AMIDE with that of several existing software packages. AMIDE runs on UNIX, Macintosh OS X, and Microsoft Windows platforms, and it is freely available with source code under the terms of the GNU General Public License.

  20. The random energy model in a magnetic field and joint source channel coding

    NASA Astrophysics Data System (ADS)

    Merhav, Neri

    2008-09-01

    We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.

  1. Experimental studies related to the origin of the genetic code and the process of protein synthesis - A review

    NASA Technical Reports Server (NTRS)

    Lacey, J. C., Jr.; Mullins, D. W., Jr.

    1983-01-01

    A survey is presented of the literature on the experimental evidence for the genetic code assignments and the chemical reactions involved in the process of protein synthesis. In view of the enormous number of theoretical models that have been advanced to explain the origin of the genetic code, attention is confined to experimental studies. Since genetic coding has significance only within the context of protein synthesis, it is believed that the problem of the origin of the code must be dealt with in terms of the origin of the process of protein synthesis. It is contended that the answers must lie in the nature of the molecules, amino acids and nucleotides, the affinities they might have for one another, and the effect that those affinities must have on the chemical reactions that are related to primitive protein synthesis. The survey establishes that for the bulk of amino acids, there is a direct and significant correlation between the hydrophobicity rank of the amino acids and the hydrophobicity rank of their anticodonic dinucleotides.

  2. User Manual and Source Code for a LAMMPS Implementation of Constant Energy Dissipative Particle Dynamics (DPD-E)

    DTIC Science & Technology

    2014-06-01

    User Manual and Source Code for a LAMMPS Implementation of Constant Energy Dissipative Particle Dynamics (DPD-E), by James P. Larentzos...; Army Research Laboratory, Aberdeen Proving Ground, MD 21005-5069; report ARL-SR-290, June 2014; dates covered: September 2013–February 2014.

  3. BigBWA: approaching the Burrows-Wheeler aligner to Big Data technologies.

    PubMed

    Abuín, José M; Pichel, Juan C; Pena, Tomás F; Amigo, Jorge

    2015-12-15

    BigBWA is a new tool that uses the Big Data technology Hadoop to boost the performance of the Burrows-Wheeler aligner (BWA). Important reductions in the execution times were observed when using this tool. In addition, BigBWA is fault tolerant and it does not require any modification of the original BWA source code. BigBWA is available at the project GitHub repository: https://github.com/citiususc/BigBWA. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. Quasi-real-time end-to-end simulations of ELT-scale adaptive optics systems on GPUs

    NASA Astrophysics Data System (ADS)

    Gratadour, Damien

    2011-09-01

    Our team has started the development of a code dedicated to GPUs for the simulation of AO systems at the E-ELT scale. It uses the CUDA toolkit and an original binding to Yorick (an open source interpreted language) to provide the user with a comprehensive interface. In this paper we present the first performance analysis of our simulation code, showing its ability to provide Shack-Hartmann (SH) images and measurements at the kHz scale for a VLT-sized AO system and in quasi-real time (up to 70 Hz) for ELT-sized systems on a single top-end GPU. The simulation code includes multi-layer atmospheric turbulence generation, ray tracing through these layers, image formation at the focal plane of every sub-aperture of an SH sensor using either natural or laser guide stars, and centroiding on these images using various algorithms. Turbulence is generated on the fly, giving the ability to simulate hours of observations without the need to load extremely large phase screens into global memory. Because of its performance, this code additionally provides the unique ability to test real-time controllers for future AO systems under nominal conditions.
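
    As a simple illustration of the centroiding step mentioned above (not the CUDA kernels of the actual code, whose details are not given here), a centre-of-gravity centroider over a stack of sub-aperture images can be written as follows; the array shapes and names are assumptions for the example.

```python
import numpy as np

def cog_centroids(subap_images):
    """Centre-of-gravity centroids for a stack of Shack-Hartmann sub-aperture
    images of shape (n_subap, ny, nx); returns an (n_subap, 2) array of (cx, cy)
    offsets in pixels relative to each sub-aperture centre."""
    n_subap, ny, nx = subap_images.shape
    y = np.arange(ny) - (ny - 1) / 2.0
    x = np.arange(nx) - (nx - 1) / 2.0
    flux = subap_images.sum(axis=(1, 2))
    cy = (subap_images * y[None, :, None]).sum(axis=(1, 2)) / flux
    cx = (subap_images * x[None, None, :]).sum(axis=(1, 2)) / flux
    return np.stack([cx, cy], axis=1)

# Example with synthetic, peaky random "spots".
imgs = np.random.rand(16, 8, 8) ** 4
slopes = cog_centroids(imgs)
```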

  5. Special issue on network coding

    NASA Astrophysics Data System (ADS)

    Monteiro, Francisco A.; Burr, Alister; Chatzigeorgiou, Ioannis; Hollanti, Camilla; Krikidis, Ioannis; Seferoglu, Hulya; Skachek, Vitaly

    2017-12-01

    Future networks are expected to depart from traditional routing schemes in order to embrace network coding (NC)-based schemes. These have created a lot of interest both in academia and industry in recent years. Under the NC paradigm, symbols are transported through the network by combining several information streams originating from the same or different sources. This special issue contains thirteen papers, some dealing with design aspects of NC and related concepts (e.g., fountain codes) and some showcasing the application of NC to new services and technologies, such as multi-view video streaming or underwater sensor networks. One can find papers that show how NC makes data transmission more robust to packet losses, faster to decode, and more resilient to network changes, such as dynamic topologies and different user options, and how NC can improve the overall throughput. This issue also includes papers showing that NC principles can be used at different layers of the networks (including the physical layer) and how the same fundamental principles can lead to new distributed storage systems. Some of the papers in this issue have a theoretical nature, including code design, while others describe hardware testbeds and prototypes.
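
    To make the idea of combining information streams concrete, here is a minimal sketch (not taken from the special issue itself) of random linear network coding over GF(2): nodes forward random XOR combinations of packets, and a receiver recovers the originals by Gaussian elimination once enough independent combinations arrive. Practical systems typically work over larger fields such as GF(2^8).

```python
import random

def random_linear_combine(packets, rng=random):
    """Produce one coded packet: a random GF(2) (XOR) combination of the inputs,
    returned together with its coefficient vector."""
    coeffs = [rng.randint(0, 1) for _ in packets]
    if not any(coeffs):
        coeffs[rng.randrange(len(coeffs))] = 1           # avoid the useless all-zero combination
    payload = [0] * len(packets[0])
    for c, pkt in zip(coeffs, packets):
        if c:
            payload = [a ^ b for a, b in zip(payload, pkt)]
    return coeffs, payload

def gf2_decode(coded, n_sources):
    """Gaussian elimination over GF(2) on (coefficients, payload) pairs; returns
    the recovered packets once n_sources independent combinations are available."""
    rows = [list(c) + list(p) for c, p in coded]
    rank = 0
    for col in range(n_sources):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return [row[n_sources:] for row in rows[:rank]] if rank == n_sources else None

# Three source packets, five coded transmissions (redundancy against losses).
packets = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
coded = [random_linear_combine(packets) for _ in range(5)]
print(gf2_decode(coded, n_sources=len(packets)))
```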

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    The Analysis of Search Results for the Clarification and Identification of Technology Emergence (AR-CITE) computer code examines a scientometric model that tracks the emergence of an identified technology from initial discovery (via original scientific and conference literature), through critical discoveries (via original scientific and conference literature and patents), transitioning through Technology Readiness Levels (TRLs) and ultimately on to commercial application. Currency of citations, collaboration indicators, and on-line news patterns are identified. Four distinct and separate searchable on-line networked sources (i.e., scholarly publications and citations, world patents, news archives, and on-line mapping networks) are assembled to become one collective network (a dataset for the analysis of relations). This established network becomes the basis from which to quickly analyze the temporal flow of activity (searchable events) for the subject domain to be clarified and identified.

  7. GUINEVERE experiment: Kinetic analysis of some reactivity measurement methods by deterministic and Monte Carlo codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bianchini, G.; Burgio, N.; Carta, M.

    The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French body CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium-target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic Uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the two deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)

  8. Astronomy education and the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Nemiroff, Robert J.

    2016-01-01

    The Astrophysics Source Code Library (ASCL) is an online registry of source codes used in refereed astrophysics research. It currently lists nearly 1,200 codes and covers all aspects of computational astrophysics. How can this resource be of use to educators and to the graduate students they mentor? The ASCL serves as a discovery tool for codes that can be used for one's own research. Graduate students can also investigate existing codes to see how common astronomical problems are approached numerically in practice, and use these codes as benchmarks for their own solutions to these problems. Further, they can deepen their knowledge of software practices and techniques through examination of others' codes.

  9. Data processing with microcode designed with source coding

    DOEpatents

    McCoy, James A; Morrison, Steven E

    2013-05-07

    Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.

  10. Adaptive distributed source coding.

    PubMed

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
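
    A minimal sketch of the encoder side of such a scheme is given below (the decoder's sum-product recovery is omitted for brevity). The parity-check matrix, block length, and doping positions are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parity-check matrix (a real system would use an LDPC code matched to the
# correlation model); n is the block length, k the number of syndrome bits.
n, k = 12, 6
H = rng.integers(0, 2, size=(k, n))

def sw_encode(x, doping_idx):
    """Syndrome-based Slepian-Wolf encoding sketch: send s = H x (mod 2) plus a
    few uncoded source bits ("doping" bits) at the given positions."""
    syndrome = H.dot(x) % 2
    return syndrome, x[doping_idx]

x = rng.integers(0, 2, size=n)           # one source block
doping_idx = np.array([0, 5, 10])        # positions transmitted uncoded
syndrome, doping_bits = sw_encode(x, doping_idx)
print(syndrome, doping_bits)
# The decoder (omitted) would combine the side information, the syndrome constraint,
# and the doping bits via sum-product message passing to recover x and estimate the
# hidden dependence parameters.
```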

  11. Phase II evaluation of clinical coding schemes: completeness, taxonomy, mapping, definitions, and clarity. CPRI Work Group on Codes and Structures.

    PubMed

    Campbell, J R; Carpenter, P; Sneiderman, C; Cohn, S; Chute, C G; Warren, J

    1997-01-01

    To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for "parent" and "child" codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mapping from the published scheme were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p < .00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56, UMLS 3.17; READ 2.14, *p < .005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%, *p < .00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%, *p < .004) associated with a loss of clarity. No major terminology source can lay claim to being the ideal resource for a computer-based patient record. However, based upon this analysis of releases for April 1995, SNOMED International is considerably more complete, has a compositional nature and a richer taxonomy. It suffers from less clarity, resulting from a lack of syntax and evolutionary changes in its coding scheme. READ has greater clarity and better mapping to administrative schemes (ICD-10 and OPCS-4), is rapidly changing and is less complete. UMLS is a rich lexical resource, with mappings to many source vocabularies. It provides definitions for many of its terms. However, due to the varying granularities and purposes of its source schemes, it has limitations for representation of clinical concepts within a computer-based patient record.

  12. BSDWormer; an Open Source Implementation of a Poisson Wavelet Multiscale Analysis for Potential Fields

    NASA Astrophysics Data System (ADS)

    Horowitz, F. G.; Gaede, O.

    2014-12-01

    Wavelet multiscale edge analysis of potential fields (a.k.a. "worms") has been known since Moreau et al. (1997) and was independently derived by Hornby et al. (1999). The technique is useful for producing a scale-explicit overview of the structures beneath a gravity or magnetic survey, including establishing the location and estimating the attitude of surface features, as well as incorporating information about the geometric class (point, line, surface, volume, fractal) of the underlying sources — in a fashion much like traditional structural indices from Euler solutions albeit with better areal coverage. Hornby et al. (2002) show that worms form the locally highest concentration of horizontal edges of a given strike — which in conjunction with the results from Mallat and Zhong (1992) induces a (non-unique!) inversion where the worms are physically interpretable as lateral boundaries in a source distribution that produces a close approximation of the observed potential field. The technique has enjoyed widespread adoption and success in the Australian mineral exploration community — including "ground truth" via successfully drilling structures indicated by the worms. Unfortunately, to our knowledge, all implementations of the code to calculate the worms/multiscale edges (including Horowitz' original research code) are either part of commercial software packages, or have copyright restrictions that impede the use of the technique by the wider community. The technique is completely described mathematically in Hornby et al. (1999) along with some later publications. This enables us to re-implement from scratch the code required to calculate and visualize the worms. We are freely releasing the results under an (open source) BSD two-clause software license. A git repository is available at . We will give an overview of the technique, show code snippets using the codebase, and present visualization results for example datasets (including the Surat basin of Australia, and the Lake Ontario region of North America). We invite you to join us in creating and using the best worming software for potential fields in existence — as both gratis and libre software!
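
    The core computation behind the worms can be sketched in a few lines (a generic illustration, not the BSDWormer code itself): upward-continue the gridded field to a set of heights using the standard wavenumber-domain operator exp(-|k|h), take the horizontal-gradient magnitude at each height, and then track the ridges (local maxima) of those maps across scale. The grid spacing and heights below are placeholders.

```python
import numpy as np

def upward_continue(field, dx, height):
    """Upward-continue a gridded potential field by `height` using the standard
    wavenumber-domain operator exp(-|k| h)."""
    ny, nx = field.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    kmag = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(field) * np.exp(-kmag * height)))

def multiscale_edges(field, dx, heights):
    """Horizontal-gradient magnitude of the field at each continuation height;
    the ridges (local maxima) of these maps, linked across heights, trace the worms."""
    edges = {}
    for h in heights:
        up = upward_continue(field, dx, h)
        gy, gx = np.gradient(up, dx)
        edges[h] = np.hypot(gx, gy)
    return edges

# Example on a synthetic anomaly grid (100 m spacing, four continuation heights).
grid = np.random.rand(128, 128)
worm_maps = multiscale_edges(grid, dx=100.0, heights=[200.0, 400.0, 800.0, 1600.0])
```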

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacFarlane, R. E.

    An accurate representation of the scattering of neutrons by the materials used to build cold sources at neutron scattering facilities is important for the initial design and optimization of a cold source, and for the analysis of experimental results obtained using the cold source. In practice, this requires a good representation of the physics of scattering from the material, a method to convert this into observable quantities (such as scattering cross sections), and a method to use the results in a neutron transport code (such as the MCNP Monte Carlo code). At Los Alamos, the authors have been developing these capabilities over the last ten years. The final set of cold-moderator evaluations, together with evaluations for conventional moderator materials, was released in 1994. These materials have been processed into MCNP data files using the NJOY Nuclear Data Processing System. Over the course of this work, they were able to develop a new module for NJOY called LEAPR based on the LEAP + ADDELT code from the UK as modified by D.J. Picton for cold-moderator calculations. Much of the physics for methane came from Picton's work. The liquid hydrogen work was originally based on a code using the Young-Koppel approach that went through a number of hands in Europe (including Rolf Neef and Guy Robert). It was generalized and extended for LEAPR, and depends strongly on work by Keinert and Sax of the University of Stuttgart. Thus, their collection of cold-moderator scattering kernels is truly an international effort, and they are glad to be able to return the enhanced evaluations and processing techniques to the international community. In this paper, they give sections on the major cold moderator materials (namely, solid methane, liquid methane, and liquid hydrogen) using each section to introduce the relevant physics for that material and to show typical results.

  14. Practical color vision tests for air traffic control applicants: en route center and terminal facilities.

    PubMed

    Mertens, H W; Milburn, N J; Collins, W E

    2000-12-01

    Two practical color vision tests were developed and validated for use in screening Air Traffic Control Specialist (ATCS) applicants for work at en route center or terminal facilities. The development of the tests involved careful reproduction/simulation of color-coded materials from the most demanding, safety-critical color task performed in each type of facility. The tests were evaluated using 106 subjects with normal color vision and 85 with color vision deficiency. The en route center test, named the Flight Progress Strips Test (FPST), required the identification of critical red/black coding in computer printing and handwriting on flight progress strips. The terminal option test, named the Aviation Lights Test (ALT), simulated red/green/white aircraft lights that must be identified in night ATC tower operations. Color-coding is a non-redundant source of safety-critical information in both tasks. The FPST was validated by direct comparison of responses to strip reproductions with responses to the original flight progress strips and a set of strips selected independently. Validity was high; Kappa = 0.91 with original strips as the validation criterion and 0.86 with different strips. The light point stimuli of the ALT were validated physically with a spectroradiometer. The reliabilities of the FPST and ALT were estimated with Cronbach's alpha as 0.93 and 0.98, respectively. The high job-relevance, validity, and reliability of these tests increase the effectiveness and fairness of ATCS color vision testing.

  15. Data Bank 5 - Origin and Destination Survey City/Airport Nomenclature : fourth quarter : [2006-01

    DOT National Transportation Integrated Search

    2006-01-01

    This CD presents the letter alphabetic codes, numeric codes, full name spelling (up to 30 characters), abbreviated name spelling (up to 20 characters), and geographic coordinates for all cities in flight itineraries reported in the Passenger Origin a...

  16. Impacts of DNAPL Source Treatment: Experimental and Modeling Assessment of the Benefits of Partial DNAPL Source Removal

    DTIC Science & Technology

    2009-09-01

    nuclear industry for conducting performance assessment calculations. The analytical FORTRAN code for the DNAPL source function, REMChlor, was...project. The first was to apply existing deterministic codes, such as T2VOC and UTCHEM, to the DNAPL source zone to simulate the remediation processes...but describe the spatial variability of source zones unlike one-dimensional flow and transport codes that assume homogeneity. The Lagrangian models

  17. History of Science in the Physics Curriculum: A Directed Content Analysis of Historical Sources

    NASA Astrophysics Data System (ADS)

    Seker, Hayati; Guney, Burcu G.

    2012-05-01

    Although history of science is a potential resource for instructional materials, teachers tend not to use historical materials in their lessons. Studies have shown that instructional materials should be adaptable and consistent with the curriculum. This study aims to examine the alignment between history of science and the curriculum in the light of the facilitator model on the use of history of science in science teaching, and to expose possible difficulties in preparing historical materials. For this purpose, a qualitative content analysis method was employed. Codes and themes were defined beforehand, with respect to the levels and sublevels of the model. The analysis revealed several problems with the alignment of historical sources for the physics curriculum: limited information about scientists' personal lives, the difficulty of linking with content knowledge, the lack of emphasis on scientific process in the physics curriculum, differences between chronology and the sequence of topics, and the lack of information about scientists' reasoning. Based on the findings of the analysis, it would be difficult to use original historical sources directly; educators are needed to simplify historical knowledge within a pedagogical perspective. There is a need for historical sources, like the Harvard Case Histories in Experimental Science, since historical information appropriate to the curriculum objectives can only be obtained by simplifying the complex original information. The curriculum should also leave opportunities for educators interested in history of science, even if historical sources provide a legitimate amount of information for every concept in the curriculum.

  18. Development and Use of Health-Related Technologies in Indigenous Communities: Critical Review.

    PubMed

    Jones, Louise; Jacklin, Kristen; O'Connell, Megan E

    2017-07-20

    Older Indigenous adults encounter multiple challenges as their age intersects with health inequities. Research suggests that a majority of older Indigenous adults prefer to age in place, and they will need culturally safe assistive technologies to do so. The aim of this critical review was to examine literature concerning use, adaptation, and development of assistive technologies for health purposes by Indigenous peoples. Working within Indigenous research methodologies and from a decolonizing approach, searches of peer-reviewed academic and gray literature dated to February 2016 were conducted using keywords related to assistive technology and Indigenous peoples. Sources were reviewed and coded thematically. Of the 34 sources captured, only 2 concerned technology specifically for older Indigenous adults. Studies detailing technology with Indigenous populations of all ages originated primarily from Canada (n=12), Australia (n=10), and the United States (n=9) and were coded to four themes: meaningful user involvement and community-based processes in development, the digital divide, Indigenous innovation in technology, and health technology needs as holistic and interdependent. A key finding is the necessity of meaningful user involvement in technology development, especially in communities struggling with the digital divide. In spite of, or perhaps because of this divide, Indigenous communities are enthusiastically adapting mobile technologies to suit their needs in creative, culturally specific ways. This enthusiasm and creativity, coupled with the extensive experience many Indigenous communities have with telehealth technologies, presents opportunity for meaningful, culturally safe development processes. ©Louise Jones, Kristen Jacklin, Megan E O'Connell. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 20.07.2017.

  19. Optical image encryption based on real-valued coding and subtracting with the help of QR code

    NASA Astrophysics Data System (ADS)

    Deng, Xiaopeng

    2015-08-01

    A novel optical image encryption based on real-valued coding and subtracting is proposed with the help of a quick response (QR) code. In the encryption process, the original image to be encoded is first transformed into the corresponding QR code, and then the QR code is encoded into two phase-only masks (POMs) by using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone-collected results show that the method is feasible and has strong tolerance to noise, to phase differences, and to the ratio between the intensities of the two decryption light beams.

  20. Phase II Evaluation of Clinical Coding Schemes

    PubMed Central

    Campbell, James R.; Carpenter, Paul; Sneiderman, Charles; Cohn, Simon; Chute, Christopher G.; Warren, Judith

    1997-01-01

    Objective: To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). Methods: The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for “parent” and “child” codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mapping from the published scheme were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. Results: SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p < .00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56; UMLS 3.17; READ 2.14, *p < .005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%, *p < .00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%, *p < .004) associated with a loss of clarity. Conclusion: No major terminology source can lay claim to being the ideal resource for a computer-based patient record. However, based upon this analysis of releases for April 1995, SNOMED International is considerably more complete, has a compositional nature and a richer taxonomy. It suffers from less clarity, resulting from a lack of syntax and evolutionary changes in its coding scheme. READ has greater clarity and better mapping to administrative schemes (ICD-10 and OPCS-4), is rapidly changing and is less complete. UMLS is a rich lexical resource, with mappings to many source vocabularies. It provides definitions for many of its terms. However, due to the varying granularities and purposes of its source schemes, it has limitations for representation of clinical concepts within a computer-based patient record. PMID:9147343

  1. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    NASA Astrophysics Data System (ADS)

    Guillemot, Christine; Siohan, Pierre

    2005-12-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper surveys recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. The survey concludes with performance illustrations using real image and video decoding systems.

  2. PFMCal : Photonic force microscopy calibration extended for its application in high-frequency microrheology

    NASA Astrophysics Data System (ADS)

    Butykai, A.; Domínguez-García, P.; Mor, F. M.; Gaál, R.; Forró, L.; Jeney, S.

    2017-11-01

    The present document is an update of the previously published MatLab code for the calibration of optical tweezers in the high-resolution detection of the Brownian motion of non-spherical probes [1]. In this instance, an alternative version of the original code, based on the same physical theory [2], but focused on the automation of the calibration of measurements using spherical probes, is outlined. The newly added code is useful for high-frequency microrheology studies, where the probe radius is known but the viscosity of the surrounding fluid may not be. This extended calibration methodology is automatic, without the need for a user interface. A code for calibration by means of thermal noise analysis [3] is also included; this is a method that can be applied when using viscoelastic fluids if the trap stiffness is previously estimated [4]. The new code can be executed in MatLab and using GNU Octave. Program Files doi:http://dx.doi.org/10.17632/s59f3gz729.1 Licensing provisions: GPLv3 Programming language: MatLab 2016a (MathWorks Inc.) and GNU Octave 4.0 Operating system: Linux and Windows. Supplementary material: A new document README.pdf includes basic running instructions for the new code. Journal reference of previous version: Computer Physics Communications, 196 (2015) 599 Does the new version supersede the previous version?: No. It adds alternative but compatible code while providing similar calibration factors. Nature of problem (approx. 50-250 words): The original code uses a MatLab-provided user's interface, which is not available in GNU Octave, and cannot be used outside of proprietary software such as MatLab. In addition, the calibration of spherical probes needs an automatic method when processing large amounts of data for microrheology. Solution method (approx. 50-250 words): The new code can be executed in the latest version of MatLab and using GNU Octave, a free and open-source alternative to MatLab. This code generates an automatic calibration process which requires only that the input data be specified in the main script. Additionally, we include a calibration method based on thermal noise statistics, which can be used with viscoelastic fluids if the trap stiffness is previously estimated. Reasons for the new version: This version extends the functionality of PFMCal for the particular case of spherical probes and unknown fluid viscosities. The extended code is automatic, works on different operating systems, and is compatible with GNU Octave. Summary of revisions: The original MatLab program in the previous version, which is executed by PFMCal.m, is not changed. Here, we have added two additional main files named PFMCal_auto.m and PFMCal_histo.m, which implement automatic calculations of the calibration process and calibration through Boltzmann statistics, respectively. The process of calibration using this code for spherical beads is described in the README.pdf file provided in the new code submission. Here, we obtain different calibration factors, β (given in μm/V), according to [2], related to two statistical quantities: the mean-squared displacement (MSD), βMSD, and the velocity autocorrelation function (VAF), βVAF. Using that methodology, the trap stiffness, k, and the zero-shear viscosity of the fluid, η, can be calculated if the value of the particle's radius, a, is previously known.
For comparison, we include in the extended code the method of calibration using the corner frequency of the power-spectral density (PSD) [5], providing a calibration factor βPSD. Besides, with the prior estimation of the trap stiffness, along with the known value of the particle's radius, we can use thermal noise statistics to obtain calibration factors, β, according to the quadratic form of the optical potential, βE, and related to the Gaussian distribution of the bead's positions, βσ2. This method has been demonstrated to be applicable to the calibration of optical tweezers when using non-Newtonian viscoelastic polymeric liquids [4]. An example of the results using this calibration process is summarized in Table 1. Using the data provided in the new code submission, for water and acetone fluids, we calculate all the calibration factors by using the original PFMCal.m and by the new non-GUI code PFMCal_auto.m and PFMCal_histo.m. Regarding the new code, PFMCal_auto.m returns η, k, βMSD, βVAF and βPSD, while PFMCal_histo.m provides βσ2 and βE. Table 1 shows how we obtain the expected viscosity of the two fluids at this temperature and how the different methods provide good agreement between trap stiffnesses and calibration factors. Additional comments including Restrictions and Unusual features (approx. 50-250 words): The original code, PFMCal.m, runs under MatLab using the Statistics Toolbox. The extended code, PFMCal_auto.m and PFMCal_histo.m, can be executed without modification using MatLab or GNU Octave. The code has been tested in Linux and Windows operating systems.
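
    The thermal-noise (Boltzmann statistics) calibration mentioned above rests on the equipartition theorem. As a rough illustration of the idea only (PFMCal itself is MatLab/Octave code; the sketch below is an independent Python rendering with invented function names), the trap stiffness follows from the position variance, and, once the stiffness is known, a volts-to-metres factor follows from the variance of the raw detector signal.

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def stiffness_from_positions(x_m, temperature):
    """Equipartition: (1/2) k <x^2> = (1/2) kB T, so k = kB T / Var(x)
    for calibrated positions x given in metres."""
    return kB * temperature / np.var(x_m)

def beta_from_voltages(x_volts, stiffness, temperature):
    """If the trap stiffness is already known, the volts-to-metres factor beta
    follows from Var(beta * x_V) = kB T / k."""
    return np.sqrt(kB * temperature / (stiffness * np.var(x_volts)))

# Example with synthetic data: a 1e-6 N/m trap at 295 K read out at 2 V per micron.
T, k_true, volts_per_metre = 295.0, 1e-6, 2.0e6
x_m = np.random.normal(0.0, np.sqrt(kB * T / k_true), size=100_000)
beta = beta_from_voltages(x_m * volts_per_metre, k_true, T)   # roughly 0.5e-6 m/V
```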

  3. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third-party library to provide hydrologic flow, energy transport, and biogeochemical capability to the community land model, CLM, part of the open-source community earth system model (CESM) for climate. In this presentation, the advantages and disadvantages of open source software development in support of geoscience research at government laboratories, universities, and the private sector are discussed. Since the code is open-source (i.e. it's transparent and readily available to competitors), the PFLOTRAN team's development strategy within a competitive research environment is presented. Finally, the developers discuss their approach to object-oriented programming and the leveraging of modern Fortran in support of collaborative geoscience research as the Fortran standard evolves among compiler vendors.

  4. Development of Eulerian Code Modeling for ICF Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, Paul A.

    2014-02-27

    One of the most pressing unexplained phenomena standing in the way of ICF ignition is understanding mix and how it interacts with burn. Experiments were designed and fielded as part of the Defect-Induced Mix Experiment (DIME) project to obtain data about the extent of material mix and how this mix influenced burn. Experiments on the Omega laser and National Ignition Facility (NIF) provided detailed data for comparison to the Eulerian code RAGE. The Omega experiments were able to resolve the mix and provide “proof of principle” support for subsequent NIF experiments, which were fielded from July 2012 through June 2013. The Omega shots were fired at least once per year between 2009 and 2012. RAGE was not originally designed to model inertial confinement fusion (ICF) implosions. It still lacks lasers, so the code has been validated using an energy source. To test RAGE, the simulation output is compared to data by means of postprocessing tools that were developed. Here, the various postprocessing tools are described with illustrative examples.

  5. Functional Programming with C++ Template Metaprograms

    NASA Astrophysics Data System (ADS)

    Porkoláb, Zoltán

    Template metaprogramming is an emerging new direction of generative programming. With clever definitions of templates we can force the C++ compiler to execute algorithms at compilation time. Among the application areas of template metaprograms are expression templates, static interface checking, code optimization with adaption, language embedding and active libraries. However, as template metaprogramming was not an original design goal, the C++ language is not capable of elegant expression of metaprograms. The complicated syntax leads to the creation of code that is hard to write, understand and maintain. Although template metaprogramming has a strong relationship with functional programming, this is not reflected in the language syntax and existing libraries. In this paper we give a short and incomplete introduction to C++ templates and the basics of template metaprogramming. We will highlight the role of template metaprograms, and some important and widely used idioms. We give an overview of the possible application areas as well as debugging and profiling techniques. We suggest a pure functional style programming interface for C++ template metaprograms in the form of embedded Haskell code which is transformed to standard-compliant C++ source.

  6. SRGULL - AN ADVANCED ENGINEERING MODEL FOR THE PREDICTION OF AIRFRAME INTEGRATED SCRAMJET CYCLE PERFORMANCE

    NASA Technical Reports Server (NTRS)

    Walton, J. T.

    1994-01-01

    The development of a single-stage-to-orbit aerospace vehicle intended to be launched horizontally into low Earth orbit, such as the National Aero-Space Plane (NASP), has concentrated on the use of the supersonic combustion ramjet (scramjet) propulsion cycle. SRGULL, a scramjet cycle analysis code, is an engineer's tool capable of nose-to-tail, hydrogen-fueled, airframe-integrated scramjet simulation in a real gas flow with equilibrium thermodynamic properties. This program facilitates initial estimates of scramjet cycle performance by linking a two-dimensional forebody, inlet and nozzle code with a one-dimensional combustor code. Five computer codes (SCRAM, SEAGUL, INLET, Program HUD, and GASH) originally developed at NASA Langley Research Center in support of hypersonic technology are integrated in this program to analyze changing flow conditions. The one-dimensional combustor code is based on the combustor subroutine from SCRAM and the two-dimensional coding is based on an inviscid Euler program (SEAGUL). Kinetic energy efficiency input for sidewall area variation modeling can be calculated by the INLET program code. At the completion of inviscid component analysis, Program HUD, an integral boundary layer code based on the Spalding-Chi method, is applied to determine the friction coefficient which is then used in a modified Reynolds Analogy to calculate heat transfer. Real gas flow properties such as flow composition, enthalpy, entropy, and density are calculated by the subroutine GASH. Combustor input conditions are taken from one-dimensionalizing the two-dimensional inlet exit flow. The SEAGUL portions of this program are limited to supersonic flows, but the combustor (SCRAM) section can handle supersonic and dual-mode operation. SRGULL has been compared to scramjet engine tests with excellent results. SRGULL was written in FORTRAN 77 on an IBM PC compatible using IBM's FORTRAN/2 or Microway's NDP386 F77 compiler. The program is fully user interactive, but can also run in batch mode. It operates under the UNIX, VMS, NOS, and DOS operating systems. The source code is not directly compatible with all PC compilers (e.g., Lahey or Microsoft FORTRAN) due to block and segment size requirements. SRGULL executable code requires about 490K RAM and a math coprocessor on PCs. The SRGULL program was developed in 1989, although the component programs originated in the 1960's and 1970's. IBM, IBM PC, and DOS are registered trademarks of International Business Machines. VMS is a registered trademark of Digital Equipment Corporation. UNIX is a registered trademark of Bell Laboratories. NOS is a registered trademark of Control Data Corporation.

  7. Multidimensional incremental parsing for universal source coding.

    PubMed

    Bae, Soo Hyun; Juang, Biing-Hwang

    2008-10-01

    A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes: maximum decimation matching, a hierarchical structure of multidimensional source coding, and dictionary augmentation. As a counterpart of the longest match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, the underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended to the dictionary at each coding epoch, thus requiring the transmission of a substantial amount of information to the decoder. The hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower dimensional coding procedures in the scheme. In regard to universal lossy source coders, we propose two distortion functions, the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP; one is lossless and the others are lossy. The lossless image compression algorithm does not perform better than Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, but the images produced by the one with the local minimax distortion have good perceptual fidelity compared with other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.
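
    For readers unfamiliar with the one-dimensional procedure that MDIP generalizes, the sketch below shows classic Lempel-Ziv (LZ78-style) incremental parsing, where each new phrase is the shortest prefix of the remaining input not yet in the dictionary; the multidimensional algorithm of the paper replaces this with maximum decimation matching over patches and is not reproduced here.

```python
def incremental_parse(seq):
    """One-dimensional Lempel-Ziv (LZ78-style) incremental parsing: each phrase is
    the shortest prefix of the remaining input that is not yet in the dictionary.
    Returns (longest-match index, innovation symbol) pairs."""
    dictionary = {"": 0}
    phrases = []
    current = ""
    for symbol in seq:
        candidate = current + symbol
        if candidate in dictionary:
            current = candidate
        else:
            dictionary[candidate] = len(dictionary)
            phrases.append((dictionary[current], symbol))
            current = ""
    if current:                                  # leftover prefix at the end of the input
        phrases.append((dictionary[current], ""))
    return phrases

print(incremental_parse("abababcababc"))
```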

  8. Calculation of spherical harmonics and Wigner d functions by FFT. Applications to fast rotational matching in molecular replacement and implementation into AMoRe.

    PubMed

    Trapani, Stefano; Navaza, Jorge

    2006-07-01

    The FFT calculation of spherical harmonics, Wigner D matrices and rotation function has been extended to all angular variables in the AMoRe molecular replacement software. The resulting code avoids singularity issues arising from recursive formulas, performs faster and produces results with at least the same accuracy as the original code. The new code aims at permitting accurate and more rapid computations at high angular resolution of the rotation function of large particles. Test calculations on the icosahedral IBDV VP2 subviral particle showed that the new code performs on the average 1.5 times faster than the original code.

  9. An Efficient Variable Length Coding Scheme for an IID Source

    NASA Technical Reports Server (NTRS)

    Cheung, K. -M.

    1995-01-01

    A scheme is examined for using two alternating Huffman codes to encode a discrete independent and identically distributed source with a dominant symbol. This combined strategy, or alternating runlength Huffman (ARH) coding, was found to be more efficient than ordinary coding in certain circumstances.
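
    As a rough illustration of the general idea (not necessarily the exact ARH construction in the report), the sketch below run-length encodes runs of the dominant symbol and then Huffman-codes the resulting tokens; the token names and toy input are invented for the example.

```python
import heapq
from collections import Counter
from itertools import groupby

def huffman_code(freqs):
    """Build a Huffman code (token -> bit string) from a frequency table."""
    heap = [(w, i, {t: ""}) for i, (t, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:
        return {t: "0" for t in freqs}
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {t: "0" + b for t, b in c1.items()}
        merged.update({t: "1" + b for t, b in c2.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]

def arh_like_encode(data, dominant):
    """Run-length encode runs of the dominant symbol, then Huffman-code the
    resulting tokens (other symbols are kept as single-symbol tokens)."""
    tokens = []
    for sym, group in groupby(data):
        run = len(list(group))
        if sym == dominant:
            tokens.append(("run", run))
        else:
            tokens.extend([(sym, 1)] * run)
    codebook = huffman_code(Counter(tokens))
    return "".join(codebook[t] for t in tokens), codebook

bits, codebook = arh_like_encode("aaaabaaaaacaaab", dominant="a")
print(bits, codebook)
```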

  10. A national EHR strategy preparedness characterisation model and its application in the South-East European region.

    PubMed

    Orfanidis, Leonidas; Bamidis, Panagiotis; Eaglestone, Barry

    2006-01-01

    This paper is concerned with modelling national approaches towards electronic health record systems (NEHRS) development. A model framework is produced stepwise that allows for the characterisation of the preparedness and readiness of a country to develop an NEHRS. Secondary data from published reports are considered for the creation of the model. Such sources are identified as mostly originating from within a sample of five developed countries. Factors arising from these sources are identified, coded and scaled, so as to allow for a quantitative application of the model. Instantiation of the latter for the case of the five developed countries is contrasted with the set of countries from South East Europe (SEE). The likely importance and validity of this modelling approach are discussed, using the Delphi method.

  11. Evaluating a Control System Architecture Based on a Formally Derived AOCS Model

    NASA Astrophysics Data System (ADS)

    Ilic, Dubravka; Latvala, Timo; Varpaaniemi, Kimmo; Vaisanen, Pauli; Troubitsyna, Elena; Laibinis, Linas

    2010-08-01

    Attitude & Orbit Control System (AOCS) refers to a wider class of control systems which are used to determine and control the attitude of the spacecraft while in orbit, based on the information obtained from various sensors. In this paper, we propose an approach to evaluate a typical (yet somewhat simplified) AOCS architecture using formal development - based on the Event-B method. As a starting point, an Ada specification of the AOCS is translated into a formal specification and further refined to incorporate all the details of its original source code specification. This way we are able not only to evaluate the Ada specification by expressing and verifying specific system properties in our formal models, but also to determine how well the chosen modelling framework copes with the level of detail required for an actual implementation and code generation from the derived models.

  12. Source Code Plagiarism--A Student Perspective

    ERIC Educational Resources Information Center

    Joy, M.; Cosma, G.; Yau, J. Y.-K.; Sinclair, J.

    2011-01-01

    This paper considers the problem of source code plagiarism by students within the computing disciplines and reports the results of a survey of students in Computing departments in 18 institutions in the U.K. This survey was designed to investigate how well students understand the concept of source code plagiarism and to discover what, if any,…

  13. Survey Of Lossless Image Coding Techniques

    NASA Astrophysics Data System (ADS)

    Melnychuck, Paul W.; Rabbani, Majid

    1989-04-01

    Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence, their higher pel correlation leads to a greater removal of image redundancy.
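
    A minimal example of the predictive-coding idea mentioned above (a generic illustration, not a specific algorithm from the survey): predict each pixel from its left neighbour, keep the integer residuals, which would then be entropy coded (e.g. with Huffman or arithmetic coding, not shown), and invert the predictor exactly on decoding.

```python
import numpy as np

def left_predict_residuals(image):
    """Predict each pixel from its left neighbour and return integer residuals
    (the first column is kept as-is); the residuals are what gets entropy coded."""
    img = image.astype(np.int32)
    pred = np.zeros_like(img)
    pred[:, 1:] = img[:, :-1]
    return img - pred

def reconstruct(residuals):
    """Invert the predictor exactly: a cumulative sum along each row."""
    return np.cumsum(residuals, axis=1)

img = np.random.randint(0, 256, size=(4, 6))
res = left_predict_residuals(img)
assert np.array_equal(reconstruct(res), img)   # lossless round trip
```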

  14. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  15. Classification and simulation of stereoscopic artifacts in mobile 3DTV content

    NASA Astrophysics Data System (ADS)

    Boev, Atanas; Hollosi, Danilo; Gotchev, Atanas; Egiazarian, Karen

    2009-02-01

    We identify, categorize and simulate artifacts which might occur during delivery of stereoscopic video to mobile devices. We consider the stages of the 3D video delivery dataflow: content creation, conversion to the desired format (multiview or source-plus-depth), coding/decoding, transmission, and visualization on a 3D display. Human 3D vision works by assessing various depth cues - accommodation, binocular depth cues, pictorial cues and motion parallax. As a consequence, any artifact which modifies these cues impairs the quality of a 3D scene. The perceptibility of each artifact can be estimated through subjective tests. The material for such tests needs to contain various artifacts with different amounts of impairment. We present a system for simulation of these artifacts. The artifacts are organized in groups with similar origins, and each group is simulated by a block in a simulation channel. The channel introduces the following groups of artifacts: sensor limitations, geometric distortions caused by camera optics, spatial and temporal misalignments between video channels, spatial and temporal artifacts caused by coding, transmission losses, and visualization artifacts. For the case of source-plus-depth representation, artifacts caused by format conversion are added as well.

  16. Investigation of the Impact of the Upstream Induction Zone on LIDAR Measurement Accuracy for Wind Turbine Control Applications using Large-Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Simley, Eric; Y Pao, Lucy; Gebraad, Pieter; Churchfield, Matthew

    2014-06-01

    Several sources of error exist in lidar measurements for feedforward control of wind turbines including the ability to detect only radial velocities, spatial averaging, and wind evolution. This paper investigates another potential source of error: the upstream induction zone. The induction zone can directly affect lidar measurements and presents an opportunity for further decorrelation between upstream wind and the wind that interacts with the rotor. The impact of the induction zone is investigated using the combined CFD and aeroelastic code SOWFA. Lidar measurements are simulated upstream of a 5 MW turbine rotor and the true wind disturbances are found using a wind speed estimator and turbine outputs. Lidar performance in the absence of an induction zone is determined by simulating lidar measurements and the turbine response using the aeroelastic code FAST with wind inputs taken far upstream of the original turbine location in the SOWFA wind field. Results indicate that while measurement quality strongly depends on the amount of wind evolution, the induction zone has little effect. However, the optimal lidar preview distance and circular scan radius change slightly due to the presence of the induction zone.

  17. Algorithms and physical parameters involved in the calculation of model stellar atmospheres

    NASA Astrophysics Data System (ADS)

    Merlo, D. C.

    This contribution summarizes the Doctoral Thesis presented at Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba for the degree of PhD in Astronomy. We analyze some algorithms and physical parameters involved in the calculation of model stellar atmospheres, such as atomic partition functions, functional relations connecting gaseous and electronic pressure, molecular formation, temperature distribution, chemical compositions, Gaunt factors, atomic cross-sections and scattering sources, as well as computational codes for calculating models. Special attention is paid to the integration of the hydrostatic equation. We compare our results with those obtained by other authors, finding reasonable agreement. We put effort into implementing methods that modify the originally adopted temperature distribution in the atmosphere in order to obtain a constant energy flux throughout. We find limitations and we correct numerical instabilities. We integrate the transfer equation by directly solving the integral equation involving the source function. As a by-product, we calculate updated atomic partition functions of the light elements. Also, we discuss and enumerate carefully selected formulae for the monochromatic absorption and dispersion of some atomic and molecular species. Finally, we obtain a flexible code to calculate model stellar atmospheres.

  18. A Particle-In-Cell Gun Code for Surface-Converter H- Ion Source Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon-Golcher, Edwin; Bowers, Kevin J.

    2007-08-10

    We present the current status of a particle-in-cell with Monte Carlo collisions (PIC-MCC) gun code under development at Los Alamos for the study of surface-converter H- ion sources. The program preserves a first-principles approach to a significant extent and simulates the production processes without ad hoc models within the plasma region. Some of its features include: solution of arbitrary electrostatic and magnetostatic fields in an axisymmetric (r,z) geometry to describe the self-consistent time evolution of a plasma; simulation of a multi-species (e-, H+, H2+, H3+, H-) plasma discharge from a neutral hydrogen gas and filament-originated seed electrons; full 2-dimensional (r,z), 3-velocity (vr, vz, vφ) dynamics for all species with exact conservation of the canonical angular momentum pφ; detailed collision physics between charged particles and neutrals and the ability to represent multiple smooth (not stair-stepped) electrodes of arbitrary shape and voltage whose surfaces may be secondary-particle emitters (H- and e-). The status of this development is discussed in terms of its physics content and current implementation details.

  19. MEMOPS: data modelling and automatic code generation.

    PubMed

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.
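
    The flavour of model-driven code generation can be conveyed with a toy example (this is not the Memops framework or its UML-based model; the class, attributes, and checks below are invented): a small metadata description is turned into Python source for a data-access class with validity checking, which is then compiled and used directly.

```python
# A toy metadata description: one class with typed attributes.
MODEL = {
    "class": "Peak",
    "attributes": [
        {"name": "position",   "type": "float"},
        {"name": "height",     "type": "float"},
        {"name": "assignment", "type": "str"},
    ],
}

def generate_class(model):
    """Emit Python source for a data-access class with simple validity checks."""
    lines = [f"class {model['class']}:"]
    args = ", ".join(a["name"] for a in model["attributes"])
    lines.append(f"    def __init__(self, {args}):")
    for a in model["attributes"]:
        lines.append(f"        if not isinstance({a['name']}, {a['type']}):")
        lines.append(f"            raise TypeError('{a['name']} must be {a['type']}')")
        lines.append(f"        self.{a['name']} = {a['name']}")
    return "\n".join(lines)

source = generate_class(MODEL)
exec(source)                                   # the generated class is now usable
peak = Peak(position=7.2, height=1.5e6, assignment="HN")
print(source)
```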

  20. Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations

    DOEpatents

    Gschwind, Michael K

    2013-07-23

    Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
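
    The rollback idea can be illustrated with a hedged, non-compiler sketch: an "aggressive" variant that omits a guard, a conservative fallback, and a crude failure counter standing in for the patent's predictive mechanism. All function names here are hypothetical and do not come from the patent.

```python
# Illustrative sketch only (not the patented compiler mechanism): run a fast
# variant that may raise on inputs its optimization made unsafe, and fall back
# to a conservative variant when it fails.
def reciprocal_sum_aggressive(values):
    # Hypothetical unsafe optimization: assumes no zeros, so no guard.
    return sum(1.0 / v for v in values)

def reciprocal_sum_conservative(values):
    # Safe version: guards against the exception the fast path may raise.
    return sum(1.0 / v for v in values if v != 0)

_fast_path_failures = 0

def reciprocal_sum(values):
    global _fast_path_failures
    # Crude stand-in for a predictive mechanism: stop trying the fast path
    # after it has failed a few times.
    if _fast_path_failures < 3:
        try:
            return reciprocal_sum_aggressive(values)
        except ZeroDivisionError:
            _fast_path_failures += 1
    return reciprocal_sum_conservative(values)

print(reciprocal_sum([1.0, 2.0, 4.0]))   # fast path succeeds
print(reciprocal_sum([1.0, 0.0, 4.0]))   # rolls back to the conservative version
```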

  1. OSRP Source Repatriations-Case Studies: Brazil, Ecuador, Uruguay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenberg, Ray Jr.; Abeyta, Cristy; Matzke, Jim

    2012-07-01

    The Global Threat Reduction Initiative's (GTRI) Offsite Source Recovery Project (OSRP) began recovering excess and unwanted radioactive sealed sources (sources) in 1999. As of February 2012, the project had recovered over 30,000 sources totaling over 820,000 Ci. OSRP grew out of early efforts at Los Alamos National Laboratory (LANL) to recover disused excess Plutonium-239 (Pu-239) sources that were distributed in the 1960s and 1970s under the Atoms for Peace Program. Source recovery was initially considered a waste management activity. However, after the 9/11 terrorist attacks, the interagency community began to recognize that excess and unwanted radioactive sealed sources pose a national security threat, particularly those that lack a disposition path. After OSRP's transfer to the U.S. National Nuclear Security Administration (NNSA) to be part of GTRI, its mission was expanded to include all disused sealed sources that might require national security consideration. Recognizing the transnational threat posed by porous borders and the ubiquitous nature of sources, GTRI/OSRP repatriates U.S.-origin sources based on threat reduction prioritization criteria. For example, several recent challenging source repatriation missions have been conducted by GTRI/OSRP in South America. These include the repatriation of a significant amount of Cs-137 and other isotopes from Brazil; re-packaging of conditioned Ra-226 sources in Ecuador for future repatriation; and multilateral cooperation in the consolidation and export of Canadian, US, and Indian Co-60/Cs-137 sources from Uruguay. In addition, cooperation with regulators and private source owners in other countries presents opportunities for GTRI/OSRP to exchange best practices for managing disused sources. These positive experiences often result in long-term cooperation and information sharing with key foreign counterparts. International source recovery operations are essential to the preservation of U.S. national security interests. They are also mutually beneficial for fostering positive relationships with other governments and private industry, and demonstrate that responsible end-of-life options are given to legacy U.S.-origin sources in other countries. GTRI/OSRP does not take back sources that have a viable path for commercial disposal. Most U.S.-origin sources were sold commercially and were not provided by the US government. Below is a synopsis of cooperative efforts with Brazil, Ecuador, and Uruguay. Bilateral and multilateral efforts have been successful in removing hundreds of U.S.-origin sealed radioactive sources from Latin American countries to the U.S. As many disused sources remain in the region, and since repatriation is not always an option, GTRI will continue to work with those countries to ensure that these sources are stored securely for the long term. Successful Latin America operations should serve as a model for other regional cooperation in the repatriation of sealed sources, encouraging other source exporting countries to implement similar programs. Securing and removing sources, both domestically and internationally, is crucial to strengthening the life-cycle management of radioactive sources worldwide. Such efforts not only prevent these materials from being used maliciously, but also address public health and safety concerns, and undergird the IAEA Code of Conduct on the Safety and Security of Radioactive Sources. (authors)

  2. The impact of conventional dietary intake data coding methods on foods typically consumed by low-income African-American and White urban populations.

    PubMed

    Mason, Marc A; Fanelli Kuczmarski, Marie; Allegro, Deanne; Zonderman, Alan B; Evans, Michele K

    2015-08-01

    Analysing dietary data to capture how individuals typically consume foods is dependent on the coding variables used. Individual foods consumed simultaneously, like coffee with milk, are given codes to identify these combinations. Our literature review revealed a lack of discussion about using combination codes in analysis. The present study identified foods consumed at mealtimes and by race when combination codes were or were not utilized. Duplicate analysis methods were performed on separate data sets. The original data set consisted of all foods reported; each food was coded as if it was consumed individually. The revised data set was derived from the original data set by first isolating coded foods consumed as individual items from those foods consumed simultaneously and assigning a code to designate a combination. Foods assigned a combination code, like pancakes with syrup, were aggregated and associated with a food group, defined by the major food component (i.e. pancakes), and then appended to the isolated coded foods. Healthy Aging in Neighborhoods of Diversity across the Life Span study. African-American and White adults with two dietary recalls (n 2177). Differences existed in lists of foods most frequently consumed by mealtime and race when comparing results based on original and revised data sets. African Americans reported consumption of sausage/luncheon meat and poultry, while ready-to-eat cereals and cakes/doughnuts/pastries were reported by Whites on recalls. Use of combination codes provided more accurate representation of how foods were consumed by populations. This information is beneficial when creating interventions and exploring diet-health relationships.
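
    A minimal sketch of the coding distinction described above, using hypothetical food records rather than study data: the original coding counts every reported food individually, while combination coding collapses foods eaten together under their major component.

```python
# Hypothetical sketch of the "combination code" idea: foods reported as eaten
# together are collapsed into one record keyed by the major component.
from collections import Counter

recalls = [
    {"meal": "breakfast", "foods": ["pancakes", "syrup"], "major": "pancakes"},
    {"meal": "breakfast", "foods": ["coffee", "milk"], "major": "coffee"},
    {"meal": "lunch", "foods": ["sausage"], "major": "sausage"},
]

original = Counter()   # every food coded as if consumed individually
revised = Counter()    # combinations counted once, under the major component
for record in recalls:
    original.update(record["foods"])
    revised[record["major"]] += 1

print("original coding:", original.most_common())
print("combination coding:", revised.most_common())
```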

  3. SU-E-T-212: Comparison of TG-43 Dosimetric Parameters of Low and High Energy Brachytherapy Sources Obtained by MCNP Code Versions of 4C, X and 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zehtabian, M; Zaker, N; Sina, S

    2015-06-15

    Purpose: Different versions of the MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP code in the dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters such as dose rate constant, radial dose function, and anisotropy function of different brachytherapy sources, i.e. Pd-103, I-125, Ir-192, and Cs-137, were calculated in a water phantom. The results obtained by three versions of Monte Carlo codes (MCNP4C, MCNPX, MCNP5) were compared for low and high energy brachytherapy sources. Then the cross section library of the MCNP4C code was changed to ENDF/B-VI release 8, which is used in the MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code were compared with the other codes. Results: The results of these investigations indicate that for high energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However, for low energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between g(r) values calculated using MCNP4C and MCNP5 at a distance of 6 cm were found to be about 17% and 28% for I-125 and Pd-103, respectively. The results obtained with MCNP4C-revised and MCNPX were similar. However, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6 cm. Conclusion: The results indicate that using the MCNP4C code for dosimetry of low energy brachytherapy sources can cause large errors in the results. Therefore it is recommended not to use this code for low energy sources, unless its cross section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX is their cross section libraries.
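
    For orientation, the radial dose function compared above is defined in the TG-43 formalism as g(r) = [D(r)/D(r0)] * [G(r0)/G(r)] with r0 = 1 cm. The sketch below evaluates it with a point-source geometry factor G(r) = 1/r^2 and made-up dose-rate values, not actual MCNP tallies; comparing two codes would then amount to taking percent differences between their g(r) curves.

```python
import numpy as np

# TG-43 radial dose function with a point-source geometry factor G(r) = 1/r^2:
# g(r) = [D(r) / D(r0)] * [G(r0) / G(r)], normalized so g(r0) = 1 at r0 = 1 cm.
def radial_dose_function(r_cm, dose_rate, r0_cm=1.0):
    r = np.asarray(r_cm, dtype=float)
    d = np.asarray(dose_rate, dtype=float)
    d0 = np.interp(r0_cm, r, d)
    geometry_ratio = (r / r0_cm) ** 2          # G(r0)/G(r) for a point source
    return (d / d0) * geometry_ratio

# Illustrative (made-up) dose-rate falloff, not actual Monte Carlo output.
r = np.array([0.5, 1.0, 2.0, 4.0, 6.0])
dose = 1.0 / r**2 * np.exp(-0.1 * r)           # inverse-square times attenuation
print(np.round(radial_dose_function(r, dose), 4))
```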

  4. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    ERIC Educational Resources Information Center

    Kermek, Dragutin; Novak, Matija

    2016-01-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…
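
    As a toy illustration of why renaming variables defeats naive text comparison but not token-based detectors (this is not how Sherlock, JPlag or Moss are actually implemented), the sketch below normalizes identifiers before comparing token n-gram sets; all code snippets are invented.

```python
import re

KEYWORDS = {"def", "return", "for", "in", "if", "else", "while", "range", "print"}

def fingerprints(source, n=4):
    # Tokenize, then replace every non-keyword identifier with "ID" so that
    # renamed variables produce the same token stream.
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", source)
    normalized = ["ID" if re.match(r"[A-Za-z_]", t) and t not in KEYWORDS else t
                  for t in tokens]
    return {tuple(normalized[i:i + n]) for i in range(len(normalized) - n + 1)}

def similarity(code_a, code_b):
    a, b = fingerprints(code_a), fingerprints(code_b)
    return len(a & b) / max(1, len(a | b))     # Jaccard index of n-gram sets

original = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
renamed  = "def suma(values):\n    acc = 0\n    for v in values:\n        acc += v\n    return acc"
print(round(similarity(original, renamed), 2))   # stays high despite the renaming
```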

  5. Application of a Design Space Exploration Tool to Enhance Interleaver Generation

    DTIC Science & Technology

    2009-06-24

    ... [2], originally dedicated to channel coding, are currently being reused in a large set of digital communication systems (e.g. equalization) ... A design space exploration tool, originally targeting interface synthesis, is shown to be also suited to interleaver design space exploration. Our design flow can take as input ...

  6. 40 CFR 51.50 - What definitions apply to this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude data for point sources. The six codes and their definitions are: (1) Coordinate Data Source Code: The... physical piece of or a closely related set of equipment. The EPA's reporting format for a given inventory...

  7. Concentration transport calculations by an original C++ program with intermediate fidelity physics through user-defined buildings with an emphasis on release scenarios in radiological facilities

    NASA Astrophysics Data System (ADS)

    Sayre, George Anthony

    The purpose of this dissertation was to develop the C++ program Emergency Dose to calculate transport of radionuclides through indoor spaces using intermediate fidelity physics that provides improved spatial heterogeneity over well-mixed models such as MELCOR and much lower computation times than CFD codes such as FLUENT. Modified potential flow theory, which is an original formulation of potential flow theory with additions of turbulent jet and natural convection approximations, calculates spatially heterogeneous velocity fields that well-mixed models cannot predict. Other original contributions of MPFT are: (1) generation of high fidelity boundary conditions relative to well-mixed-CFD coupling methods (conflation), (2) broadening of potential flow applications to arbitrary indoor spaces previously restricted to specific applications such as exhaust hood studies, and (3) great reduction of computation time relative to CFD codes without total loss of heterogeneity. Additionally, the Lagrangian transport module, which is discussed in Sections 1.3 and 2.4, showcases an ensemble-based formulation thought to be original to interior studies. Velocity and concentration transport benchmarks against analogous formulations in COMSOL produced favorable results, with discrepancies resulting from the tetrahedral meshing used in COMSOL outperforming the Cartesian method used by Emergency Dose. A performance comparison of the concentration transport modules against MELCOR showed that Emergency Dose held advantages over the well-mixed model especially in scenarios with many interior partitions and varied source positions. A performance comparison of the velocity module against FLUENT showed that viscous drag provided the largest error between Emergency Dose and CFD velocity calculations, but that Emergency Dose's turbulent jets well approximated the corresponding CFD jets. Overall, Emergency Dose was found to provide a viable intermediate solution method for concentration transport with relatively low computation times.

  8. Distinct neural mechanisms for reading Arabic vs verbal numbers: an ERP study.

    PubMed

    Proverbio, Alice Mado; Bianco, Marco; de Benedetto, Francesco

    2018-05-12

    In this EEG/ERP study, 16 volunteers were asked to compare the numerical equality of 360 pairs of multi-digit numbers presented in Arabic or verbal format. Behavioural data showed faster and more accurate responses for digit targets, with a right hand/left hemisphere advantage only for verbal numerals. Occipito-temporal N1, peaking at approximately 180 ms, was strongly left-lateralized during verbal number processing and bilateral during digit processing. A LORETA (low resolution electromagnetic tomography) source reconstruction performed at the N1 latency stage (155-185 ms) revealed greater brain activation during coding of Arabic than of verbal stimuli. Digit perceptual coding was associated with the activation of the right angular gyrus (rAG), the left fusiform gyrus (FG, BA37), and left and right superior and medial frontal areas. N1 sources for verbal numerals included the left FG (BA37), the precuneus (BA31), the parahippocampal area and a small right prefrontal activation. In addition, verbal numerals elicited a late frontocentral negativity, possibly reflecting stimulus unfamiliarity or complexity. Overall, the data suggest distinct mechanisms for number reading through ciphers (digits) or words. Information about quantity was accessed earlier and more accurately if numbers were in a nonlinguistic code. Indeed, it can be speculated that numerosity processing would involve circuits originally involved in processing space (i.e., rAG/rIPS).

  9. Spallation neutron production and the current intra-nuclear cascade and transport codes

    NASA Astrophysics Data System (ADS)

    Filges, D.; Goldenbaum, F.; Enke, M.; Galin, J.; Herbach, C.-M.; Hilscher, D.; Jahnke, U.; Letourneau, A.; Lott, B.; Neef, R.-D.; Nünighoff, K.; Paul, N.; Péghaire, A.; Pienkowski, L.; Schaal, H.; Schröder, U.; Sterzenbach, G.; Tietze, A.; Tishchenko, V.; Toke, J.; Wohlmuther, M.

    A recent resurgence of interest in energetic proton-induced production of neutrons originates largely from the inception of projects for target stations of intense spallation neutron sources, like the planned European Spallation Source (ESS), accelerator-driven nuclear reactors, nuclear waste transmutation, and also from the application for radioactive beams. In the framework of such a neutron production, of major importance is the search for ways for the most efficient conversion of the primary beam energy into neutron production. Although the issue has been quite successfully addressed experimentally by varying the incident proton energy for various target materials and by covering a huge collection of different target geometries --providing an exhaustive matrix of benchmark data-- the ultimate challenge is to increase the predictive power of transport codes currently on the market. To scrutinize these codes, calculations of reaction cross-sections, hadronic interaction lengths, average neutron multiplicities, neutron multiplicity and energy distributions, and the development of hadronic showers are confronted with recent experimental data of the NESSI collaboration. Program packages like HERMES, LCS or MCNPX master the prediction of reaction cross-sections, hadronic interaction lengths, averaged neutron multiplicities and neutron multiplicity distributions in thick and thin targets for a wide spectrum of incident proton energies, geometrical shapes and materials of the target generally within less than 10% deviation, while production cross-section measurements for light charged particles on thin targets point out that appreciable distinctions exist within these models.

  10. The Astrophysics Source Code Library by the numbers

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Teuben, Peter; Berriman, G. Bruce; DuPrie, Kimberly; Mink, Jessica; Nemiroff, Robert; Ryan, PW; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Wallin, John; Warmels, Rein

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) was founded in 1999 by Robert Nemiroff and John Wallin. ASCL editors seek both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and add entries for the found codes to the library. Software authors can submit their codes to the ASCL as well. This ensures a comprehensive listing covering a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL is indexed by both NASA’s Astrophysics Data System (ADS) and Web of Science, making software used in research more discoverable. This presentation covers the growth in the ASCL’s number of entries, the number of citations to its entries, and in which journals those citations appear. It also discusses what changes have been made to the ASCL recently, and what its plans are for the future.

  11. Astrophysics Source Code Library: Incite to Cite!

    NASA Astrophysics Data System (ADS)

    DuPrie, K.; Allen, A.; Berriman, B.; Hanisch, R. J.; Mink, J.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P.; Wallin, J. F.

    2014-05-01

    The Astrophysics Source Code Library (ASCL, http://ascl.net/) is an on-line registry of over 700 source codes that are of interest to astrophysicists, with more being added regularly. The ASCL actively seeks out codes as well as accepting submissions from the code authors, and all entries are citable and indexed by ADS. All codes have been used to generate results published in or submitted to a refereed journal and are available either via a download site or from an identified source. In addition to being the largest directory of scientist-written astrophysics programs available, the ASCL is also an active participant in the reproducible research movement with presentations at various conferences, numerous blog posts and a journal article. This poster provides a description of the ASCL and the changes that we are starting to see in the astrophysics community as a result of the work we are doing.

  12. Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, A.; DuPrie, K.; Berriman, B.; Hanisch, R. J.; Mink, J.; Teuben, P. J.

    2013-10-01

    The Astrophysics Source Code Library (ASCL), founded in 1999, is a free on-line registry for source codes of interest to astronomers and astrophysicists. The library is housed on the discussion forum for Astronomy Picture of the Day (APOD) and can be accessed at http://ascl.net. The ASCL has a comprehensive listing that covers a significant number of the astrophysics source codes used to generate results published in or submitted to refereed journals and continues to grow. The ASCL currently has entries for over 500 codes; its records are citable and are indexed by ADS. The editors of the ASCL and members of its Advisory Committee were on hand at a demonstration table in the ADASS poster room to present the ASCL, accept code submissions, show how the ASCL is starting to be used by the astrophysics community, and take questions on and suggestions for improving the resource.

  13. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  14. Modularized seismic full waveform inversion based on waveform sensitivity kernels - The software package ASKI

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang; Lamara, Samir; Gutt, Phillip; Paffrath, Marcel

    2015-04-01

    We present a seismic full waveform inversion concept for applications ranging from seismological to engineering contexts, based on sensitivity kernels for full waveforms. The kernels are derived from Born scattering theory as the Fréchet derivatives of linearized frequency-domain full waveform data functionals, quantifying the influence of elastic earth model parameters and density on the data values. For a specific source-receiver combination, the kernel is computed from the displacement and strain field spectrum originating from the source evaluated throughout the inversion domain, as well as the Green function spectrum and its strains originating from the receiver. By storing the wavefield spectra of specific sources/receivers, they can be re-used for kernel computation for different specific source-receiver combinations, optimizing the total number of required forward simulations. In the iterative inversion procedure, the solution of the forward problem, the computation of sensitivity kernels and the derivation of a model update are held completely separate. In particular, the model description for the forward problem and the description of the inverted model update are kept independent. Hence, the resolution of the inverted model as well as the complexity of solving the forward problem can be iteratively increased (with increasing frequency content of the inverted data subset). This may regularize the overall inverse problem and optimize the computational effort of both solving the forward problem and computing the model update. The required interconnection of arbitrary unstructured volume and point grids is realized by generalized high-order integration rules and 3D-unstructured interpolation methods. The model update is inferred by solving a minimization problem in a least-squares sense, resulting in Gauss-Newton convergence of the overall inversion process. The inversion method was implemented in the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion), which provides a generalized interface to arbitrary external forward modelling codes. So far, the 3D spectral-element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995), in both Cartesian and spherical frameworks, are supported. The creation of interfaces to further forward codes is planned in the near future. ASKI is freely available under the terms of the GPL at www.rub.de/aski. Since the independent modules of ASKI must communicate via file output/input, large storage capacities need to be accessible conveniently. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion. In the presentation, we will show some aspects of the theory behind the full waveform inversion method and its practical realization by the software package ASKI, as well as synthetic and real-data applications from different scales and geometries.
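
    The model-update step described above can be illustrated with a generic damped Gauss-Newton solve on a random stand-in kernel matrix; this is only a sketch of a regularized least-squares update of the form (G^T G + eps^2 I) dm = G^T dd, not ASKI's implementation, and all sizes and values are arbitrary.

```python
import numpy as np

# Generic damped Gauss-Newton / least-squares model update from a sensitivity
# (kernel) matrix G: solve (G^T G + damping^2 I) dm = G^T dd.
def model_update(G, data_residual, damping=1e-2):
    GtG = G.T @ G
    rhs = G.T @ data_residual
    return np.linalg.solve(GtG + damping**2 * np.eye(GtG.shape[0]), rhs)

rng = np.random.default_rng(1)
n_data, n_model = 200, 50
G = rng.normal(size=(n_data, n_model))              # stand-in for waveform kernels
true_dm = rng.normal(size=n_model)
dd = G @ true_dm + 0.01 * rng.normal(size=n_data)   # synthetic data residual
dm = model_update(G, dd)
print("relative error:", np.linalg.norm(dm - true_dm) / np.linalg.norm(true_dm))
```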

  15. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    NASA Astrophysics Data System (ADS)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

    In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.

  16. Images multiplexing by code division technique

    NASA Astrophysics Data System (ADS)

    Kuo, Chung J.; Rigas, Harriett

    Spread Spectrum System (SSS) or Code Division Multiple Access System (CDMAS) has been studied for a long time, but most of the attention was focused on the transmission problems. In this paper, we study the results when the code division technique is applied to the image at the source stage. The idea is to convolve the N different images with the corresponding m-sequence to obtain the encrypted image. The superimposed image (summation of the encrypted images) is then stored or transmitted. The benefit of this is that no one knows what is stored or transmitted unless the m-sequence is known. The original image is recovered by correlating the superimposed image with the corresponding m-sequence. Two cases are studied in this paper. First, the two-dimensional image is treated as a long one-dimensional vector and the m-sequence is employed to obtain the results. Secondly, the two-dimensional quasi m-array is proposed and used for the code division multiplexing. It is shown that the quasi m-array is faster when the image size is 256 x 256. The important features of the proposed technique are not only the image security but also the data compactness. The compression ratio depends on how many images are superimposed.
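
    The convolve-then-correlate mechanics can be sketched for a single 1D signal standing in for a flattened image; the multi-image superposition described in the abstract, which adds cross-correlation noise between codes, is not reproduced here. The LFSR-based m-sequence generator and the sinusoidal test signal below are illustrative choices only.

```python
import numpy as np

# An m-sequence (maximal-length LFSR sequence, chips +/-1) has a two-valued
# circular autocorrelation (L at lag 0, -1 elsewhere), so correlating the
# "encrypted" signal with the same sequence recovers the original up to a
# known scale and offset.
def m_sequence(taps, degree):
    state = [1] * degree                       # LFSR register, seeded non-zero
    chips = []
    for _ in range(2 ** degree - 1):
        chips.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]
    return np.where(np.array(chips) == 1, 1.0, -1.0)

def circular(a, b, conjugate=False):
    # Circular convolution (or correlation, if conjugate=True) via the FFT.
    B = np.fft.fft(b)
    return np.real(np.fft.ifft(np.fft.fft(a) * (np.conj(B) if conjugate else B)))

L, degree = 1023, 10
m = m_sequence([10, 3], degree)                # taps from x^10 + x^3 + 1 (primitive)
signal = np.sin(np.linspace(0, 8 * np.pi, L))  # stand-in for a flattened image

encrypted = circular(signal, m)                      # convolution: looks noise-like
recovered = circular(encrypted, m, conjugate=True)   # correlation with the same m
# recovered = (L+1)*signal - sum(signal), and its own sum equals sum(signal).
rescaled = (recovered + recovered.sum()) / (L + 1)
print("max reconstruction error:", np.max(np.abs(rescaled - signal)))
```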

  17. Images Multiplexing By Code Division Technique

    NASA Astrophysics Data System (ADS)

    Kuo, Chung Jung; Rigas, Harriett B.

    1990-01-01

    Spread Spectrum System (SSS) or Code Division Multiple Access System (CDMAS) has been studied for a long time, but most of the attention was focused on the transmission problems. In this paper, we study the results when the code division technique is applied to the image at the source stage. The idea is to convolve the N different images with the corresponding m-sequence to obtain the encrypted image. The superimposed image (summation of the encrypted images) is then stored or transmitted. The benefit of this is that no one knows what is stored or transmitted unless the m-sequence is known. The original image is recovered by correlating the superimposed image with the corresponding m-sequence. Two cases are studied in this paper. First, the 2-D image is treated as a long 1-D vector and the m-sequence is employed to obtain the results. Secondly, the 2-D quasi m-array is proposed and used for the code division multiplexing. It is shown that the quasi m-array is faster when the image size is 256x256. The important features of the proposed technique are not only the image security but also the data compactness. The compression ratio depends on how many images are superimposed.

  18. A specific indel marker for the Philippines Schistosoma japonicum revealed by analysis of mitochondrial genome sequences.

    PubMed

    Li, Juan; Chen, Fen; Sugiyama, Hiromu; Blair, David; Lin, Rui-Qing; Zhu, Xing-Quan

    2015-07-01

    In the present study, near-complete mitochondrial (mt) genome sequences for Schistosoma japonicum from different regions in the Philippines and Japan were amplified and sequenced. Comparisons among S. japonicum from the Philippines, Japan, and China revealed a geographically based length difference in mt genomes, but the mt genomic organization and gene arrangement were the same. Sequence differences among samples from the Philippines and all samples from the three endemic areas were 0.57-2.12 and 0.76-3.85 %, respectively. The most variable part of the mt genome was the non-coding region. In the coding portion of the genome, protein-coding genes varied more than rRNA genes and tRNAs. The near-complete mt genome sequences for Philippine specimens were identical in length (14,091 bp) which was 4 bp longer than those of S. japonicum samples from Japan and China. This indel provides a unique genetic marker for S. japonicum samples from the Philippines. Phylogenetic analyses based on the concatenated amino acids of 12 protein-coding genes showed that samples of S. japonicum clustered according to their geographical origins. The identified mitochondrial indel marker will be useful for tracing the source of S. japonicum infection in humans and animals in Southeast Asia.

  19. An extension of the coevolution theory of the origin of the genetic code

    PubMed Central

    Di Giulio, Massimo

    2008-01-01

    Background The coevolution theory of the origin of the genetic code suggests that the genetic code is an imprint of the biosynthetic relationships between amino acids. However, this theory does not seem to attribute a role to the biosynthetic relationships between the earliest amino acids that evolved along the pathways of energetic metabolism. As a result, the coevolution theory is unable to clearly define the very earliest phases of genetic code origin. In order to remove this difficulty, I here suggest an extension of the coevolution theory that attributes a crucial role to the first amino acids that evolved along these biosynthetic pathways and to their biosynthetic relationships, even when defined by the non-amino acid molecules that are their precursors. Results It is re-observed that the first amino acids to evolve along these biosynthetic pathways are predominantly those codified by codons of the type GNN, and this observation is found to be statistically significant. Furthermore, the close biosynthetic relationships between the sibling amino acids Ala-Ser, Ser-Gly, Asp-Glu, and Ala-Val are not random in the genetic code table and reinforce the hypothesis that the biosynthetic relationships between these six amino acids played a crucial role in defining the very earliest phases of genetic code origin. Conclusion All this leads to the hypothesis that there existed a code, GNS, reflecting the biosynthetic relationships between these six amino acids which, as it defines the very earliest phases of genetic code origin, removes the main difficulty of the coevolution theory. Furthermore, it is here discussed how this code might have naturally led to the code codifying only for the domains of the codons of precursor amino acids, as predicted by the coevolution theory. Finally, the hypothesis here suggested also removes other problems of the coevolution theory, such as the existence for certain pairs of amino acids with an unclear biosynthetic relationship between the precursor and product amino acids and the collocation of Ala between the amino acids Val and Leu belonging to the pyruvate biosynthetic family, which the coevolution theory considered as belonging to different biosyntheses. Reviewers This article was reviewed by Rob Knight, Paul Higgs (nominated by Laura Landweber), and Eugene Koonin. PMID:18775066

  20. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
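
    The abstract does not define double delta coding; assuming it denotes differencing applied twice (a second-order difference), a round-trip sketch with made-up pixel values looks like the following. The point of the transform is that smoothly varying pixel rows become small, near-constant residuals that an entropy coder can compress well.

```python
import numpy as np

# Hedged sketch: second-order differencing as one plausible reading of
# "double delta coding". The decoder inverts with two cumulative sums,
# carrying the first sample and first delta as side information.
def double_delta_encode(row):
    row = np.asarray(row, dtype=np.int64)
    d1 = np.diff(row)                   # adjacent deltas
    d2 = np.diff(d1)                    # deltas of the deltas
    return row[0], d1[0], d2            # two seed values plus the residual

def double_delta_decode(seed, seed_delta, d2):
    d1 = np.concatenate(([seed_delta], seed_delta + np.cumsum(d2)))
    return np.concatenate(([seed], seed + np.cumsum(d1)))

row = np.array([100, 102, 105, 109, 114, 120, 127, 135], dtype=np.int64)
seed, seed_delta, d2 = double_delta_encode(row)
print("second differences:", d2)        # small, near-constant values
print("round trip ok:", np.array_equal(double_delta_decode(seed, seed_delta, d2), row))
```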

  1. Distributed Joint Source-Channel Coding in Wireless Sensor Networks

    PubMed Central

    Zhu, Xuqi; Liu, Yu; Zhang, Lin

    2009-01-01

    Considering the limited energy of sensors and the wireless channel conditions in wireless sensor networks, there is an urgent need for a low-complexity coding method with a high compression ratio and noise-resistant features. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing deployments, from theory to practice, of distributed joint source-channel coding over independent channels, multiple access channels and broadcast channels are introduced, respectively. To this end, we also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency. PMID:22408560

  2. State-Chart Autocoder

    NASA Technical Reports Server (NTRS)

    Clark, Kenneth; Watney, Garth; Murray, Alexander; Benowitz, Edward

    2007-01-01

    A computer program translates Unified Modeling Language (UML) representations of state charts into source code in the C, C++, and Python computing languages. ("State charts" signifies graphical descriptions of states and state transitions of a spacecraft or other complex system.) The UML representations constituting the input to this program are generated by using a UML-compliant graphical design program to draw the state charts. The generated source code is consistent with the "quantum programming" approach, which is so named because it involves discrete states and state transitions that have features in common with states and state transitions in quantum mechanics. Quantum programming enables efficient implementation of state charts, suitable for real-time embedded flight software. In addition to source code, the autocoder program generates a graphical-user-interface (GUI) program that, in turn, generates a display of state transitions in response to events triggered by the user. The GUI program is wrapped around, and can be used to exercise the state-chart behavior of, the generated source code. Once the expected state-chart behavior is confirmed, the generated source code can be augmented with a software interface to the rest of the software with which the source code is required to interact.
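
    As a hedged illustration of the kind of code such a tool emits (hand-written here, not actual autocoder output), a small event-dispatching state machine in Python might look like this; the camera states and events are invented.

```python
# States are handler methods; events are dispatched to the current state's
# handler, which may switch the current state.
class CameraStateChart:
    def __init__(self):
        self.state = self.state_idle          # current state = a handler method

    def dispatch(self, event):
        print(f"[{self.state.__name__}] event: {event}")
        self.state(event)

    def state_idle(self, event):
        if event == "POWER_ON":
            self.state = self.state_ready

    def state_ready(self, event):
        if event == "TAKE_PICTURE":
            self.state = self.state_exposing
        elif event == "POWER_OFF":
            self.state = self.state_idle

    def state_exposing(self, event):
        if event == "EXPOSURE_DONE":
            self.state = self.state_ready

chart = CameraStateChart()
for ev in ["POWER_ON", "TAKE_PICTURE", "EXPOSURE_DONE", "POWER_OFF"]:
    chart.dispatch(ev)
```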

  3. Practices in source code sharing in astrophysics

    NASA Astrophysics Data System (ADS)

    Shamir, Lior; Wallin, John F.; Allen, Alice; Berriman, Bruce; Teuben, Peter; Nemiroff, Robert J.; Mink, Jessica; Hanisch, Robert J.; DuPrie, Kimberly

    2013-02-01

    While software and algorithms have become increasingly important in astronomy, the majority of authors who publish computational astronomy research do not share the source code they develop, making it difficult to replicate and reuse the work. In this paper we discuss the importance of sharing scientific source code with the entire astrophysics community, and propose that journals require authors to make their code publicly available when a paper is published. That is, we suggest that a paper that involves a computer program not be accepted for publication unless the source code becomes publicly available. The adoption of such a policy by editors, editorial boards, and reviewers will improve the ability to replicate scientific results, and will also make computational astronomy methods more available to other researchers who wish to apply them to their data.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balsa Terzic, Gabriele Bassi

    In this paper we discuss representations of charged particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform (TFCT); and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code, and benchmarked against the original version. We show that the new density estimation method provides a superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
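
    The truncated-cosine-transform idea can be sketched as: bin the particles onto a grid, take a 2D DCT, keep only the low-order modes, and invert. The grid size, mode cutoff and Gaussian particle cloud below are arbitrary choices, the thresholded wavelet variant is not shown, and none of this is the production CSR code.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Bin particles, cosine-transform the binned density, truncate high-order
# modes, and transform back to obtain a smoothed density estimate.
rng = np.random.default_rng(0)
particles = rng.normal(loc=[0.0, 0.0], scale=[1.0, 0.5], size=(200_000, 2))

grid, extent = 128, 4.0
hist, _, _ = np.histogram2d(particles[:, 0], particles[:, 1],
                            bins=grid, range=[[-extent, extent]] * 2)
density = hist / hist.sum()                      # noisy binned density estimate

coeffs = dctn(density, norm="ortho")
keep = 16                                        # retain only low-order cosine modes
truncated = np.zeros_like(coeffs)
truncated[:keep, :keep] = coeffs[:keep, :keep]
smooth = idctn(truncated, norm="ortho")

residual_rms = np.std(density - smooth)          # rough proxy for the removed shot noise
print("kept modes:", keep * keep, "of", grid * grid, "; residual rms:", residual_rms)
```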

  5. Empirically evaluating the WHO global code of practice on the international recruitment of health personnel's impact on four high-income countries four years after adoption.

    PubMed

    Tam, Vivian; Edge, Jennifer S; Hoffman, Steven J

    2016-10-12

    Shortages of health workers in low-income countries are exacerbated by the international migration of health workers to more affluent countries. This problem is compounded by the active recruitment of health workers by destination countries, particularly Australia, Canada, UK and USA. The World Health Organization (WHO) adopted a voluntary Code of Practice in May 2010 to mitigate tensions between health workers' right to migrate and the shortage of health workers in source countries. The first empirical impact evaluation of this Code was conducted 11 months after its adoption and demonstrated a lack of impact on health workforce recruitment policy and practice in the short term. This second empirical impact evaluation was conducted 4 years post-adoption using the same methodology to determine whether there have been any changes in the perceived utility, applicability, and implementation of the Code in the medium term. Forty-four respondents representing government, civil society and the private sector from Australia, Canada, UK and USA completed an email-based survey evaluating their awareness of the Code, perceived impact, changes to policy or recruitment practices resulting from the Code, and the effectiveness of non-binding Codes generally. The same survey instrument from the original study was used to facilitate direct comparability of responses. Key lessons were identified through thematic analysis. The main findings between the initial impact evaluation and the current one are unchanged. Both sets of key informants reported no significant policy or regulatory changes to health worker recruitment in their countries as a direct result of the Code due to its lack of incentives, institutional mechanisms and interest mobilizers. Participants emphasized the existence of previous bilateral and regional Codes, the WHO Code's non-binding nature, and the primacy of competing domestic healthcare priorities in explaining this perceived lack of impact. The Code has probably still not produced the tangible improvements in health worker flows it aspired to achieve. Several actions, including a focus on developing bilateral codes, linking the Code to topical global priorities, and reframing the Code's purpose to emphasize health system sustainability, are proposed to improve the Code's uptake and impact.

  6. The impact of conventional dietary intake data coding methods on foods typically consumed by low-income African-American and White urban populations

    PubMed Central

    Mason, Marc A; Kuczmarski, Marie Fanelli; Allegro, Deanne; Zonderman, Alan B; Evans, Michele K

    2016-01-01

    Objective Analysing dietary data to capture how individuals typically consume foods is dependent on the coding variables used. Individual foods consumed simultaneously, like coffee with milk, are given codes to identify these combinations. Our literature review revealed a lack of discussion about using combination codes in analysis. The present study identified foods consumed at mealtimes and by race when combination codes were or were not utilized. Design Duplicate analysis methods were performed on separate data sets. The original data set consisted of all foods reported; each food was coded as if it was consumed individually. The revised data set was derived from the original data set by first isolating coded foods consumed as individual items from those foods consumed simultaneously and assigning a code to designate a combination. Foods assigned a combination code, like pancakes with syrup, were aggregated and associated with a food group, defined by the major food component (i.e. pancakes), and then appended to the isolated coded foods. Setting Healthy Aging in Neighborhoods of Diversity across the Life Span study. Subjects African-American and White adults with two dietary recalls (n 2177). Results Differences existed in lists of foods most frequently consumed by mealtime and race when comparing results based on original and revised data sets. African Americans reported consumption of sausage/luncheon meat and poultry, while ready-to-eat cereals and cakes/doughnuts/pastries were reported by Whites on recalls. Conclusions Use of combination codes provided more accurate representation of how foods were consumed by populations. This information is beneficial when creating interventions and exploring diet–health relationships. PMID:25435191

  7. W-026, Waste Receiving and Processing Facility data management system validation and verification report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmer, M.E.

    1997-12-05

    This V and V Report includes analysis of two revisions of the DMS [data management system] System Requirements Specification (SRS) and the Preliminary System Design Document (PSDD); the source code for the DMS Communication Module (DMSCOM) messages; the source code for selected DMS screens; and the code for the BWAS Simulator. BDM Federal analysts used a series of matrices to: compare the requirements in the System Requirements Specification (SRS) to the specifications found in the System Design Document (SDD), to ensure the design supports the business functions; compare the discrete parts of the SDD with each other, to ensure that the design is consistent and cohesive; compare the source code of the DMS Communication Module with the specifications, to ensure that the resultant messages will support the design; compare the source code of selected screens to the specifications, to ensure that the resultant system screens will support the design; and compare the source code of the BWAS simulator with the requirements for interfacing with DMS messages and data transfers relating to BWAS operations.

  8. Second Generation Integrated Composite Analyzer (ICAN) Computer Code

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Ginty, Carol A.; Sanfeliz, Jose G.

    1993-01-01

    This manual updates the original 1986 NASA TP-2515, Integrated Composite Analyzer (ICAN) Users and Programmers Manual. The various enhancements and newly added features are described to enable the user to prepare the appropriate input data to run this updated version of the ICAN code. For reference, the micromechanics equations are provided in an appendix and should be compared to those in the original manual for modifications. A complete output for a sample case is also provided in a separate appendix. The input to the code includes constituent material properties, factors reflecting the fabrication process, and laminate configuration. The code performs micromechanics, macromechanics, and laminate analyses, including the hygrothermal response of polymer-matrix-based fiber composites. The output includes the various ply and composite properties, the composite structural response, and the composite stress analysis results with details on failure. The code is written in FORTRAN 77 and can be used efficiently as a self-contained package (or as a module) in complex structural analysis programs. The input-output format has changed considerably from the original version of ICAN and is described extensively through the use of a sample problem.

  9. Simulations of ultra-high energy cosmic rays in the local Universe and the origin of cosmic magnetic fields

    NASA Astrophysics Data System (ADS)

    Hackstein, S.; Vazza, F.; Brüggen, M.; Sorce, J. G.; Gottlöber, S.

    2018-04-01

    We simulate the propagation of cosmic rays at ultra-high energies, ≳1018 eV, in models of extragalactic magnetic fields in constrained simulations of the local Universe. We use constrained initial conditions with the cosmological magnetohydrodynamics code ENZO. The resulting models of the distribution of magnetic fields in the local Universe are used in the CRPROPA code to simulate the propagation of ultra-high energy cosmic rays. We investigate the impact of six different magneto-genesis scenarios, both primordial and astrophysical, on the propagation of cosmic rays over cosmological distances. Moreover, we study the influence of different source distributions around the Milky Way. Our study shows that different scenarios of magneto-genesis do not have a large impact on the anisotropy measurements of ultra-high energy cosmic rays. However, at high energies above the Greisen-Zatsepin-Kuzmin (GZK)-limit, there is anisotropy caused by the distribution of nearby sources, independent of the magnetic field model. This provides a chance to identify cosmic ray sources with future full-sky measurements and high number statistics at the highest energies. Finally, we compare our results to the dipole signal measured by the Pierre Auger Observatory. All our source models and magnetic field models could reproduce the observed dipole amplitude with a pure iron injection composition. Our results indicate that the dipole is observed due to clustering of secondary nuclei in direction of nearby sources of heavy nuclei. A light injection composition is disfavoured, since the increase in dipole angular power from 4 to 8 EeV is too slow compared to observation by the Pierre Auger Observatory.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viktor K. Decyk

    The UCLA work on this grant was to design and help implement an object-oriented version of the GTC code, which is written in Fortran90. The GTC code is the main global gyrokinetic code used in this project, and over the years multiple, incompatible versions have evolved. The reason for this effort is to allow multiple authors to work together on GTC and to simplify future enhancements to GTC. The effort was designed to proceed incrementally. Initially, an upper layer of classes (derived types and methods) was implemented which called the original GTC code 'under the hood.' The derived types pointed to data in the original GTC code, and the methods called the original GTC subroutines. The original GTC code was modified only very slightly. This allowed one to define (and refine) a set of classes which described the important features of the GTC code in a new, more abstract way, with a minimum of implementation. Furthermore, classes could be added one at a time, and at the end of each day, the code continued to work correctly. This work was done in close collaboration with Y. Nishimura from UC Irvine and Stefan Ethier from PPPL. Ten classes were ultimately defined and implemented: gyrokinetic and drift kinetic particles, scalar and vector fields, a mesh, jacobian, FLR, equilibrium, interpolation, and particle species descriptors. In the second stage of this development, some of the scaffolding was removed. The constructors in the class objects now allocated the data, and the array data in the original GTC code was removed. This isolated the components and now allowed multiple instantiations of the objects to be created, in particular, multiple ion species. Again, the work was done incrementally, one class at a time, so that the code was always working properly. This work was done in close collaboration with Y. Nishimura and W. Zhang from UC Irvine and Stefan Ethier from PPPL. The third stage of this work was to integrate the capabilities of the various versions of the GTC code into one flexible and extensible version. To do this, we developed a methodology to implement Design Patterns in Fortran90. Design Patterns are abstract solutions to generic programming problems, which allow one to handle increased complexity. This work was done in collaboration with Henry Gardner, a computer scientist (and former plasma physicist) from the Australian National University. As an example, the Strategy Pattern is being used in GTC to support multiple solvers. This new code is currently being used in the study of energetic particles. A document describing the evolution of the GTC code to this new object-oriented version is available to users of GTC.
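
    As a language-neutral illustration of the Strategy Pattern mentioned above (rendered in Python here rather than the Fortran90 derived types actually used), interchangeable solver objects are injected into a simulation object; the solver classes and their arithmetic are placeholders, not GTC code.

```python
from abc import ABC, abstractmethod

# The Strategy pattern: the simulation depends only on the abstract solver
# interface, so concrete solvers can be swapped without changing the caller.
class FieldSolver(ABC):
    @abstractmethod
    def solve(self, charge_density):
        ...

class SpectralSolver(FieldSolver):
    def solve(self, charge_density):
        return [0.5 * rho for rho in charge_density]      # stand-in computation

class IterativeSolver(FieldSolver):
    def solve(self, charge_density):
        return [0.49 * rho for rho in charge_density]     # stand-in computation

class Simulation:
    def __init__(self, solver: FieldSolver):
        self.solver = solver                              # strategy is injected

    def step(self, charge_density):
        return self.solver.solve(charge_density)

rho = [1.0, 2.0, 3.0]
print(Simulation(SpectralSolver()).step(rho))
print(Simulation(IterativeSolver()).step(rho))
```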

  11. Numerical Modeling of Poroelastic-Fluid Systems Using High-Resolution Finite Volume Methods

    NASA Astrophysics Data System (ADS)

    Lemoine, Grady

    Poroelasticity theory models the mechanics of porous, fluid-saturated, deformable solids. It was originally developed by Maurice Biot to model geophysical problems, such as seismic waves in oil reservoirs, but has also been applied to modeling living bone and other porous media. Poroelastic media often interact with fluids, such as in ocean bottom acoustics or propagation of waves from soft tissue into bone. This thesis describes the development and testing of high-resolution finite volume numerical methods, and simulation codes implementing these methods, for modeling systems of poroelastic media and fluids in two and three dimensions. These methods operate on both rectilinear grids and logically rectangular mapped grids. To allow the use of these methods, Biot's equations of poroelasticity are formulated as a first-order hyperbolic system with a source term; this source term is incorporated using operator splitting. Some modifications are required to the classical high-resolution finite volume method. Obtaining correct solutions at interfaces between poroelastic media and fluids requires a novel transverse propagation scheme and the removal of the classical second-order correction term at the interface, and in three dimensions a new wave limiting algorithm is also needed to correctly limit shear waves. The accuracy and convergence rates of the methods of this thesis are examined for a variety of analytical solutions, including simple plane waves, reflection and transmission of waves at an interface between different media, and scattering of acoustic waves by a poroelastic cylinder. Solutions are also computed for a variety of test problems from the computational poroelasticity literature, as well as some original test problems designed to mimic possible applications for the simulation code.

  12. AERO2S - SUBSONIC AERODYNAMIC ANALYSIS OF WINGS WITH LEADING- AND TRAILING-EDGE FLAPS IN COMBINATION WITH CANARD OR HORIZONTAL TAIL SURFACES (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Carlson, H. W.

    1994-01-01

    This code was developed to aid design engineers in the selection and evaluation of aerodynamically efficient wing-canard and wing-horizontal-tail configurations that may employ simple hinged-flap systems. Rapid estimates of the longitudinal aerodynamic characteristics of conceptual airplane lifting surface arrangements are provided. The method is particularly well suited to configurations which, because of high speed flight requirements, must employ thin wings with highly swept leading edges. The code is applicable to wings with either sharp or rounded leading edges. The code provides theoretical pressure distributions over the wing, the canard or horizontal tail, and the deflected flap surfaces as well as estimates of the wing lift, drag, and pitching moments which account for attainable leading edge thrust and leading edge separation vortex forces. The wing planform information is specified by a series of leading edge and trailing edge breakpoints for a right hand wing panel. Up to 21 pairs of coordinates may be used to describe both the leading edge and the trailing edge. The code has been written to accommodate 2000 right hand panel elements, but can easily be modified to accommodate a larger or smaller number of elements depending on the capacity of the target computer platform. The code provides solutions for wing surfaces composed of all possible combinations of leading edge and trailing edge flap settings provided by the original deflection multipliers and by the flap deflection multipliers. Up to 25 pairs of leading edge and trailing edge flap deflection schedules may thus be treated simultaneously. The code also provides for an improved accounting of hinge-line singularities in determination of wing forces and moments. To determine lifting surface perturbation velocity distributions, the code provides for a maximum of 70 iterations. The program is constructed so that successive runs may be made with a given code entry. To make additional runs, it is necessary only to add an identification record and the namelist data that are to be changed from the previous run. This code was originally developed in 1989 in FORTRAN V on a CDC 6000 computer system, and was later ported to an MS-DOS environment. Both versions are available from COSMIC. There are only a few differences between the PC version (LAR-14458) and CDC version (LAR-14178) of AERO2S distributed by COSMIC. The CDC version has one main source code file while the PC version has two files which are easier to edit and compile on a PC. The PC version does not require a FORTRAN compiler which supports NAMELIST because a special INPUT subroutine has been added. The CDC version includes two MODIFY decks which can be used to improve the code and prevent the possibility of some infrequently occurring errors while PC-version users will have to make these code changes manually. The PC version includes an executable which was generated with the Ryan McFarland/FORTRAN compiler and requires 253K RAM and an 80x87 math co-processor. Using this executable, the sample case requires about four hours to execute on an 8MHz AT-class microcomputer with a co-processor. The source code conforms to the FORTRAN 77 standard except that it uses variables longer than six characters. With two minor modifications, the PC version should be portable to any computer with a FORTRAN compiler and sufficient memory. The CDC version of AERO2S is available in CDC NOS Internal format on a 9-track 1600 BPI magnetic tape. 
The PC version is available on a set of two 5.25 inch 360K MS-DOS format diskettes. IBM AT is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. CDC is a registered trademark of Control Data Corporation. NOS is a trademark of Control Data Corporation.

  13. AERO2S - SUBSONIC AERODYNAMIC ANALYSIS OF WINGS WITH LEADING- AND TRAILING-EDGE FLAPS IN COMBINATION WITH CANARD OR HORIZONTAL TAIL SURFACES (CDC VERSION)

    NASA Technical Reports Server (NTRS)

    Darden, C. M.

    1994-01-01

    This code was developed to aid design engineers in the selection and evaluation of aerodynamically efficient wing-canard and wing-horizontal-tail configurations that may employ simple hinged-flap systems. Rapid estimates of the longitudinal aerodynamic characteristics of conceptual airplane lifting surface arrangements are provided. The method is particularly well suited to configurations which, because of high speed flight requirements, must employ thin wings with highly swept leading edges. The code is applicable to wings with either sharp or rounded leading edges. The code provides theoretical pressure distributions over the wing, the canard or horizontal tail, and the deflected flap surfaces as well as estimates of the wing lift, drag, and pitching moments which account for attainable leading edge thrust and leading edge separation vortex forces. The wing planform information is specified by a series of leading edge and trailing edge breakpoints for a right hand wing panel. Up to 21 pairs of coordinates may be used to describe both the leading edge and the trailing edge. The code has been written to accommodate 2000 right hand panel elements, but can easily be modified to accommodate a larger or smaller number of elements depending on the capacity of the target computer platform. The code provides solutions for wing surfaces composed of all possible combinations of leading edge and trailing edge flap settings provided by the original deflection multipliers and by the flap deflection multipliers. Up to 25 pairs of leading edge and trailing edge flap deflection schedules may thus be treated simultaneously. The code also provides for an improved accounting of hinge-line singularities in determination of wing forces and moments. To determine lifting surface perturbation velocity distributions, the code provides for a maximum of 70 iterations. The program is constructed so that successive runs may be made with a given code entry. To make additional runs, it is necessary only to add an identification record and the namelist data that are to be changed from the previous run. This code was originally developed in 1989 in FORTRAN V on a CDC 6000 computer system, and was later ported to an MS-DOS environment. Both versions are available from COSMIC. There are only a few differences between the PC version (LAR-14458) and CDC version (LAR-14178) of AERO2S distributed by COSMIC. The CDC version has one main source code file while the PC version has two files which are easier to edit and compile on a PC. The PC version does not require a FORTRAN compiler which supports NAMELIST because a special INPUT subroutine has been added. The CDC version includes two MODIFY decks which can be used to improve the code and prevent the possibility of some infrequently occurring errors while PC-version users will have to make these code changes manually. The PC version includes an executable which was generated with the Ryan McFarland/FORTRAN compiler and requires 253K RAM and an 80x87 math co-processor. Using this executable, the sample case requires about four hours to execute on an 8MHz AT-class microcomputer with a co-processor. The source code conforms to the FORTRAN 77 standard except that it uses variables longer than six characters. With two minor modifications, the PC version should be portable to any computer with a FORTRAN compiler and sufficient memory. The CDC version of AERO2S is available in CDC NOS Internal format on a 9-track 1600 BPI magnetic tape. 
The PC version is available on a set of two 5.25 inch 360K MS-DOS format diskettes. IBM AT is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. CDC is a registered trademark of Control Data Corporation. NOS is a trademark of Control Data Corporation.

  14. The mathematical theory of signal processing and compression-designs

    NASA Astrophysics Data System (ADS)

    Feria, Erlan H.

    2006-05-01

    The mathematical theory of signal processing, named processor coding, will be shown to arise inherently as the computational-time dual of Shannon's mathematical theory of communication, which is also known as source coding. Source coding is concerned with signal source memory space compression, while processor coding deals with signal processor computational time compression. Their combination is named compression-designs and is referred to as Conde for short. A compelling and pedagogically appealing diagram will be discussed, highlighting Conde's remarkably successful application to real-world knowledge-aided (KA) airborne moving target indicator (AMTI) radar.

  15. Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low-Altitude VLF Transmitter

    DTIC Science & Technology

    2007-08-31

    Excerpt from the report's list of figures: [...] (latitude) for 3 different grid spacings; low-altitude fields produced by a 10-kHz source computed using the FD and TD codes, with excellent agreement validating the new FD code; high-altitude fields produced by a 10-kHz source computed using the FD and TD codes, again in excellent agreement; low-altitude fields produced by a 20-kHz source computed using the FD and TD codes; and high-altitude fields produced [...]

  16. Biomedicalization and the public sphere: newspaper coverage of health and medicine, 1960s-2000s.

    PubMed

    Hallin, Daniel C; Brandt, Marisa; Briggs, Charles L

    2013-11-01

    This article examines historical trends in the reporting of health and medicine in The New York Times and Chicago Tribune from the 1960s to the 2000s. It focuses on the extent to which health reporting can be said to have become increasingly politicized, or to have shifted from treating the production of medical knowledge as something belonging to a restricted, specialized sphere, to treating it as a part of the general arena of public debate. We coded a sample of 400 stories from the two newspapers for four different Implied Audiences which health stories can address: Scientific/Professional, Patient/Consumer, Investor and Citizen/Policymaker. Stories were also coded for the origin of the story, the sources cited, the presence of controversy, and the positive or negative representation of biomedical institutions and actors. The data show that through all five decades, news reporting on health and medicine addressed readers as Citizen/Policymakers most often, though Patient/Consumer and Investor-oriented stories increased over time. Biomedical researchers eclipsed individual physicians and public health officials as sources of news, and the sources diversified to include more business sources, civil society organizations and patients and other lay people. The reporting of controversy increased, and portrayals of biomedicine shifted from lopsidedly positive to more mixed. We use these data in pinpointing how media play a constitutive role in the process of "biomedicalization," through which biomedicine has both extended its reach into and become entangled with other spheres of society and of knowledge production. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Optical image encryption using QR code and multilevel fingerprints in gyrator transform domains

    NASA Astrophysics Data System (ADS)

    Wei, Yang; Yan, Aimin; Dong, Jiabin; Hu, Zhijuan; Zhang, Jingtao

    2017-11-01

    A new concept for a GT encryption scheme is proposed in this paper. We present a novel optical image encryption method using a quick response (QR) code and multilevel fingerprint keys in gyrator transform (GT) domains. In this method, an original image is first transformed into a QR code, which is placed in the input plane of cascaded GTs. Subsequently, the QR code is encrypted into the cipher-text by using multilevel fingerprint keys. The original image can be obtained easily by reading the high-quality retrieved QR code with hand-held devices. The main parameters used as private keys are the GTs' rotation angles and the multilevel fingerprints. Biometrics and cryptography are integrated with each other to improve data security. Numerical simulations are performed to demonstrate the validity and feasibility of the proposed encryption scheme. The method of applying QR codes and fingerprints in GT domains holds considerable potential for future information security applications.

  18. A blind dual color images watermarking based on IWT and state coding

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a state-coding based blind watermarking algorithm is proposed to embed a color image watermark into a color host image. The technique of state coding, which makes the state code of a data set equal to the hidden watermark information, is introduced. When embedding the watermark, the R, G, and B components of the color image watermark are embedded into the Y, Cr, and Cb components of the color host image using the Integer Wavelet Transform (IWT) and the rules of state coding. Moreover, the rules of state coding are also used to extract the watermark from the watermarked image without resorting to the original watermark or the original host image. Experimental results show that the proposed watermarking algorithm not only meets the demands of invisibility and robustness of the watermark, but also performs well compared with the other methods considered in this work.

  19. Differences in reported sepsis incidence according to study design: a literature review.

    PubMed

    Mariansdatter, Saga Elise; Eiset, Andreas Halgreen; Søgaard, Kirstine Kobberøe; Christiansen, Christian Fynbo

    2016-10-12

    Sepsis and severe sepsis are common conditions in hospital settings, and are associated with high rates of morbidity and mortality, but reported incidences vary considerably. In this literature review, we describe the variation in reported population-based incidences of sepsis and severe sepsis. We also examine methodological and demographic differences between studies that may explain this variation. We carried out a literature review searching three major databases and reference lists of relevant articles, to identify all original studies reporting the incidence of sepsis or severe sepsis in the general population. Two authors independently assessed all articles, and the final decision to exclude an article was reached by consensus. We extracted data according to predetermined variables, including study country, sepsis definition, and data source. We then calculated descriptive statistics for the reported incidences of sepsis and severe sepsis. The studies were classified according to the method used to identify cases of sepsis or severe sepsis: chart-based (i.e. review of patient charts) or code-based (i.e. predetermined International Classification of Diseases [ICD] codes). Among 482 articles initially screened, we identified 23 primary publications reporting incidence of sepsis and/or severe sepsis in the general population. The reported incidences ranged from 74 to 1180 per 100,000 person-years and 3 to 1074 per 100,000 person-years for sepsis and severe sepsis, respectively. Most chart-based studies used the Bone criteria (or a modification hereof) and Protein C Worldwide Evaluation in Severe Sepsis (PROWESS) study criteria to identify cases of sepsis and severe sepsis. Most code-based studies used ICD-9 codes, but the number of codes used ranged from 1 to more than 1200. We found that the incidence varied according to how sepsis was identified (chart-based vs. code-based), calendar year, data source, and world region. The reported incidences of sepsis and severe sepsis in the general population varied greatly between studies. Such differences may be attributable to differences in the methods used to collect the data, the study period, or the world region where the study was undertaken. This finding highlights the importance of standardised definitions and acquisition of data regarding sepsis and severe sepsis.

  20. Conducting Retrospective Ontological Clinical Trials in ICD-9-CM in the Age of ICD-10-CM.

    PubMed

    Venepalli, Neeta K; Shergill, Ardaman; Dorestani, Parvaneh; Boyd, Andrew D

    2014-01-01

    To quantify the impact of the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) transition in cancer clinical trials by comparing coding accuracy and data discontinuity in backward ICD-10-CM to ICD-9-CM mapping via two tools, and to develop a standard ICD-9-CM and ICD-10-CM bridging methodology for retrospective analyses. While the transition to ICD-10-CM has been delayed until October 2015, its impact on cancer-related studies utilizing ICD-9-CM diagnoses has been inadequately explored. Three high impact journals with broad national and international readerships were reviewed for cancer-related studies utilizing ICD-9-CM diagnosis codes in study design, methods, or results. Forward ICD-9-CM to ICD-10-CM mapping was performed using a translational methodology with the Motif web portal ICD-9-CM conversion tool. Backward mapping from ICD-10-CM to ICD-9-CM was performed using both Centers for Medicare and Medicaid Services (CMS) general equivalence mappings (GEMs) files and the Motif web portal tool. Generated ICD-9-CM codes were compared with the original ICD-9-CM codes to assess data accuracy and discontinuity. While both methods yielded additional ICD-9-CM codes, the CMS GEMs method provided incomplete coverage with 16 of the original ICD-9-CM codes missing, whereas the Motif web portal method provided complete coverage. Of these 16 codes, 12 ICD-9-CM codes were present in 2010 Illinois Medicaid data, and accounted for 0.52% of patient encounters and 0.35% of total Medicaid reimbursements. Extraneous ICD-9-CM codes from both methods (CMS GEMs, n = 161; Motif web portal, n = 246) in excess of the original ICD-9-CM codes accounted for 2.1% and 2.3% of total patient encounters and 3.4% and 4.1% of total Medicaid reimbursements from the 2010 Illinois Medicare database. Longitudinal data analyses post-ICD-10-CM transition will require backward ICD-10-CM to ICD-9-CM coding and data comparison for accuracy. Researchers must be aware that not all methods for backward coding are comparable in yielding the original ICD-9-CM codes. The mandated delay is an opportunity for organizations to better understand areas of financial risk with regard to data management via backward coding. Our methodology is relevant for all healthcare-related coding data, and can be replicated by organizations as a strategy to mitigate financial risk.
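
    As an illustration of the backward-mapping comparison described above, the sketch below loads a GEMs-style ICD-10-CM to ICD-9-CM mapping file and reports which original ICD-9-CM codes are missing and which recovered codes are extraneous. The file name, its whitespace-separated column layout, and all example codes are assumptions made for illustration, not details taken from the study.

      # Minimal sketch: backward ICD-10-CM -> ICD-9-CM mapping and coverage check.
      # "2015_I10gem.txt" and its layout (ICD-10 code, ICD-9 code, flags) are assumptions.
      from collections import defaultdict

      def load_gems(path):
          """Read a GEMs-style file into a dict: ICD-10 code -> set of ICD-9 codes."""
          mapping = defaultdict(set)
          with open(path) as fh:
              for line in fh:
                  parts = line.split()
                  if len(parts) >= 2:
                      mapping[parts[0]].add(parts[1])
          return mapping

      def backward_map(icd10_codes, gems):
          """Collect every ICD-9 code reachable from the given ICD-10 codes."""
          recovered = set()
          for code in icd10_codes:
              recovered |= gems.get(code, set())
          return recovered

      # Hypothetical example values, for illustration only.
      original_icd9 = {"1530", "1534", "20280"}
      study_icd10 = {"C182", "C186", "C8530"}
      gems = load_gems("2015_I10gem.txt")
      recovered_icd9 = backward_map(study_icd10, gems)
      print("missing original codes :", original_icd9 - recovered_icd9)
      print("extraneous codes       :", recovered_icd9 - original_icd9)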

  1. Review of particle-in-cell modeling for the extraction region of large negative hydrogen ion sources for fusion

    NASA Astrophysics Data System (ADS)

    Wünderlich, D.; Mochalskyy, S.; Montellano, I. M.; Revel, A.

    2018-05-01

    Particle-in-cell (PIC) codes have been used since the early 1960s for calculating self-consistently the motion of charged particles in plasmas, taking into account external electric and magnetic fields as well as the fields created by the particles themselves. Due to the very small time steps used (on the order of the inverse plasma frequency) and the small mesh size, the computational requirements can be very high, and they drastically increase with increasing plasma density and size of the calculation domain. Thus, usually small computational domains and/or reduced dimensionality are used. In recent years, the available central processing unit (CPU) power has strongly increased. Together with a massive parallelization of the codes, it is now possible to describe in 3D the extraction of charged particles from a plasma, using calculation domains with an edge length of several centimeters, consisting of one extraction aperture, the plasma in the direct vicinity of the aperture, and a part of the extraction system. Large negative hydrogen or deuterium ion sources are essential parts of the neutral beam injection (NBI) system in future fusion devices like the international fusion experiment ITER and the demonstration reactor (DEMO). For ITER NBI, RF driven sources with a source area of 0.9 × 1.9 m2 and 1280 extraction apertures will be used. The extraction of negative ions is accompanied by the co-extraction of electrons, which are deflected onto an electron dump. Typically, the maximum extracted negative ion current is limited by the amount and the temporal instability of the co-extracted electrons, especially for operation in deuterium. Different PIC codes are available for the extraction region of large driven negative ion sources for fusion. Additionally, some effort is ongoing in developing codes that describe in a simplified manner (coarser mesh or reduced dimensionality) the plasma of the whole ion source. The presentation first gives a brief overview of the current status of the ion source development for ITER NBI and of the PIC method. Different PIC codes for the extraction region are introduced as well as the coupling to codes describing the whole source (PIC codes or fluid codes). Presented and discussed are different physical and numerical aspects of applying PIC codes to negative hydrogen ion sources for fusion, as well as selected code results. The main focus of future calculations will be the meniscus formation and identifying measures for reducing the co-extracted electrons, in particular for deuterium operation. The recent results of the 3D PIC code ONIX (calculation domain: one extraction aperture and its vicinity) for the ITER prototype source (1/8 size of the ITER NBI source) are presented.
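
    For readers unfamiliar with the PIC cycle (charge deposition, field solve, field gather, particle push), the following minimal 1D electrostatic sketch in normalized units illustrates the loop structure. It is a toy model with assumed parameters and is unrelated to ONIX or the other 3D codes discussed above.

      import numpy as np

      # Toy 1D electrostatic PIC loop in normalized units: electrons on a
      # periodic grid with a fixed neutralizing ion background (assumed setup).
      np.random.seed(0)
      ng, npart, L = 64, 10000, 2.0 * np.pi   # grid cells, particles, domain length
      dx, dt, qm = L / ng, 0.1, -1.0          # cell size, time step, charge/mass
      q = -L / npart                          # particle charge so mean density is 1

      x = np.random.uniform(0.0, L, npart)
      v = 0.1 * np.sin(2.0 * np.pi * x / L)   # seed a plasma oscillation
      k = 2.0 * np.pi * np.fft.rfftfreq(ng, d=dx)

      for step in range(200):
          # 1) charge deposition (cloud-in-cell weighting)
          xi = x / dx
          i0 = np.floor(xi).astype(int) % ng
          w = xi - np.floor(xi)
          rho = np.zeros(ng)
          np.add.at(rho, i0, q * (1.0 - w) / dx)
          np.add.at(rho, (i0 + 1) % ng, q * w / dx)
          rho += 1.0                          # neutralizing ion background

          # 2) field solve: phi_k = rho_k / k^2 (Poisson), E = -d(phi)/dx
          rho_k = np.fft.rfft(rho)
          phi_k = np.zeros_like(rho_k)
          phi_k[1:] = rho_k[1:] / k[1:] ** 2
          E = np.fft.irfft(-1j * k * phi_k, n=ng)

          # 3) gather E at particle positions and 4) push particles (leapfrog)
          Ep = E[i0] * (1.0 - w) + E[(i0 + 1) % ng] * w
          v += qm * Ep * dt
          x = (x + v * dt) % L

          if step % 50 == 0:
              print(step, "field energy:", 0.5 * np.sum(E ** 2) * dx)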

  2. Normative Database and Color-code Agreement of Peripapillary Retinal Nerve Fiber Layer and Macular Ganglion Cell-inner Plexiform Layer Thickness in a Vietnamese Population.

    PubMed

    Perez, Claudio I; Chansangpetch, Sunee; Thai, Andy; Nguyen, Anh-Hien; Nguyen, Anwell; Mora, Marta; Nguyen, Ngoc; Lin, Shan C

    2018-06-05

    To evaluate the distribution and the color probability codes of the peripapillary retinal nerve fiber layer (RNFL) and macular ganglion cell-inner plexiform layer (GCIPL) thickness in a healthy Vietnamese population and compare them with the original color-codes provided by the Cirrus spectral domain OCT. Cross-sectional study. We recruited non-glaucomatous Vietnamese subjects and constructed a normative database for peripapillary RNFL and macular GCIPL thickness. The probability color-codes for each decade of age were calculated. We evaluated the agreement, with the Kappa coefficient (κ), between the OCT color probability codes from the Cirrus built-in original normative database and from the Vietnamese normative database. 149 eyes of 149 subjects were included. The mean age of enrollees was 60.77 (±11.09) years, with a mean spherical equivalent of +0.65 (±1.58) D and mean axial length of 23.4 (±0.87) mm. Average RNFL thickness was 97.86 (±9.19) microns and average macular GCIPL was 82.49 (±6.09) microns. Agreement between the original and adjusted normative databases for RNFL was fair for the average and inferior quadrants (κ=0.25 and 0.2, respectively) and good for the other quadrants (range: κ=0.63-0.73). For macular GCIPL, κ agreement ranged between 0.39 and 0.69. After adjusting with the normative Vietnamese database, the percentage of yellow and red color-codes increased significantly for peripapillary RNFL thickness. The Vietnamese population has a thicker RNFL in comparison with the Cirrus normative database. This leads to poor color-code agreement in the average and inferior quadrants between the original and adjusted databases. These findings should encourage the creation of a peripapillary RNFL normative database for each ethnicity.
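
    The color-code agreement reported above is an unweighted Cohen's kappa between two classifications of the same eyes (built-in versus adjusted normative database). A minimal sketch of that calculation, with hypothetical labels, is shown below.

      # Unweighted Cohen's kappa between two color-code classifications.
      # The example labels are hypothetical.
      from collections import Counter

      def cohens_kappa(a, b):
          """Cohen's kappa for two equally long label sequences."""
          assert len(a) == len(b)
          n = len(a)
          labels = set(a) | set(b)
          observed = sum(x == y for x, y in zip(a, b)) / n
          ca, cb = Counter(a), Counter(b)
          expected = sum(ca[l] * cb[l] for l in labels) / (n * n)
          return (observed - expected) / (1 - expected)

      original = ["green", "green", "yellow", "green", "red", "green"]
      adjusted = ["green", "yellow", "yellow", "green", "red", "yellow"]
      print(round(cohens_kappa(original, adjusted), 2))   # 0.5 for this toy data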

  3. McEliece PKC Calculator

    NASA Astrophysics Data System (ADS)

    Marek, Repka

    2015-01-01

    The original McEliece PKC proposal is interesting thanks to its resistance against all known attacks, even those using quantum cryptanalysis, in an IND-CCA2 secure conversion. Here we present a generic implementation of the original McEliece PKC proposal, which provides test vectors (for all important intermediate results) and in which a measurement tool for side-channel analysis is employed. To the best of our knowledge, this is the first such implementation. This Calculator is valuable for implementation optimization, for further investigation of the properties of McEliece/Niederreiter-like PKCs, and also in teaching. Thanks to it, one can, for example, examine the side-channel vulnerability of a certain implementation, or find out and test particular parameters of the cryptosystem in order to make them appropriate for an efficient hardware implementation. This implementation is available [1] in executable binary format and as a static C++ library, as well as in the form of source code, for Linux and Windows operating systems.

  4. Monte Carlo dose calculations of beta-emitting sources for intravascular brachytherapy: a comparison between EGS4, EGSnrc, and MCNP.

    PubMed

    Wang, R; Li, X A

    2001-02-01

    The dose parameters for the beta-particle emitting 90Sr/90Y source for intravascular brachytherapy (IVBT) have been calculated by different investigators. At larger distances from the source, noticeable differences are seen in these parameters calculated using different Monte Carlo codes. The purpose of this work is to quantify as well as to understand these differences. We have compared a series of calculations using the EGS4, EGSnrc, and MCNP Monte Carlo codes. Data calculated and compared include the depth dose curve for a broad parallel beam of electrons, and radial dose distributions for point electron sources (monoenergetic or polyenergetic) and for a real 90Sr/90Y source. For the 90Sr/90Y source, the doses at the reference position (2 mm radial distance) calculated by the three codes agree within 2%. However, the differences between the doses calculated by the three codes can be over 20% in the radial distance range of interest in IVBT. The difference increases with radial distance from the source, and reaches 30% at the tail of the dose curve. These differences may be partially attributed to the different multiple scattering theories and Monte Carlo models for electron transport adopted in these three codes. Doses calculated by the EGSnrc code are more accurate than those by EGS4. The two calculations agree within 5% for radial distances <6 mm.

  5. Information retrieval based on single-pixel optical imaging with quick-response code

    NASA Astrophysics Data System (ADS)

    Xiao, Yin; Chen, Wen

    2018-04-01

    The quick-response (QR) code technique is combined with ghost imaging (GI) to recover original information with high quality. An image is first transformed into a QR code. Then the QR code is treated as an input image in the input plane of a ghost imaging setup. After measurements, the traditional correlation algorithm of ghost imaging is utilized to reconstruct an image (in QR code form) with low quality. With this low-quality image as an initial guess, a Gerchberg-Saxton-like algorithm is used to improve its contrast, which acts as a post-processing step. Taking advantage of the high error correction capability of QR codes, the original information can be recovered with high quality. Compared to the previous method, our method can obtain a high-quality image with comparatively fewer measurements, which means that the time-consuming measurement procedure can be shortened to some extent. In addition, for conventional ghost imaging, the larger the image size is, the more measurements are needed. However, for our method, images with different sizes can be converted into QR codes of the same small size by using a QR generator. Hence, for larger-size images, the time required to recover the original information with high quality will be dramatically reduced. Our method makes it easy to recover a color image in a ghost imaging setup, because it is not necessary to divide the color image into three channels and recover them separately.
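
    The "traditional correlation algorithm" mentioned above reconstructs the image from the covariance between the bucket (single-pixel) signal and the illumination patterns. A minimal synthetic sketch of that first reconstruction stage (before any Gerchberg-Saxton-like refinement) follows; the object, patterns, and sizes are made up.

      # Correlation-based ghost imaging reconstruction on synthetic data.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 32                               # image is n x n pixels
      obj = np.zeros((n, n))
      obj[8:24, 12:20] = 1.0               # simple synthetic object (QR-like block)

      m = 4000                             # number of measurements
      patterns = rng.random((m, n, n))     # random illumination patterns
      bucket = patterns.reshape(m, -1) @ obj.ravel()   # single-pixel (bucket) signals

      # G(x, y) = <B * I(x, y)> - <B> <I(x, y)>
      recon = (bucket[:, None, None] * patterns).mean(axis=0) \
              - bucket.mean() * patterns.mean(axis=0)
      recon = (recon - recon.min()) / (recon.max() - recon.min())   # normalize for display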

  6. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks.

    PubMed

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-07-09

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption.
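
    The paper's three packet-size-adaptive schemes are not reproduced here, but as a baseline for comparison, a minimal per-packet source-authentication sketch using a pre-shared key and HMAC (rather than μTESLA's delayed key disclosure or digital signatures) might look as follows; all names and sizes are illustrative.

      # Generic per-packet authentication of code-image fragments with HMAC-SHA256.
      # This is a simple pre-shared-key baseline, not the schemes proposed in the paper.
      import hmac, hashlib

      KEY = b"pre-shared network key"       # assumption: key distribution handled elsewhere

      def make_packet(seq, payload):
          header = seq.to_bytes(4, "big")
          tag = hmac.new(KEY, header + payload, hashlib.sha256).digest()[:8]
          return header + payload + tag

      def verify_packet(packet):
          header, payload, tag = packet[:4], packet[4:-8], packet[-8:]
          expected = hmac.new(KEY, header + payload, hashlib.sha256).digest()[:8]
          return hmac.compare_digest(tag, expected)

      pkt = make_packet(0, b"first fragment of the new code image")
      assert verify_packet(pkt)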

  7. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks †

    PubMed Central

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-01-01

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption. PMID:27409616

  8. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos-Villalobos, Hector J; Gregor, Jens; Bingham, Philip R

    2014-01-01

    At the present, neutron sources cannot be fabricated small and powerful enough in order to achieve high resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded-mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes and the flux is increased by increasing the size of the coded-mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps around 50 um. To overcome this challenge, the coded-mask and object are magnified by making the distance from the coded-mask to the object much smaller than the distance from object to detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
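
    As a much-simplified analogue of the model-based least-squares formulation, the sketch below builds a circulant forward model from a 1D coded mask, simulates noisy measurements, and solves the least-squares problem. The mask, sizes, and noise level are arbitrary; the actual system model additionally encodes the measured CG1D source distribution, magnification, and detector response.

      # Toy 1D coded-aperture forward model and least-squares reconstruction.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 64
      mask = rng.integers(0, 2, n).astype(float)     # hypothetical coded-mask pattern

      # Circulant forward model: each measurement correlates the object
      # with a shifted copy of the mask.
      A = np.stack([np.roll(mask, s) for s in range(n)])

      obj = np.zeros(n)
      obj[20:28] = 1.0                               # synthetic object
      b = A @ obj + 0.01 * rng.standard_normal(n)    # noisy measurements

      # Least-squares estimate of the object from the measurements.
      recon, *_ = np.linalg.lstsq(A, b, rcond=None)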

  9. An Evaluation of Comparability between NEISS and ICD-9-CM Injury Coding

    PubMed Central

    Thompson, Meghan C.; Wheeler, Krista K.; Shi, Junxin; Smith, Gary A.; Xiang, Huiyun

    2014-01-01

    Objective To evaluate the National Electronic Injury Surveillance System’s (NEISS) comparability with a data source that uses ICD-9-CM coding. Methods A sample of NEISS cases from a children’s hospital in 2008 was selected, and cases were linked with their original medical record. Medical records were reviewed and an ICD-9-CM code was assigned to each case. Cases in the NEISS sample that were non-injuries by ICD-9-CM standards were identified. A bridging matrix between the NEISS and ICD-9-CM injury coding systems, by type of injury classification, was proposed and evaluated. Results Of the 2,890 cases reviewed, 13.32% (n = 385) were non-injuries according to the ICD-9-CM diagnosis. Using the proposed matrix, the comparability of the NEISS with ICD-9-CM coding was favorable among injury cases (κ = 0.87, 95% CI: 0.85–0.88). The distribution of injury types among the entire sample was similar for the two systems, with percentage differences ≥1% for only open wounds or amputation, poisoning, and other or unspecified injury types. Conclusions There is potential for conducting comparable injury research using NEISS and ICD-9-CM data. Due to the inclusion of some non-injuries in the NEISS and some differences in type of injury definitions between NEISS and ICD-9-CM coding, best practice for studies using NEISS data obtained from the CPSC should include manual review of case narratives. Use of the standardized injury and injury type definitions presented in this study will facilitate more accurate comparisons in injury research. PMID:24658100

  10. An evaluation of comparability between NEISS and ICD-9-CM injury coding.

    PubMed

    Thompson, Meghan C; Wheeler, Krista K; Shi, Junxin; Smith, Gary A; Xiang, Huiyun

    2014-01-01

    To evaluate the National Electronic Injury Surveillance System's (NEISS) comparability with a data source that uses ICD-9-CM coding. A sample of NEISS cases from a children's hospital in 2008 was selected, and cases were linked with their original medical record. Medical records were reviewed and an ICD-9-CM code was assigned to each case. Cases in the NEISS sample that were non-injuries by ICD-9-CM standards were identified. A bridging matrix between the NEISS and ICD-9-CM injury coding systems, by type of injury classification, was proposed and evaluated. Of the 2,890 cases reviewed, 13.32% (n = 385) were non-injuries according to the ICD-9-CM diagnosis. Using the proposed matrix, the comparability of the NEISS with ICD-9-CM coding was favorable among injury cases (κ = 0.87, 95% CI: 0.85-0.88). The distribution of injury types among the entire sample was similar for the two systems, with percentage differences ≥1% for only open wounds or amputation, poisoning, and other or unspecified injury types. There is potential for conducting comparable injury research using NEISS and ICD-9-CM data. Due to the inclusion of some non-injuries in the NEISS and some differences in type of injury definitions between NEISS and ICD-9-CM coding, best practice for studies using NEISS data obtained from the CPSC should include manual review of case narratives. Use of the standardized injury and injury type definitions presented in this study will facilitate more accurate comparisons in injury research.

  11. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

    We aim to develop a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of varying code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
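
    The syndrome-coding branch of such a protocol can be illustrated with a toy Slepian-Wolf example: the encoder sends only the Hamming(7,4) syndrome of each 7-bit block, and the decoder corrects its reference (side information) toward the source, which succeeds whenever source and reference differ in at most one bit per block. This fixed toy code stands in for, but is not, the adaptive syndrome/hash selection described above.

      # Toy syndrome coding with a Hamming(7,4) parity-check matrix.
      import numpy as np

      # Columns of H are the binary representations of 1..7.
      H = np.array([[int(b) for b in format(col, "03b")] for col in range(1, 8)]).T

      def encode(x):                 # x: length-7 array of 0/1
          return H @ x % 2           # 3-bit syndrome sent to the decoder

      def decode(syndrome, y):       # y: reference block (side information)
          s = (syndrome + H @ y) % 2          # syndrome of the error pattern x ^ y
          if not s.any():
              return y.copy()
          pos = int("".join(map(str, s)), 2) - 1   # index of the differing bit
          x_hat = y.copy()
          x_hat[pos] ^= 1
          return x_hat

      x = np.array([1, 0, 1, 1, 0, 0, 1])
      y = x.copy(); y[4] ^= 1        # reference differs from the source in one position
      assert np.array_equal(decode(encode(x), y), x)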

  12. Sensitivity of the Cherenkov Telescope Array to the Detection of Intergalactic Magnetic Fields

    NASA Astrophysics Data System (ADS)

    Meyer, Manuel; Conrad, Jan; Dickinson, Hugh

    2016-08-01

    Very high energy (VHE; energy E ≳ 100 GeV) γ-rays originating from extragalactic sources undergo pair production with low-energy photons of background radiation fields. These pairs can inverse-Compton-scatter background photons, initiating an electromagnetic cascade. The spatial and temporal structure of this secondary γ-ray signal is altered as the e+e- pairs are deflected in an intergalactic magnetic field (IGMF). We investigate how VHE observations with the future Cherenkov Telescope Array, with its high angular resolution and broad energy range, can potentially probe the IGMF. We identify promising sources and simulate γ-ray spectra over a wide range of values of the IGMF strength and coherence length using the publicly available ELMAG Monte Carlo code. Combining simulated observations in a joint likelihood approach, we find that current limits on the IGMF can be significantly improved. The projected sensitivity depends strongly on the time a source has been γ-ray active and on the emitted maximum γ-ray energy.

  13. The FORTRAN static source code analyzer program (SAP) system description

    NASA Technical Reports Server (NTRS)

    Decker, W.; Taylor, W.; Merwarth, P.; Oneill, M.; Goorevich, C.; Waligora, S.

    1982-01-01

    A source code analyzer program (SAP) designed to assist personnel in conducting studies of FORTRAN programs is described. SAP scans FORTRAN source code and produces reports that present statistics and measures of the statements and structures that make up a module. The processing performed by SAP, and the routines, COMMON blocks, and files used by SAP, are described. The system generation procedure for SAP is also presented.
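
    A minimal sketch in the spirit of SAP, scanning fixed-form FORTRAN source and tallying comment lines, statements, and a few keyword counts per file, is shown below; the keyword list and reporting format are illustrative, not SAP's actual statistics.

      # Tally a few statement types in fixed-form FORTRAN source files.
      import re
      import sys
      from collections import Counter

      KEYWORDS = ("SUBROUTINE", "FUNCTION", "COMMON", "IF", "DO", "GOTO",
                  "CALL", "READ", "WRITE", "FORMAT", "RETURN")

      def scan(path):
          counts = Counter()
          with open(path) as fh:
              for line in fh:
                  if line[:1] in ("C", "c", "*", "!"):   # comment line
                      counts["comments"] += 1
                      continue
                  stmt = line[6:72].strip().upper()      # fixed-form statement field
                  if not stmt:
                      continue
                  counts["statements"] += 1
                  for kw in KEYWORDS:
                      if re.match(rf"{kw}\b", stmt):
                          counts[kw] += 1
                          break
          return counts

      if __name__ == "__main__":
          for path in sys.argv[1:]:
              print(path, dict(scan(path)))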

  14. Data integration of structured and unstructured sources for assigning clinical codes to patient stays

    PubMed Central

    Luyckx, Kim; Luyten, Léon; Daelemans, Walter; Van den Bulcke, Tim

    2016-01-01

    Objective Enormous amounts of healthcare data are becoming increasingly accessible through the large-scale adoption of electronic health records. In this work, structured and unstructured (textual) data are combined to assign clinical diagnostic and procedural codes (specifically ICD-9-CM) to patient stays. We investigate whether integrating these heterogeneous data types improves prediction strength compared to using the data types in isolation. Methods Two separate data integration approaches were evaluated. Early data integration combines features of several sources within a single model, and late data integration learns a separate model per data source and combines these predictions with a meta-learner. This is evaluated on data sources and clinical codes from a broad set of medical specialties. Results When compared with the best individual prediction source, late data integration leads to improvements in predictive power (eg, overall F-measure increased from 30.6% to 38.3% for International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnostic codes), while early data integration is less consistent. The predictive strength strongly differs between medical specialties, both for ICD-9-CM diagnostic and procedural codes. Discussion Structured data provides complementary information to unstructured data (and vice versa) for predicting ICD-9-CM codes. This can be captured most effectively by the proposed late data integration approach. Conclusions We demonstrated that models using multiple electronic health record data sources systematically outperform models using data sources in isolation in the task of predicting ICD-9-CM codes over a broad range of medical specialties. PMID:26316458
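
    A minimal sketch of the late-integration idea, with one model per data source and a logistic-regression meta-learner stacked on their predicted probabilities, is shown below on synthetic data. In practice the meta-learner should be trained on out-of-fold predictions to avoid leakage, and the feature blocks here are placeholders for the structured and textual sources used in the paper.

      # Late data integration: one model per source, then a meta-learner.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 500
      X_struct = rng.normal(size=(n, 10))     # stand-in for structured fields
      X_text = rng.normal(size=(n, 50))       # stand-in for text (e.g. TF-IDF) features
      y = (X_struct[:, 0] + X_text[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

      idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

      m_struct = LogisticRegression(max_iter=1000).fit(X_struct[idx_train], y[idx_train])
      m_text = LogisticRegression(max_iter=1000).fit(X_text[idx_train], y[idx_train])

      def stack(idx):
          # Each source contributes its predicted probability as a meta-feature.
          return np.column_stack([m_struct.predict_proba(X_struct[idx])[:, 1],
                                  m_text.predict_proba(X_text[idx])[:, 1]])

      meta = LogisticRegression().fit(stack(idx_train), y[idx_train])
      print("late-integration accuracy:", meta.score(stack(idx_test), y[idx_test]))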

  15. 78 FR 38750 - Self-Regulatory Organizations; NASDAQ OMX PHLX LLC; Notice of Filing of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-27

    ... (also known as origin code) refers to the participant types listed in Rule 1080.08(b) and Rule 1000(b..., and, therefore, is referring to the participant origin codes in Rule 1080.08(b) only. The proposed...-Regulatory Organizations; NASDAQ OMX PHLX LLC; Notice of Filing of Proposed Rule Change Relating to Which...

  16. Multispectral Terrain Background Simulation Techniques For Use In Airborne Sensor Evaluation

    NASA Astrophysics Data System (ADS)

    Weinberg, Michael; Wohlers, Ronald; Conant, John; Powers, Edward

    1988-08-01

    A background simulation code developed at Aerodyne Research, Inc., called AERIE is designed to reflect the major sources of clutter that are of concern to staring and scanning sensors of the type being considered for various airborne threat warning (both aircraft and missiles) sensors. The code is a first principles model that could be used to produce a consistent image of the terrain for various spectral bands, i.e., provide the proper scene correlation both spectrally and spatially. The code utilizes both topographic and cultural features to model terrain, typically from DMA data, with a statistical overlay of the critical underlying surface properties (reflectance, emittance, and thermal factors) to simulate the resulting texture in the scene. Strong solar scattering from water surfaces is included with allowance for wind driven surface roughness. Clouds can be superimposed on the scene using physical cloud models and an analytical representation of the reflectivity obtained from scattering off spherical particles. The scene generator is augmented by collateral codes that allow for the generation of images at finer resolution. These codes provide interpolation of the basic DMA databases using fractal procedures that preserve the high frequency power spectral density behavior of the original scene. Scenes are presented illustrating variations in altitude, radiance, resolution, material, thermal factors, and emissivities. The basic models utilized for simulation of the various scene components and various "engineering level" approximations are incorporated to reduce the computational complexity of the simulation.
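
    The fractal refinement mentioned above can be sketched with one-dimensional midpoint displacement, which inserts midpoints whose random offsets shrink geometrically with level and thereby adds power-law high-frequency detail to a coarse profile. The roughness and amplitude parameters below are arbitrary, and the DMA-specific handling is not reproduced.

      # 1D midpoint-displacement refinement of a coarse elevation profile.
      import numpy as np

      def midpoint_refine(profile, levels, H=0.8, sigma=30.0, seed=0):
          rng = np.random.default_rng(seed)
          z = np.asarray(profile, dtype=float)
          for level in range(levels):
              mids = 0.5 * (z[:-1] + z[1:])
              # Offsets shrink by 2**-H per level, giving fractal detail.
              mids += rng.normal(scale=sigma * 2 ** (-H * (level + 1)), size=mids.size)
              out = np.empty(z.size + mids.size)
              out[0::2] = z
              out[1::2] = mids
              z = out
          return z

      coarse = [120.0, 135.0, 128.0, 150.0, 142.0]   # hypothetical coarse elevations (m)
      fine = midpoint_refine(coarse, levels=4)
      print(len(coarse), "->", len(fine), "samples")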

  17. Structural Phylogenomics Retrodicts the Origin of the Genetic Code and Uncovers the Evolutionary Impact of Protein Flexibility

    PubMed Central

    Caetano-Anollés, Gustavo; Wang, Minglei; Caetano-Anollés, Derek

    2013-01-01

    The genetic code shapes the genetic repository. Its origin has puzzled molecular scientists for over half a century and remains a long-standing mystery. Here we show that the origin of the genetic code is tightly coupled to the history of aminoacyl-tRNA synthetase enzymes and their interactions with tRNA. A timeline of evolutionary appearance of protein domain families derived from a structural census in hundreds of genomes reveals the early emergence of the ‘operational’ RNA code and the late implementation of the standard genetic code. The emergence of codon specificities and amino acid charging involved tight coevolution of aminoacyl-tRNA synthetases and tRNA structures as well as episodes of structural recruitment. Remarkably, amino acid and dipeptide compositions of single-domain proteins appearing before the standard code suggest archaic synthetases with structures homologous to catalytic domains of tyrosyl-tRNA and seryl-tRNA synthetases were capable of peptide bond formation and aminoacylation. Results reveal that genetics arose through coevolutionary interactions between polypeptides and nucleic acid cofactors as an exacting mechanism that favored flexibility and folding of the emergent proteins. These enhancements of phenotypic robustness were likely internalized into the emerging genetic system with the early rise of modern protein structure. PMID:23991065

  18. Entropy-Based Bounds On Redundancies Of Huffman Codes

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J.

    1992-01-01

    Report presents extension of theory of redundancy of binary prefix code of Huffman type which includes derivation of variety of bounds expressed in terms of entropy of source and size of alphabet. Recent developments yielded bounds on redundancy of Huffman code in terms of probabilities of various components in source alphabet. In practice, redundancies of optimal prefix codes are often closer to 0 than to 1.
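
    A small worked example of the redundancy in question, the gap between the average Huffman codeword length and the source entropy, is sketched below for a hypothetical five-symbol source.

      # Build a binary Huffman code and compare average length with entropy.
      import heapq
      import math

      def huffman_lengths(probs):
          """Return codeword lengths of a binary Huffman code for the given pmf."""
          heap = [(p, i, (i,)) for i, p in enumerate(probs)]   # (prob, tiebreak, symbols)
          lengths = [0] * len(probs)
          heapq.heapify(heap)
          count = len(probs)
          while len(heap) > 1:
              p1, _, s1 = heapq.heappop(heap)
              p2, _, s2 = heapq.heappop(heap)
              for sym in s1 + s2:          # every merge adds one bit to these symbols
                  lengths[sym] += 1
              heapq.heappush(heap, (p1 + p2, count, s1 + s2))
              count += 1
          return lengths

      probs = [0.4, 0.2, 0.2, 0.1, 0.1]    # hypothetical source pmf
      lengths = huffman_lengths(probs)
      avg_len = sum(p * l for p, l in zip(probs, lengths))
      entropy = -sum(p * math.log2(p) for p in probs)
      print("average length:", avg_len)
      print("entropy       :", round(entropy, 4))
      print("redundancy    :", round(avg_len - entropy, 4))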

  19. 40 CFR Appendix A to Subpart A of... - Tables

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... phone number ✓ ✓ (6) FIPS code ✓ ✓ (7) Facility ID codes ✓ ✓ (8) Unit ID code ✓ ✓ (9) Process ID code... for Reporting on Emissions From Nonpoint Sources and Nonroad Mobile Sources, Where Required by 40 CFR... start date ✓ ✓ (3) Inventory end date ✓ ✓ (4) Contact name ✓ ✓ (5) Contact phone number ✓ ✓ (6) FIPS...

  20. SOURCELESS STARTUP. A MACHINE CODE FOR COMPUTING LOW-SOURCE REACTOR STARTUPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacMillan, D.B.

    1960-06-01

    A revision to the sourceless start-up code is presented. The code solves a system of differential equations encountered in computing the probability distribution of activity at an observed power level during reactor start-up from a very low source level. (J.R.D.)

  1. 48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...

  2. 48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...

  3. 48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...

  4. 48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar data produced for...

  5. 12 CFR Appendix A to Part 203 - Form and Instructions for Completion of HMDA Loan/Application Register

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...—American Indian or Alaska Native Code 2—Asian Code 3—Black or African American Code 4—Native Hawaiian or... secondary market entity within the same calendar year: Code 0—Loan was not originated or was not sold in...

  6. 12 CFR Appendix A to Part 203 - Form and Instructions for Completion of HMDA Loan/Application Register

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...—American Indian or Alaska Native Code 2—Asian Code 3—Black or African American Code 4—Native Hawaiian or... secondary market entity within the same calendar year: Code 0—Loan was not originated or was not sold in...

  7. 12 CFR Appendix A to Part 1003 - Form and Instructions for Completion of HMDA Loan/Application Register

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...—American Indian or Alaska Native Code 2—Asian Code 3—Black or African American Code 4—Native Hawaiian or... secondary market entity within the same calendar year: Code 0—Loan was not originated or was not sold in...

  8. 12 CFR Appendix A to Part 203 - Form and Instructions for Completion of HMDA Loan/Application Register

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...—American Indian or Alaska Native Code 2—Asian Code 3—Black or African American Code 4—Native Hawaiian or... secondary market entity within the same calendar year: Code 0—Loan was not originated or was not sold in...

  9. 12 CFR Appendix A to Part 1003 - Form and Instructions for Completion of HMDA Loan/Application Register

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...—American Indian or Alaska Native Code 2—Asian Code 3—Black or African American Code 4—Native Hawaiian or... secondary market entity within the same calendar year: Code 0—Loan was not originated or was not sold in...

  10. 12 CFR Appendix A to Part 1003 - Form and Instructions for Completion of HMDA Loan/Application Register

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...—American Indian or Alaska Native Code 2—Asian Code 3—Black or African American Code 4—Native Hawaiian or... secondary market entity within the same calendar year: Code 0—Loan was not originated or was not sold in...

  11. Experimental QR code optical encryption: noise-free data recovering.

    PubMed

    Barrera, John Fredy; Mira-Agudelo, Alejandro; Torroba, Roberto

    2014-05-15

    We report, to our knowledge for the first time, the experimental implementation of a quick response (QR) code as a "container" in an optical encryption system. A joint transform correlator architecture in an interferometric configuration is chosen as the experimental scheme. As the implementation is not possible in a single step, a multiplexing procedure to encrypt the QR code of the original information is applied. Once the QR code is correctly decrypted, the speckle noise present in the recovered QR code is eliminated by a simple digital procedure. Finally, the original information is retrieved completely free of any kind of degradation after reading the QR code. Additionally, we propose and implement a new protocol in which the reception of the encrypted QR code and its decryption, the digital block processing, and the reading of the decrypted QR code are performed employing only one device (smartphone, tablet, or computer). The overall method proves to produce an outcome that is far more attractive, making the adoption of the technique a plausible option. Experimental results are presented to demonstrate the practicality of the proposed security system.

  12. Embedding QR codes in tumor board presentations, enhancing educational content for oncology information management.

    PubMed

    Siderits, Richard; Yates, Stacy; Rodriguez, Arelis; Lee, Tina; Rimmer, Cheryl; Roche, Mark

    2011-01-01

    Quick Response (QR) Codes are standard in supply management and seen with increasing frequency in advertisements. They are now present regularly in healthcare informatics and education. These 2-dimensional square bar codes, originally designed by the Toyota car company, are free of license and have a published international standard. The codes can be generated by free online software and the resulting images incorporated into presentations. The images can be scanned by "smart" phones and tablets using either the iOS or Android platforms, which link the device with the information represented by the QR code (uniform resource locator or URL, online video, text, v-calendar entries, short message service [SMS] and formatted text). Once linked to the device, the information can be viewed at any time after the original presentation, saved in the device or to a Web-based "cloud" repository, printed, or shared with others via email or Bluetooth file transfer. This paper describes how we use QR codes in our tumor board presentations, discusses the benefits, how QR codes differ from Web links, and how QR codes facilitate the distribution of educational content.
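
    A minimal sketch of generating such a QR code image for a slide is shown below, assuming the third-party Python qrcode package (with Pillow) is available; the URL is a placeholder.

      # Generate a QR code PNG to drop into a presentation slide.
      # Assumes the "qrcode" package (with Pillow) is installed; the URL is a placeholder.
      import qrcode

      img = qrcode.make("https://example.org/tumor-board/2011-06/case-03")
      img.save("case-03-reference.png")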

  13. A Wideband Satcom Based Avionics Network with CDMA Uplink and TDM Downlink

    NASA Technical Reports Server (NTRS)

    Agrawal, D.; Johnson, B. S.; Madhow, U.; Ramchandran, K.; Chun, K. S.

    2000-01-01

    The purpose of this paper is to describe some key technical ideas behind our vision of a future satcom based digital communication network for avionics applications. The key features of our design are as follows: (a) Packetized transmission to permit efficient use of system resources for multimedia traffic; (b) A time division multiplexed (TDM) satellite downlink whose physical layer is designed to operate the satellite link at maximum power efficiency. We show how powerful turbo codes (invented originally for linear modulation) can be used with nonlinear constant envelope modulation, thus permitting the satellite amplifier to operate in a power efficient nonlinear regime; (c) A code division multiple access (CDMA) satellite uplink, which permits efficient access to the satellite from multiple asynchronous users. Closed loop power control is difficult for bursty packetized traffic, especially given the large round trip delay to the satellite. We show how adaptive interference suppression techniques can be used to deal with the ensuing near-far problem; (d) Joint source-channel coding techniques are required both at the physical and the data transport layer to optimize the end-to-end performance. We describe a novel approach to multiple description image encoding at the data transport layer in this paper.

  14. Metabolic basis for the self-referential genetic code.

    PubMed

    Guimarães, Romeu Cardoso

    2011-08-01

    An investigation of the biosynthesis pathways producing glycine and serine was necessary to clarify an apparent inconsistency between the self-referential model (SRM) for the formation of the genetic code and the model of coevolution of encodings and of amino acid biosynthesis routes. According to the SRM proposal, glycine was the first amino acid encoded, followed by serine. The coevolution model does not state precisely which the first encodings were, only presenting a list of about ten early assignments, including the derivation of glycine from serine, with serine derived from the glycolysis intermediate glycerate; this reverses the order proposed by the self-referential model. Our search identified the glycine-serine pathway of syntheses based on one-carbon sources, involving activities of the glycine decarboxylase complex and its associated serine hydroxymethyltransferase, which is consistent with the order proposed by the self-referential model and supports its rationale for the origin of the genetic code: protein synthesis was developed inside an early metabolic system, serving the function of a sink of amino acids; the first peptides were glycine-rich and fit for the function of building the early ribonucleoproteins; glycine consumption in proteins drove the fixation of the glycine-serine pathway.

  15. LDPC-based iterative joint source-channel decoding for JPEG2000.

    PubMed

    Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane

    2007-02-01

    A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.

  16. 49 CFR 230.8 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....) NBIC. National Board Inspection Code published by the National Board of Boiler and Pressure Vessel.... American National Standards Institute. API. American Petroleum Institute. ASME. American Society of... separation into parts. Code of original construction. The manufacturer's or industry code in effect when the...

  17. 49 CFR 230.8 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....) NBIC. National Board Inspection Code published by the National Board of Boiler and Pressure Vessel.... American National Standards Institute. API. American Petroleum Institute. ASME. American Society of... separation into parts. Code of original construction. The manufacturer's or industry code in effect when the...

  18. 49 CFR 230.8 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....) NBIC. National Board Inspection Code published by the National Board of Boiler and Pressure Vessel.... American National Standards Institute. API. American Petroleum Institute. ASME. American Society of... separation into parts. Code of original construction. The manufacturer's or industry code in effect when the...

  19. 48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... funds; (ii) Studies, analyses, test data, or similar data produced for this contract, when the study...

  20. Content-based multiple bitstream image transmission over noisy channels.

    PubMed

    Cao, Lei; Chen, Chang Wen

    2002-01-01

    In this paper, we propose a novel combined source and channel coding scheme for image transmission over noisy channels. The main feature of the proposed scheme is a systematic decomposition of image sources so that unequal error protection can be applied according to not only bit error sensitivity but also visual content importance. The wavelet transform is adopted to hierarchically decompose the image. The association between the wavelet coefficients and what they represent spatially in the original image is fully exploited so that wavelet blocks are classified based on their corresponding image content. The classification produces wavelet blocks in each class with similar content and statistics, therefore enables high performance source compression using the set partitioning in hierarchical trees (SPIHT) algorithm. To combat the channel noise, an unequal error protection strategy with rate-compatible punctured convolutional/cyclic redundancy check (RCPC/CRC) codes is implemented based on the bit contribution to both peak signal-to-noise ratio (PSNR) and visual quality. At the receiving end, a postprocessing method making use of the SPIHT decoding structure and the classification map is developed to restore the degradation due to the residual error after channel decoding. Experimental results show that the proposed scheme is indeed able to provide protection both for the bits that are more sensitive to errors and for the more important visual content under a noisy transmission environment. In particular, the reconstructed images illustrate consistently better visual quality than using the single-bitstream-based schemes.

  1. A review of routinely collected data studies in urology: Methodological considerations, reporting quality, and future directions.

    PubMed

    Welk, Blayne; Kwong, Justin

    2017-01-01

    Studies using routinely collected data (RCD) are common in the urological literature; however, there are important considerations in the creation and review of RCD discoveries. A recent reporting guideline (REporting of studies Conducted using Observational Routinely-collected health Data, RECORD) was developed to improve the reporting of these studies. This narrative review examines important considerations for RCD studies. To assess the current level of reporting in the urological literature, we reviewed all the original research articles published in Journal of Urology and European Urology in 2014, and determined the proportion of the RECORD checklist items that were reported for RCD studies. There were 56 RCD studies identified among the 608 articles. When the RECORD items were considered applicable to the specific study, they were reported in 52.5% of cases. Studies most consistently (>80% of them) reported the names of the data sources, the study time frame, the extent to which the authors could access the database source, the patient selection, and discussed missing data. Few studies (<25%) discussed validation of key coding elements, details on data-linkage, data-cleaning, the impact of changing eligibility over time, or provided the complete list of coding elements used to define key study variables. Reporting factors specifically relevant in RCD studies may serve to increase the quality of these studies in the urological literature. With increased technological integration in healthcare and the proliferation of electronic medical records, RCD will continue to be an important source for urological research.

  2. Code Optimization and Parallelization on the Origins: Looking from Users' Perspective

    NASA Technical Reports Server (NTRS)

    Chang, Yan-Tyng Sherry; Thigpen, William W. (Technical Monitor)

    2002-01-01

    Parallel machines are becoming the main compute engines for high performance computing. Despite their increasing popularity, it is still a challenge for most users to learn the basic techniques to optimize/parallelize their codes on such platforms. In this paper, we present some experiences on learning these techniques for the Origin systems at the NASA Advanced Supercomputing Division. Emphasis of this paper will be on a few essential issues (with examples) that general users should master when they work with the Origins as well as other parallel systems.

  3. Comparative analysis of LWR and FBR spent fuels for nuclear forensics evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Permana, Sidik; Suzuki, Mitsutoshi; Su'ud, Zaki

    2012-06-06

    Several interesting issues are attributed to the nuclide compositions of spent fuels from thermal reactors as well as fast reactors, such as the potential for reuse as recycled fuel and the possible capability to be used as material for destructive devices. In addition, nuclear forensic analysis, which is related to spent fuel compositions, has become an interesting way to evaluate the origin and composition of spent fuel from its foot-prints. Spent fuel compositions of different fuel types give characteristic foot-prints from which the origin of the spent fuel can be estimated. Techniques and methods are being developed based on scientific and technological capabilities, including both experimental and modeling (theoretical) aspects of the analysis. Nuclear forensic foot-prints identify typical information about spent fuel composition such as enrichment, burnup or irradiation time, reactor type, and the cooling time, which is related to the age of the spent fuel. This paper evaluates the typical spent fuel compositions of light water reactors (LWR) and fast breeder reactors (FBR) from the viewpoint of nuclear forensic foot-prints. The established depletion code ORIGEN is adopted to analyze LWR spent fuel (SF) for several burnups and decay times. To analyze FBR spent fuel compositions, coupled codes such as SLAROM, JOINT, and CITATION, with JFS-3-J-3.2R as the nuclear data library, are adopted. Enriched U-235 oxide fuel is used as the LWR fresh fuel and mixed oxide fuel (MOX) as the FBR fresh fuel; the MOX fuel for the FBR comes from LWR spent fuel. Typical spent fuels from both LWR and FBR are compared to distinguish characteristic foot-prints of SF based on nuclear forensic analysis.

  4. SolTrace | Concentrating Solar Power | NREL

    Science.gov Websites

    SolTrace is available as an NREL packaged distribution or from source code at the SolTrace open source project website. The code uses Monte-Carlo ray-tracing methodology. With the release of the SolTrace open source project, the software has adopted ...

  5. Parallelization of interpolation, solar radiation and water flow simulation modules in GRASS GIS using OpenMP

    NASA Astrophysics Data System (ADS)

    Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav

    2017-10-01

    In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and parallelization approaches used in the modules. Our approach includes the analysis of the module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using the airborne laser scanning data representing land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from original modules. The presented parallelization approach showed the simplicity and efficiency of the parallelization of open-source GRASS GIS modules using OpenMP, leading to an increased performance of this geospatial software on standard multi-core computers.
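
    The modules above are C code parallelized with OpenMP; as a language-neutral illustration of the same pattern (split the raster into row blocks, process the blocks concurrently, reassemble the result), here is a small Python sketch using multiprocessing. The per-cell kernel is a placeholder, not the actual interpolation or radiation model of the GRASS modules.

        # Conceptual sketch of the parallelization pattern: partition a raster
        # into row blocks and process them in parallel workers.
        import numpy as np
        from multiprocessing import Pool

        def process_block(block):
            # stand-in for an expensive per-cell computation
            return np.sqrt(block) + np.log1p(block)

        def parallel_raster(raster, n_workers=4):
            blocks = np.array_split(raster, n_workers, axis=0)   # row-wise split
            with Pool(n_workers) as pool:
                results = pool.map(process_block, blocks)
            return np.vstack(results)

        if __name__ == "__main__":
            dem = np.random.rand(2000, 2000) * 100.0             # synthetic grid
            out = parallel_raster(dem)
            assert np.allclose(out, process_block(dem))          # matches serial run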

  6. An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks.

    PubMed

    Yu, Shidi; Liu, Xiao; Liu, Anfeng; Xiong, Naixue; Cai, Zhiping; Wang, Tian

    2018-05-10

    Thanks to Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can acquire new functions after their program codes are updated. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed across the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, yielding better performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, the delay. (2) As the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption at some transmitting nodes, but the radius is enlarged only in areas with an energy surplus, and energy consumption in the hot-spots can instead be reduced because some nodes transmit data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption can almost reach a balance and network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which doesn't affect the network lifetime, to nodes having different distances to the code source, and then provides an algorithm to construct a broadcast backbone. Finally, a comprehensive performance analysis and simulation results show that the proposed ABRCD scheme achieves better performance in different broadcast situations. Compared to previous schemes, the transmission delay is reduced by 41.11~78.42%, the number of broadcasts is reduced by 36.18~94.27%, and the energy utilization ratio is improved by up to 583.42%, while the network lifetime can be prolonged by up to 274.99%.
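
    A toy sketch of the radius-assignment idea: nodes near the sink (the hot-spot) keep the default radius so network lifetime is not shortened, while nodes with surplus residual energy are given a larger broadcast radius. The energy model, thresholds, and scaling rule are invented for illustration and are not the paper's actual ABRCD algorithm.

        # Toy adaptive broadcast radii: enlarge the radius only where residual
        # energy is plentiful and the node is outside the hot-spot.
        def assign_radius(node, r_default=1.0, r_max=3.0, e_threshold=0.5):
            """node: dict with 'residual_energy' (0..1) and 'dist_to_sink' (hops)."""
            if node["dist_to_sink"] <= 1:
                return r_default                        # hot-spot nodes: never enlarge
            if node["residual_energy"] >= e_threshold:
                surplus = node["residual_energy"] / e_threshold
                return min(r_max, r_default * surplus)  # scale with surplus, capped
            return r_default

        nodes = [{"id": 1, "residual_energy": 0.9, "dist_to_sink": 5},
                 {"id": 2, "residual_energy": 0.4, "dist_to_sink": 1},
                 {"id": 3, "residual_energy": 0.7, "dist_to_sink": 3}]
        for n in nodes:
            print(n["id"], assign_radius(n))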

  7. An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks

    PubMed Central

    Yu, Shidi; Liu, Xiao; Cai, Zhiping; Wang, Tian

    2018-01-01

    Thanks to Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can acquire new functions after their program codes are updated. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed across the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, yielding better performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, the delay. (2) As the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption at some transmitting nodes, but the radius is enlarged only in areas with an energy surplus, and energy consumption in the hot-spots can instead be reduced because some nodes transmit data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption can almost reach a balance and network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which doesn't affect the network lifetime, to nodes having different distances to the code source, and then provides an algorithm to construct a broadcast backbone. Finally, a comprehensive performance analysis and simulation results show that the proposed ABRCD scheme achieves better performance in different broadcast situations. Compared to previous schemes, the transmission delay is reduced by 41.11~78.42%, the number of broadcasts is reduced by 36.18~94.27%, and the energy utilization ratio is improved by up to 583.42%, while the network lifetime can be prolonged by up to 274.99%. PMID:29748525

  8. Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    NASA Astrophysics Data System (ADS)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by Neutron Activation Analysis (NAA) of 197Au foils. The neutron fluence rate distribution versus energy is also calculated using the MCNP-4B code based on the ENDF/B-V library. Both the theoretical simulation and the experimental work are a new experience for Iranian researchers, helping to establish confidence in the code for further research. In the theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated with the MCNP-4B code. The variations of the fast and thermal neutron fluence rates, measured by the NAA method and calculated with the MCNP code, are compared.

  9. Forming images with thermal neutrons

    NASA Astrophysics Data System (ADS)

    Vanier, Peter E.; Forman, Leon

    2003-01-01

    Thermal neutrons passing through air have scattering lengths of about 20 meters. At further distances, the majority of neutrons emanating from a moderated source will scatter multiple times in the air before being detected, and will not retain information about the location of the source, except that their density will fall off somewhat faster than 1/r^2. However, there remains a significant fraction of the neutrons that will travel 20 meters or more without scattering and can be used to create an image of the source. A few years ago, a proof-of-principle "camera" was demonstrated that could produce images of a scene containing sources of thermalized neutrons and could locate a source comparable in strength with an improvised nuclear device at ranges over 60 meters. The instrument makes use of a coded aperture with a uniformly redundant array of openings, analogous to those used in x-ray and gamma cameras. The detector is a position-sensitive He-3 proportional chamber, originally used for neutron diffraction. A neutron camera has many features in common with those designed for non-focusable photons, as well as some important differences. Potential applications include detecting nuclear smuggling, locating non-metallic land mines, assaying nuclear waste, and surveying for health physics purposes.
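
    The coded-aperture reconstruction mentioned above can be sketched in a few lines: the recorded shadowgram is (circularly) the source distribution convolved with the mask pattern, and correlating it with a balanced decoding array recovers the source image. The tiny random mask below is a stand-in for a true uniformly redundant array, so this only illustrates the correlation step, not real URA sidelobe performance.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 11
        mask = (rng.random((N, N)) < 0.5).astype(float)   # toy aperture (not a true URA)
        decode = 2 * mask - 1                             # balanced decoding array

        def circ_conv(a, b):      # circular convolution via FFT
            return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

        def circ_corr(a, b):      # circular cross-correlation via FFT
            return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

        scene = np.zeros((N, N)); scene[3, 7] = 100.0     # one point source
        shadowgram = circ_conv(scene, mask)               # noise-free detector counts
        image = circ_corr(shadowgram, decode)             # correlation reconstruction
        print(np.unravel_index(np.argmax(image), image.shape))  # expected: (3, 7)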

  10. Littoral lichens as a novel source of potentially bioactive Actinobacteria.

    PubMed

    Parrot, Delphine; Antony-Babu, Sanjay; Intertaglia, Laurent; Grube, Martin; Tomasi, Sophie; Suzuki, Marcelino T

    2015-10-30

    Cultivable Actinobacteria are the largest source of microbially derived bioactive molecules. The high demand for novel antibiotics highlights the need for exploring novel sources of these bacteria. Microbial symbioses with sessile macro-organisms, known to contain bioactive compounds likely of bacterial origin, represent an interesting and underexplored source of Actinobacteria. We studied the diversity and potential for bioactive-metabolite production of Actinobacteria associated with two marine lichens (Lichina confinis and L. pygmaea; from intertidal and subtidal zones) and one littoral lichen (Roccella fuciformis; from supratidal zone) from the Brittany coast (France), as well as the terrestrial lichen Collema auriforme (from a riparian zone, Austria). A total of 247 bacterial strains were isolated using two selective media. Isolates were identified and clustered into 101 OTUs (98% identity) including 51 actinobacterial OTUs. The actinobacterial families observed were: Brevibacteriaceae, Cellulomonadaceae, Gordoniaceae, Micrococcaceae, Mycobacteriaceae, Nocardioidaceae, Promicromonosporaceae, Pseudonocardiaceae, Sanguibacteraceae and Streptomycetaceae. Interestingly, the diversity was most influenced by the selective media rather than lichen species or the level of lichen thallus association. The potential for bioactive-metabolite biosynthesis of the isolates was confirmed by screening genes coding for polyketide synthases types I and II. These results show that littoral lichens are a source of diverse potentially bioactive Actinobacteria.

  11. Littoral lichens as a novel source of potentially bioactive Actinobacteria

    PubMed Central

    Parrot, Delphine; Antony-Babu, Sanjay; Intertaglia, Laurent; Grube, Martin; Tomasi, Sophie; Suzuki, Marcelino T.

    2015-01-01

    Cultivable Actinobacteria are the largest source of microbially derived bioactive molecules. The high demand for novel antibiotics highlights the need for exploring novel sources of these bacteria. Microbial symbioses with sessile macro-organisms, known to contain bioactive compounds likely of bacterial origin, represent an interesting and underexplored source of Actinobacteria. We studied the diversity and potential for bioactive-metabolite production of Actinobacteria associated with two marine lichens (Lichina confinis and L. pygmaea; from intertidal and subtidal zones) and one littoral lichen (Roccella fuciformis; from supratidal zone) from the Brittany coast (France), as well as the terrestrial lichen Collema auriforme (from a riparian zone, Austria). A total of 247 bacterial strains were isolated using two selective media. Isolates were identified and clustered into 101 OTUs (98% identity) including 51 actinobacterial OTUs. The actinobacterial families observed were: Brevibacteriaceae, Cellulomonadaceae, Gordoniaceae, Micrococcaceae, Mycobacteriaceae, Nocardioidaceae, Promicromonosporaceae, Pseudonocardiaceae, Sanguibacteraceae and Streptomycetaceae. Interestingly, the diversity was most influenced by the selective media rather than lichen species or the level of lichen thallus association. The potential for bioactive-metabolite biosynthesis of the isolates was confirmed by screening genes coding for polyketide synthases types I and II. These results show that littoral lichens are a source of diverse potentially bioactive Actinobacteria. PMID:26514347

  12. Two-terminal video coding.

    PubMed

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  13. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes.

    PubMed

    Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S

    2016-03-08

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code - MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes.

  14. More Than Bar Codes: Integrating Global Standards-Based Bar Code Technology Into National Health Information Systems in Ethiopia and Pakistan to Increase End-to-End Supply Chain Visibility.

    PubMed

    Hara, Liuichi; Guirguis, Ramy; Hummel, Keith; Villanueva, Monica

    2017-12-28

    The United Nations Population Fund (UNFPA) and the United States Agency for International Development (USAID) DELIVER PROJECT work together to strengthen public health commodity supply chains by standardizing bar coding under a single set of global standards. From 2015, UNFPA and USAID collaborated to pilot test how tracking and tracing of bar coded health products could be operationalized in the public health supply chains of Ethiopia and Pakistan and inform the ecosystem needed to begin full implementation. Pakistan had been using proprietary bar codes for inventory management of contraceptive supplies but transitioned to global standards-based bar codes during the pilot. The transition allowed Pakistan to leverage the original bar codes that were preprinted by global manufacturers as opposed to printing new bar codes at the central warehouse. However, barriers at lower service delivery levels prevented full realization of end-to-end data visibility. Key barriers at the district level were the lack of a digital inventory management system and absence of bar codes at the primary-level packaging level, such as single blister packs. The team in Ethiopia developed an open-sourced smartphone application that allowed the team to scan bar codes using the mobile phone's camera and to push the captured data to the country's data mart. Real-time tracking and tracing occurred from the central warehouse to the Addis Ababa distribution hub and to 2 health centers. These pilots demonstrated that standardized product identification and bar codes can significantly improve accuracy over manual stock counts while significantly streamlining the stock-taking process, resulting in efficiencies. The pilots also showed that bar coding technology by itself is not sufficient to ensure data visibility. Rather, by using global standards for identification and data capture of pharmaceuticals and medical devices, and integrating the data captured into national and global tracking systems, countries are able to lay the foundation for interoperability and ensure a harmonized language between global health stakeholders. © Hara et al.
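
    The "global standards" referred to here are GS1 application identifiers (AIs) carried in the bar code. A minimal, hypothetical sketch of parsing a GS1 element string for a few common AIs is shown below; a production scanner app would use a complete GS1 parser, and real symbols separate variable-length fields with the FNC1/GS character, which is only crudely handled here.

        # Minimal GS1 element-string parser for a few common AIs.
        GS = "\x1d"                            # FNC1/group-separator character
        FIXED_LEN = {"01": 14, "17": 6}        # GTIN, expiry date (YYMMDD)
        VARIABLE = {"10", "21"}                # batch/lot, serial number

        def parse_gs1(data):
            fields, i = {}, 0
            while i < len(data):
                ai = data[i:i + 2]
                i += 2
                if ai in FIXED_LEN:
                    n = FIXED_LEN[ai]
                    fields[ai], i = data[i:i + n], i + n
                elif ai in VARIABLE:
                    end = data.find(GS, i)
                    end = len(data) if end == -1 else end
                    fields[ai], i = data[i:end], end + 1
                else:
                    raise ValueError("unhandled AI " + ai)
            return fields

        # GTIN (01), expiry 2018-12-31 (17), batch ABC123 (10) -- example values
        print(parse_gs1("01095011015300031718123110ABC123"))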

  15. More Than Bar Codes: Integrating Global Standards-Based Bar Code Technology Into National Health Information Systems in Ethiopia and Pakistan to Increase End-to-End Supply Chain Visibility

    PubMed Central

    Hara, Liuichi; Guirguis, Ramy; Hummel, Keith; Villanueva, Monica

    2017-01-01

    The United Nations Population Fund (UNFPA) and the United States Agency for International Development (USAID) DELIVER PROJECT work together to strengthen public health commodity supply chains by standardizing bar coding under a single set of global standards. From 2015, UNFPA and USAID collaborated to pilot test how tracking and tracing of bar coded health products could be operationalized in the public health supply chains of Ethiopia and Pakistan and inform the ecosystem needed to begin full implementation. Pakistan had been using proprietary bar codes for inventory management of contraceptive supplies but transitioned to global standards-based bar codes during the pilot. The transition allowed Pakistan to leverage the original bar codes that were preprinted by global manufacturers as opposed to printing new bar codes at the central warehouse. However, barriers at lower service delivery levels prevented full realization of end-to-end data visibility. Key barriers at the district level were the lack of a digital inventory management system and absence of bar codes at the primary-level packaging level, such as single blister packs. The team in Ethiopia developed an open-sourced smartphone application that allowed the team to scan bar codes using the mobile phone's camera and to push the captured data to the country's data mart. Real-time tracking and tracing occurred from the central warehouse to the Addis Ababa distribution hub and to 2 health centers. These pilots demonstrated that standardized product identification and bar codes can significantly improve accuracy over manual stock counts while significantly streamlining the stock-taking process, resulting in efficiencies. The pilots also showed that bar coding technology by itself is not sufficient to ensure data visibility. Rather, by using global standards for identification and data capture of pharmaceuticals and medical devices, and integrating the data captured into national and global tracking systems, countries are able to lay the foundation for interoperability and ensure a harmonized language between global health stakeholders. PMID:29284701

  16. Parallelization of an Object-Oriented Unstructured Aeroacoustics Solver

    NASA Technical Reports Server (NTRS)

    Baggag, Abdelkader; Atkins, Harold; Oezturan, Can; Keyes, David

    1999-01-01

    A computational aeroacoustics code based on the discontinuous Galerkin method is ported to several parallel platforms using MPI. The discontinuous Galerkin method is a compact high-order method that retains its accuracy and robustness on non-smooth unstructured meshes. In its semi-discrete form, the discontinuous Galerkin method can be combined with explicit time marching methods making it well suited to time accurate computations. The compact nature of the discontinuous Galerkin method also makes it well suited for distributed memory parallel platforms. The original serial code was written using an object-oriented approach and was previously optimized for cache-based machines. The port to parallel platforms was achieved simply by treating partition boundaries as a type of boundary condition. Code modifications were minimal because boundary conditions were abstractions in the original program. Scalability results are presented for the SCI Origin, IBM SP2, and clusters of SGI and Sun workstations. Slightly superlinear speedup is achieved on a fixed-size problem on the Origin, due to cache effects.

  17. The circulating transcriptome as a source of non-invasive cancer biomarkers: concepts and controversies of non-coding and coding RNA in body fluids

    PubMed Central

    Fernandez-Mercado, Marta; Manterola, Lorea; Larrea, Erika; Goicoechea, Ibai; Arestin, María; Armesto, María; Otaegui, David; Lawrie, Charles H

    2015-01-01

    The gold standard for cancer diagnosis remains the histological examination of affected tissue, obtained either by surgical excision, or radiologically guided biopsy. Such procedures however are expensive, not without risk to the patient, and require consistent evaluation by expert pathologists. Consequently, the search for non-invasive tools for the diagnosis and management of cancer has led to great interest in the field of circulating nucleic acids in plasma and serum. An additional benefit of blood-based testing is the ability to carry out screening and repeat sampling on patients undergoing therapy, or monitoring disease progression allowing for the development of a personalized approach to cancer patient management. Despite having been discovered over 60 years ago, the clear clinical potential of circulating nucleic acids, with the notable exception of prenatal diagnostic testing, has yet to translate into the clinic. The recent discovery of non-coding (nc) RNA (in particular micro(mi)RNAs) in the blood has provided fresh impetus for the field. In this review, we discuss the potential of the circulating transcriptome (coding and ncRNA), as novel cancer biomarkers, the controversy surrounding their origin and biology, and most importantly the hurdles that remain to be overcome if they are really to become part of future clinical practice. PMID:26119132

  18. Analysis of the influence of the heat transfer phenomena on the late phase of the ThAI Iod-12 test

    NASA Astrophysics Data System (ADS)

    Gonfiotti, B.; Paci, S.

    2014-11-01

    Iodine is one of the major contributors to the source term during a severe accident in a Nuclear Power Plant because of its volatility and high radiological consequences. Therefore, large efforts have been made to describe the Iodine behaviour during an accident, especially in the containment system. Due to the lack of experimental data, many attempts were carried out in the last years to fill the gaps in the knowledge of Iodine behaviour. In this framework, two tests (ThAI Iod-11 and Iod-12) were carried out inside a multi-compartment steel vessel. A quite complex transient characterizes these two tests; therefore they are also suitable for thermal-hydraulic benchmarks. The two tests were originally released for a benchmark exercise during the SARNET2 EU Project. At the end of this benchmark a report covering the main findings was issued, stating that the common codes employed in SA studies were able to simulate the tests, but with large discrepancies. The present work concerns the application of the new versions of the ASTEC and MELCOR codes with the aim of carrying out a new code-to-code comparison against the ThAI Iod-12 experimental data, focusing on the influence of the heat exchanges with the outer environment, which seems to be one of the most challenging issues to cope with.

  19. Evaluation of radiological dispersion/consequence codes supporting DOE nuclear facility SARs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Kula, K.R.; Paik, I.K.; Chung, D.Y.

    1996-12-31

    Since the early 1990s, the authorization basis documentation of many U.S. Department of Energy (DOE) nuclear facilities has been upgraded to comply with DOE orders and standards. In this process, many safety analyses have been revised. Unfortunately, there has been nonuniform application of software, and the most appropriate computer and engineering methodologies often are not applied. A DOE Accident Phenomenology and Consequence (APAC) Methodology Evaluation Program was originated at the request of DOE Defense Programs to evaluate the safety analysis methodologies used in nuclear facility authorization basis documentation and to define future cost-effective support and development initiatives. Six areas, including source term development (fire, spills, and explosion analysis), in-facility transport, and dispersion/consequence analysis (chemical and radiological), are contained in the APAC program. The evaluation process, codes considered, key results, and recommendations for future model and software development of the Radiological Dispersion/Consequence Working Group are summarized in this paper.

  20. APORT: a program for the area-based apportionment of county variables to cells of a polar grid. [Airborne pollutant transport models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fields, D.E.; Little, C.A.

    1978-11-01

    The APORT computer code was developed to apportion variables tabulated for polygon-structured civil districts onto cells of a polar grid. The apportionment is based on fractional overlap between the polygon and the grid cells. Centering the origin of the polar system at a pollutant source site yields results that are very useful for assessing and interpreting the effects of airborne pollutant dissemination. The APOPLT graphics code, which uses the same data set as APORT, provides a convenient visual display of the polygon structure and the extent of the polar grid. The APORT/APOPLT methodology was verified by application to county summaries of cattle population for counties surrounding the Oyster Creek, New Jersey, nuclear power plant. These numerical results, which were obtained using approximately 2 min of computer time on an IBM System 360/91 computer, compare favorably to results of manual computations in both speed and accuracy.
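
    The fractional-overlap rule above (each grid cell receives a share of the county value in proportion to the overlapping area) can be sketched with a modern geometry library; shapely is used here purely as a stand-in for APORT's own implementation, and the polar sector is discretized into a polygon.

        # Sketch of area-based apportionment of a county value onto polar cells.
        import math
        from shapely.geometry import Polygon

        def polar_cell(r_in, r_out, az0, az1, n=16):
            """Annular sector centred on the source, approximated as a polygon."""
            outer = [(r_out * math.cos(az0 + (az1 - az0) * k / n),
                      r_out * math.sin(az0 + (az1 - az0) * k / n)) for k in range(n + 1)]
            if r_in == 0:
                inner = [(0.0, 0.0)]
            else:
                inner = [(r_in * math.cos(az1 - (az1 - az0) * k / n),
                          r_in * math.sin(az1 - (az1 - az0) * k / n)) for k in range(n + 1)]
            return Polygon(outer + inner)

        def apportion(county_poly, county_value, cells):
            return {name: county_value * county_poly.intersection(cell).area / county_poly.area
                    for name, cell in cells.items()}

        county = Polygon([(1, 1), (6, 1), (6, 5), (1, 5)])        # toy "county"
        cells = {"r0-5,az0-90": polar_cell(0, 5, 0, math.pi / 2),
                 "r5-10,az0-90": polar_cell(5, 10, 0, math.pi / 2)}
        print(apportion(county, county_value=1200, cells=cells))  # e.g. cattle counts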

  1. Robust GRMHD Evolutions of Merging Black-Hole Binaries in Magnetized Plasma

    NASA Astrophysics Data System (ADS)

    Kelly, Bernard; Etienne, Zachariah; Giacomazzo, Bruno; Baker, John

    2016-03-01

    Black-hole binary (BHB) mergers are expected to be powerful sources of gravitational radiation at stellar and galactic scales. A typical astrophysical environment for these mergers will involve magnetized plasmas accreting onto each hole; the strong-field gravitational dynamics of the merger may churn this plasma in ways that produce characteristic electromagnetic radiation visible to high-energy EM detectors on and above the Earth. Here we return to a cutting-edge GRMHD simulation of equal-mass BHBs in a uniform plasma, originally performed with the Whisky code. Our new tool is the recently released IllinoisGRMHD, a compact, highly-optimized ideal GRMHD code that meshes with the Einstein Toolkit. We establish consistency of IllinoisGRMHD results with the older Whisky results, and investigate the robustness of these results to changes in initial configuration of the BHB and the plasma magnetic field, and discuss the interpretation of the "jet-like" features seen in the Poynting flux post-merger. Work supported in part by NASA Grant 13-ATP13-0077.

  2. Juicebox.js Provides a Cloud-Based Visualization System for Hi-C Data.

    PubMed

    Robinson, James T; Turner, Douglass; Durand, Neva C; Thorvaldsdóttir, Helga; Mesirov, Jill P; Aiden, Erez Lieberman

    2018-02-28

    Contact mapping experiments such as Hi-C explore how genomes fold in 3D. Here, we introduce Juicebox.js, a cloud-based web application for exploring the resulting datasets. Like the original Juicebox application, Juicebox.js allows users to zoom in and out of such datasets using an interface similar to Google Earth. Juicebox.js also has many features designed to facilitate data reproducibility and sharing. Furthermore, Juicebox.js encodes the exact state of the browser in a shareable URL. Creating a public browser for a new Hi-C dataset does not require coding and can be accomplished in under a minute. The web app also makes it possible to create interactive figures online that can complement or replace ordinary journal figures. When combined with Juicer, this makes the entire process of data analysis transparent, insofar as every step from raw reads to published figure is publicly available as open source code. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
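
    The state-in-a-URL idea is independent of Juicebox.js itself and is easy to sketch: serialize the viewer state into query parameters and parse them back when the link is opened. The parameter names below are invented for illustration and are not Juicebox.js's actual URL schema.

        # Round-trip a viewer state through a shareable URL (hypothetical schema).
        from urllib.parse import urlencode, urlparse, parse_qs

        def state_to_url(base, state):
            return base + "?" + urlencode(state)

        def url_to_state(url):
            query = parse_qs(urlparse(url).query)
            return {k: v[0] for k, v in query.items()}

        state = {"map": "https://example.org/data/sample.hic",
                 "locusX": "chr1:10,000,000-20,000,000",
                 "locusY": "chr2:30,000,000-40,000,000",
                 "norm": "KR", "resolution": "5000"}
        url = state_to_url("https://example.org/juicebox", state)
        assert url_to_state(url) == state     # state survives the round trip
        print(url)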

  3. Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks.

    PubMed

    Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly

    2014-02-01

    Hybrid mobile applications (apps) combine the features of Web applications and "native" mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources-file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies "bridges" that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources-the ability to read and write contacts list, local files, etc.-to dozens of potentially malicious Web domains. We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign-origin Web content and explain why they are ineffectual. We then present NoFrak, a capability-based defense against fracking attacks. NoFrak is platform-independent, compatible with any framework and embedded browser, requires no changes to the code of the existing hybrid apps, and does not break their advertising-supported business model.

  4. Using the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Teuben, P. J.; Berriman, G. B.; DuPrie, K.; Hanisch, R. J.; Mink, J. D.; Nemiroff, R. J.; Shamir, L.; Wallin, J. F.

    2013-01-01

    The Astrophysics Source Code Library (ASCL) is a free on-line registry of source codes that are of interest to astrophysicists; with over 500 codes, it is the largest collection of scientist-written astrophysics programs in existence. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are available either via a download site or from an identified source. An advisory committee formed in 2011 provides input and guides the development and expansion of the ASCL, and since January 2012, all accepted ASCL entries are indexed by ADS. Though software is increasingly important for the advancement of science in astrophysics, these methods are still often hidden from view or difficult to find. The ASCL (ascl.net/) seeks to improve the transparency and reproducibility of research by making these vital methods discoverable, and to provide recognition and incentive to those who write and release programs useful for astrophysics research. This poster provides a description of the ASCL, an update on recent additions, and the changes in the astrophysics community we are starting to see because of the ASCL.

  5. McSKY: A hybrid Monte-Carlo line-beam code for shielded gamma skyshine calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Stedry, M.H.

    1994-07-01

    McSKY evaluates skyshine dose from an isotropic, monoenergetic, point photon source collimated into either a vertical cone or a vertical structure with an N-sided polygon cross section. The code assumes an overhead shield of two materials, though the user can specify zero shield thickness for an unshielded calculation. The code uses a Monte-Carlo algorithm to evaluate transport through source shields and the integral line-beam source method to describe photon transport through the atmosphere. The source energy must be between 0.02 and 100 MeV. For heavily shielded sources with energies above 20 MeV, McSKY results must be used cautiously, especially at detector locations near the source.

  6. Implementing Scientific Simulation Codes Highly Tailored for Vector Architectures Using Custom Configurable Computing Machines

    NASA Technical Reports Server (NTRS)

    Rutishauser, David

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters that attempts to minimize execution time, while staying within resource constraints. The flexibility of using a custom reconfigurable implementation is exploited in a unique manner to leverage the lessons learned in vector supercomputer development. The vector processing framework is tailored to the application, with variable parameters that are fixed in traditional vector processing. Benchmark data that demonstrates the functionality and utility of the approach is presented. The benchmark data includes an identified bottleneck in a real case study example vector code, the NASA Langley Terminal Area Simulation System (TASS) application.

  7. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    NASA Astrophysics Data System (ADS)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate joint design of quasi-cyclic low-density-parity-check (QC-LDPC) codes for coded cooperation system with joint iterative decoding in the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles including both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation well combines cooperation gain and channel coding gain, and outperforms the coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
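
    A small sketch of the QC-LDPC machinery referred to above: an exponent (shift) matrix is lifted to a binary parity-check matrix by replacing each entry with a cyclically shifted ZxZ identity block (or a zero block for -1 entries), and length-4 cycles within one such matrix can be detected with the standard 2x2 sub-block condition. The example matrix and lifting size Z are arbitrary, and the check covers only the classic single-matrix girth-4 condition, not the paper's joint source-relay design.

        import numpy as np
        from itertools import combinations

        def expand(exponents, Z):
            """Lift an exponent matrix: entry e >= 0 -> ZxZ identity shifted by e,
            entry -1 -> ZxZ zero block."""
            rows = []
            for row in exponents:
                blocks = [np.zeros((Z, Z), dtype=int) if e < 0
                          else np.roll(np.eye(Z, dtype=int), e, axis=1) for e in row]
                rows.append(np.hstack(blocks))
            return np.vstack(rows)

        def has_girth4(exponents, Z):
            """A length-4 cycle exists iff some all-nonnegative 2x2 sub-block satisfies
            e11 - e12 + e22 - e21 = 0 (mod Z)."""
            E = np.asarray(exponents)
            for r1, r2 in combinations(range(E.shape[0]), 2):
                for c1, c2 in combinations(range(E.shape[1]), 2):
                    e11, e12, e21, e22 = E[r1, c1], E[r1, c2], E[r2, c1], E[r2, c2]
                    if min(e11, e12, e21, e22) >= 0 and (e11 - e12 + e22 - e21) % Z == 0:
                        return True
            return False

        exponents = [[0, 1, 2, -1],
                     [3, -1, 1, 0]]
        Z = 5
        print(expand(exponents, Z).shape, has_girth4(exponents, Z))  # (10, 20) False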

  8. Performance and Architecture Lab Modeling Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.

  9. The evolution of the genetic code: Impasses and challenges.

    PubMed

    Kun, Ádám; Radványi, Ádám

    2018-02-01

    The origin of the genetic code and translation is a "notoriously difficult problem". In this survey we present a list of questions that a full theory of the genetic code needs to answer. We assess the leading hypotheses according to these criteria. The stereochemical, the coding coenzyme handle, the coevolution, the four-column theory, the error minimization and the frozen accident hypotheses are discussed. The integration of these hypotheses can account for the origin of the genetic code. But experiments are badly needed. Thus we suggest a host of experiments that could (in)validate some of the models. We focus especially on the coding coenzyme handle hypothesis (CCH). The CCH suggests that amino acids attached to RNA handles enhanced catalytic activities of ribozymes. Alternatively, amino acids without handles or with a handle consisting of a single adenine, like in contemporary coenzymes could have been employed. All three scenarios can be tested in in vitro compartmentalized systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Chromaticity calculations and code comparisons for x-ray lithography source XLS and SXLS rings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsa, Z.

    1988-06-16

    This note presents the chromaticity calculations and code comparison results for the (x-ray lithography source) XLS (Chasman Green, XUV Cosy lattice) and (2 magnet 4T) SXLS lattices, with the standard beam optic codes, including programs SYNCH88.5, MAD6, PATRICIA88.4, PATPET88.2, DIMAD, BETA, and MARYLIE. This analysis is a part of our ongoing accelerator physics code studies. 4 figs., 10 tabs.

  11. The Astrophysics Source Code Library: Where Do We Go from Here?

    NASA Astrophysics Data System (ADS)

    Allen, A.; Berriman, B.; DuPrie, K.; Hanisch, R. J.; Mink, J.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P.; Wallen, J.

    2014-05-01

    The Astrophysics Source Code Library, started in 1999, has in the past three years grown from a repository for 40 codes to a registry of over 700 codes that are now indexed by ADS. What comes next? We examine the future of the ASCL, the challenges facing it, the rationale behind its practices, and the need to balance what we might do with what we have the resources to accomplish.

  12. Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.

    PubMed

    Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile

    2016-01-01

    This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues with building full-text search queries from the combinations of these words. The queries are then run against all of the ICD-10 codes until the code in question is returned as the match with the highest relative score. This method identifies the minimum number of words that must be provided in order for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.
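
    The evaluation loop described above is easy to sketch with a toy scorer (shared-word count) standing in for the real full-text engines: grow word combinations taken from the target entry's own text until that entry is the unique top-scoring match. The miniature code list below is invented for illustration.

        # Find the smallest set of a code's own words that makes it rank first.
        from itertools import combinations

        CODES = {                      # tiny made-up excerpt of ICD-10-style texts
            "J45":   "asthma",
            "J45.0": "predominantly allergic asthma",
            "J45.1": "nonallergic asthma",
            "J45.9": "asthma unspecified",
        }

        def score(query_words, text):
            words = set(text.lower().split())
            return sum(w in words for w in query_words)

        def min_words_to_match(target):
            target_words = CODES[target].lower().split()
            for k in range(1, len(target_words) + 1):
                for combo in combinations(target_words, k):
                    scores = {c: score(combo, t) for c, t in CODES.items()}
                    best = max(scores.values())
                    if [c for c, s in scores.items() if s == best] == [target]:
                        return combo               # unique top-ranked match
            return None                            # cannot be matched uniquely

        for code in CODES:
            print(code, min_words_to_match(code))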

  13. Digital TV processing system

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Two digital video data compression systems directly applicable to the Space Shuttle TV Communication System were described: (1) For the uplink, a low-rate monochrome data compressor is used. The compression is achieved by using a motion detection technique in the Hadamard domain. To transform the variable source rate into a fixed rate, an adaptive rate buffer is provided. (2) For the downlink, a color data compressor is considered. The compression is achieved first by intra-color transformation of the original signal vector into a vector which has lower information entropy. Then two-dimensional data compression techniques are applied to the Hadamard-transformed components of this last vector. Mathematical models and data reliability analyses were also provided for the above video data compression techniques transmitted over a channel-coded Gaussian channel. It was shown that substantial gains can be achieved by the combination of video source and channel coding.
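
    The Hadamard-domain motion-detection idea for the monochrome uplink path can be sketched as follows: transform 8x8 blocks of consecutive frames with a Hadamard matrix and flag the blocks whose transform-domain difference exceeds a threshold, so that only "moving" blocks need to be re-coded. The block size and threshold here are illustrative rather than the system's actual parameters.

        import numpy as np
        from scipy.linalg import hadamard

        H8 = hadamard(8)                           # 8x8 Hadamard matrix (+1/-1 entries)

        def hadamard_2d(block):
            """Orthonormal 2-D Hadamard transform of an 8x8 block."""
            return H8 @ block @ H8.T / 8.0

        def moving_blocks(prev, curr, threshold=50.0):
            """Flag 8x8 blocks whose Hadamard-domain difference energy is large."""
            flags = []
            for r in range(0, prev.shape[0], 8):
                for c in range(0, prev.shape[1], 8):
                    diff = hadamard_2d(curr[r:r+8, c:c+8].astype(float)) \
                         - hadamard_2d(prev[r:r+8, c:c+8].astype(float))
                    if np.abs(diff).sum() > threshold:
                        flags.append((r, c))
            return flags

        prev = np.zeros((32, 32), dtype=np.uint8)
        curr = prev.copy()
        curr[8:16, 16:24] = 200                    # one block changes between frames
        print(moving_blocks(prev, curr))           # -> [(8, 16)]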

  14. LSST communications middleware implementation

    NASA Astrophysics Data System (ADS)

    Mills, Dave; Schumacher, German; Lotz, Paul

    2016-07-01

    The LSST communications middleware is based on a set of software abstractions which provide standard interfaces for common communications services. The observatory requires communication between diverse subsystems, implemented by different contractors, and comprehensive archiving of subsystem status data. The Service Abstraction Layer (SAL) is implemented using open source packages that implement open standards: DDS (Data Distribution Service) for data communication and SQL (Structured Query Language) for database access. For every subsystem, abstractions for each of the Telemetry datastreams, along with Command/Response and Events, have been agreed with the appropriate component vendor (such as Dome, TMA, Hexapod) and captured in ICDs (Interface Control Documents). The OpenSplice (Prismtech) Community Edition of DDS provides an LGPL-licensed distribution which may be freely redistributed. The availability of the full source code provides assurance that the project will be able to maintain it over the full 10-year survey, independent of the fortunes of the original providers.

  15. Challenges with secondary use of multi-source water-quality data in the United States

    USGS Publications Warehouse

    Sprague, Lori A.; Oelsner, Gretchen P.; Argue, Denise M.

    2017-01-01

    Combining water-quality data from multiple sources can help counterbalance diminishing resources for stream monitoring in the United States and lead to important regional and national insights that would not otherwise be possible. Individual monitoring organizations understand their own data very well, but issues can arise when their data are combined with data from other organizations that have used different methods for reporting the same common metadata elements. Such use of multi-source data is termed “secondary use”—the use of data beyond the original intent determined by the organization that collected the data. In this study, we surveyed more than 25 million nutrient records collected by 488 organizations in the United States since 1899 to identify major inconsistencies in metadata elements that limit the secondary use of multi-source data. Nearly 14.5 million of these records had missing or ambiguous information for one or more key metadata elements, including (in decreasing order of records affected) sample fraction, chemical form, parameter name, units of measurement, precise numerical value, and remark codes. As a result, metadata harmonization to make secondary use of these multi-source data will be time consuming, expensive, and inexact. Different data users may make different assumptions about the same ambiguous data, potentially resulting in different conclusions about important environmental issues. The value of these ambiguous data is estimated at $US12 billion, a substantial collective investment by water-resource organizations in the United States. By comparison, the value of unambiguous data is estimated at $US8.2 billion. The ambiguous data could be preserved for uses beyond the original intent by developing and implementing standardized metadata practices for future and legacy water-quality data throughout the United States.

  16. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1994-01-01

    The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly with accuracy). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two for which additional systems are solved, the equations and solution procedures are analogous to those for the first order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
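
    As a compact illustration of the first-order direct sensitivity equation underlying the discussion above: once the state Q(X) satisfies the residual equation R(Q, X) = 0, the derivative dQ/dX solves (dR/dQ) dQ/dX = -dR/dX. The toy scalar residual below stands in for a CFD discretization, and the Jacobians are hand-coded where the paper would construct them by automatic differentiation.

        # Toy direct differentiation of R(Q, X) = Q**3 + X*Q - 2 = 0.
        def R(Q, X):      return Q**3 + X * Q - 2.0
        def dR_dQ(Q, X):  return 3.0 * Q**2 + X        # hand-coded Jacobian
        def dR_dX(Q, X):  return Q

        def solve_state(X, Q0=1.0, tol=1e-12):
            Q = Q0
            for _ in range(50):                        # Newton iteration on R = 0
                Q -= R(Q, X) / dR_dQ(Q, X)
                if abs(R(Q, X)) < tol:
                    break
            return Q

        X = 1.0
        Q = solve_state(X)                             # Q = 1 solves Q^3 + Q - 2 = 0
        dQdX = -dR_dX(Q, X) / dR_dQ(Q, X)              # direct sensitivity: -0.25
        h = 1e-6                                       # finite-difference check
        fd = (solve_state(X + h) - solve_state(X - h)) / (2 * h)
        print(Q, dQdX, fd)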

  17. Synoptic Scale North American Weather Tracks and the Formation of North Atlantic Windstorms

    NASA Astrophysics Data System (ADS)

    Baum, A. J.; Godek, M. L.

    2014-12-01

    Each winter, dozens of fatalities occur when intense North Atlantic windstorms impact Western Europe. Forecasting the tracks of these storms in the short term is often problematic, but long-term forecasts provide an even greater challenge. Improved prediction necessitates the ability to identify these low pressure areas at formation and understand commonalities that distinguish these storms from other systems crossing the Atlantic, such as where they develop. There is some evidence that indicates the majority of intense windstorms that reach Europe have origins far west, as low pressure systems that develop over the North American continent. This project aims to identify the specific cyclogenesis regions in North America that produce a significantly greater number of dangerous storms. NOAA Ocean Prediction Center surface pressure reanalysis maps are used to examine the tracks of storms. Strong windstorms are defined as those with a central pressure of less than 965 hPa at any point in their life cycle. Tracks are recorded using a coding system based on source region, storm track and dissipation region. The codes are analyzed to determine which region is most significantly associated with strong Atlantic windstorm generation. The resultant set of codes also serves as a climatology of North Atlantic extratropical cyclones. Results indicate that a number of windstorms favor cyclogenesis regions off the east coast of the United States. A large number of strong storms that encounter east coast cyclogenesis zones originate in the central mountain region, around Colorado. These storms follow a path that exits North America around New England and subsequently travel along the Canadian coast. Some of these are then primed to become "bombs" over the open Atlantic Ocean.

  18. On origin of genetic code and tRNA before translation

    PubMed Central

    2011-01-01

    Background: Synthesis of proteins is based on the genetic code - a nearly universal assignment of codons to amino acids (aas). A major challenge to the understanding of the origins of this assignment is the archetypal "key-lock vs. frozen accident" dilemma. Here we re-examine this dilemma in light of 1) the fundamental veto on "foresight evolution", 2) modular structures of tRNAs and aminoacyl-tRNA synthetases, and 3) the updated library of aa-binding sites in RNA aptamers successfully selected in vitro for eight amino acids. Results: The aa-binding sites of arginine, isoleucine and tyrosine contain both their cognate triplets, anticodons and codons. We have noticed that these cases might be associated with palindrome-dinucleotides. For example, one-base shift to the left brings arginine codons CGN, with CG at 1-2 positions, to the respective anticodons NCG, with CG at 2-3 positions. Formally, the concomitant presence of codons and anticodons is also expected in the reverse situation, with codons containing palindrome-dinucleotides at their 2-3 positions, and anticodons exhibiting them at 1-2 positions. A closer analysis reveals that, surprisingly, RNA binding sites for Arg, Ile and Tyr "prefer" (exactly as in the actual genetic code) the anticodon(2-3)/codon(1-2) tetramers to their anticodon(1-2)/codon(2-3) counterparts, despite the seemingly perfect symmetry of the latter. However, since in vitro selection of aa-specific RNA aptamers apparently had nothing to do with translation, this striking preference provides a new strong support to the notion of the genetic code emerging before translation, in response to catalytic (and possibly other) needs of ancient RNA life. Consistently with the pre-translation origin of the code, we propose here a new model of tRNA origin by the gradual, Fibonacci process-like, elongation of a tRNA molecule from a primordial coding triplet and 5'DCCA3' quadruplet (D is a base-determinator) to the eventual 76 base-long cloverleaf-shaped molecule. Conclusion: Taken together, our findings necessarily imply that primordial tRNAs, tRNA aminoacylating ribozymes, and (later) the translation machinery in general have been co-evolving to "fit" the (likely already defined) genetic code, rather than the opposite way around. Coding triplets in this primal pre-translational code were likely similar to the anticodons, with second and third nucleotides being more important than the less specific first one. Later, when the code was expanding in co-evolution with the translation apparatus, the importance of 2-3 nucleotides of coding triplets "transferred" to the 1-2 nucleotides of their complements, thus distinguishing anticodons from codons. This evolutionary primacy of anticodons in genetic coding makes the hypothesis of primal stereo-chemical affinity between amino acids and cognate triplets, the hypothesis of coding coenzyme handles for amino acids, the hypothesis of tRNA-like genomic 3' tags suggesting that tRNAs originated in replication, and the hypothesis of ancient ribozymes-mediated operational code of tRNA aminoacylation not mutually contradicting but rather co-existing in harmony. Reviewers: This article was reviewed by Eugene V. Koonin, Wentao Ma (nominated by Juergen Brosius) and Anthony Poole. PMID:21342520

  19. Image authentication using distributed source coding.

    PubMed

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.
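
    A highly simplified sketch of the central idea (a coarsely quantized random projection of the image serves as compact authentication data, which is then checked against a candidate image acting as side information); the Slepian-Wolf coding, expectation-maximization decoding, and sum-product tampering localization of the actual system are omitted, and all parameter values are assumptions. With these toy settings, the mild global adjustment should fall within the quantization tolerance while the local edit should not.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def auth_data(img, n_proj=64, step=8.0):
        """Coarsely quantized random projections of the image (the compact authentication data)."""
        p = rng.standard_normal((n_proj, img.size))
        q = np.round(p @ (img.ravel() / 255.0) / step)
        return q, p

    def looks_authentic(candidate, q_ref, p, step=8.0, tol=0.6):
        """Accept if the candidate's projections stay close to the reference quantization bins."""
        q_cand = p @ (candidate.ravel() / 255.0) / step
        return float(np.mean(np.abs(q_cand - q_ref))) <= tol

    original = rng.integers(0, 256, size=(32, 32)).astype(float)
    legitimate = np.clip(1.05 * original + 2.0, 0, 255)    # mild contrast/brightness change
    tampered = original.copy()
    tampered[4:20, 4:20] = 255.0                            # local content modification

    q_ref, p = auth_data(original)
    print("legitimate accepted:", looks_authentic(legitimate, q_ref, p))
    print("tampered accepted:  ", looks_authentic(tampered, q_ref, p))
    ```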

  20. Aquarius Project: Research in the System Architecture of Accelerators for the High Performance Execution of Logic Programs.

    DTIC Science & Technology

    1991-05-31

    [Search-result text fragment; no readable abstract was extracted. Recoverable content: Appendix F lists the source code of the C and Prolog benchmarks, and Appendix G lists the source code of the Aquarius Prolog compiler.]

  1. A code of ethics for nurse educators: revised.

    PubMed

    Rosenkoetter, Marlene M; Milstead, Jeri A

    2010-01-01

    Nurse educators have the responsibility of assisting students and their colleagues with understanding and practicing ethical conduct. There is an inherent responsibility to keep codes current and relevant for existing nursing practice. The code presented here is a revision of the Code of ethics for nurse educators originally published in 1983 and includes changes that are intended to provide for that relevancy.

  2. Driver anger on the information superhighway: A content analysis of online complaints of offensive driver behaviour.

    PubMed

    Wickens, Christine M; Wiesenthal, David L; Hall, Ashley; Roseborough, James E W

    2013-03-01

    In recent years, several websites have been developed allowing drivers to post their complaints about other motorists online. These websites allow drivers to describe the nature of the offensive behaviour and to identify the offending motorist by vehicle type, colour, and license plate number. Some websites also ask drivers to list the location where the event took place and the exact date and time of the offence. The current study was a content analysis of complaints posted to RoadRagers.com between 1999 and 2007 (N=5624). The purpose of the study was to: (1) assess the research value of this novel data source; (2) demonstrate the value of content analysis to the study of driver behaviour; (3) further validate an existing coding scheme; (4) determine whether this new data source would replicate previous research findings regarding the most frequent types of driver complaints and temporal distribution of these reports; (5) provide recommendations for improved driver training and public safety initiatives based on these data. A coding scheme that was originally developed for an assessment of complaints submitted to the Ontario Provincial Police (OPP) (Wickens et al., 2005) was revised to accommodate the new dataset. The inter-rater reliability of the revised coding scheme as applied to the website complaints was very good (kappa=.85). The most frequently reported improper driver behaviours were cutting/weaving, speeding, perceived displays of hostility, and tailgating. Reports were most frequent on weekdays and during the morning and afternoon rush hour. The current study replicated several findings from the analysis of reports to the OPP, but possible differences in the sample and data collection method also produced some differences in findings. The value of content analysis to driver behaviour research and of driver complaint websites as a data source was demonstrated. Implications for driver safety initiatives and future research will be discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Pigeons may not use dual coding in the radial maze analog task.

    PubMed

    DiGian, Kelly A; Zentall, Thomas R

    2007-07-01

    Using a radial maze analog task, T. R. Zentall, J. N. Steirn, and P. Jackson-Smith (1990) found evidence that when a delay was interpolated early in a trial, pigeons coded locations retrospectively, but when the delay was interpolated late in the trial, they coded locations prospectively (support for a dual coding hypothesis). In Experiment 1 of the present study, the authors replicated the original finding of dual coding. In Experiments 2 and 3, they used a 2-alternative test procedure that does not require the assumption that pigeons' choice criterion, which changes over the course of the trial, is the same on delay and control trials. Under these conditions, the pigeons no longer showed evidence for dual coding. Instead, there was some evidence that they showed prospective coding, but a more parsimonious account of the results may be that the delay produced a relatively constant decrement in performance at all points of delay interpolation. The original finding of dual coding by Zentall et al. might have been biased by more impulsive choices early in control trials but not in delay trials and by a more stringent choice criterion late in delay trials. ((c) 2007 APA, all rights reserved).

  4. Novel Integration of Frame Rate Up Conversion and HEVC Coding Based on Rate-Distortion Optimization.

    PubMed

    Guo Lu; Xiaoyun Zhang; Li Chen; Zhiyong Gao

    2018-02-01

    Frame rate up conversion (FRUC) can improve the visual quality by interpolating new intermediate frames. However, high frame rate videos produced by FRUC are confronted with either higher bitrate consumption or annoying artifacts in the interpolated frames. In this paper, a novel integration framework of FRUC and high efficiency video coding (HEVC) is proposed based on rate-distortion optimization, and the interpolated frames can be reconstructed at the encoder side with low bitrate cost and high visual quality. First, a joint motion estimation (JME) algorithm is proposed to obtain robust motion vectors, which are shared between FRUC and video coding. Moreover, JME is embedded into the coding loop and employs the original motion search strategy in HEVC coding. Then, the frame interpolation is formulated as a rate-distortion optimization problem, where both the coding bitrate consumption and visual quality are taken into account. Due to the absence of original frames, the distortion model for interpolated frames is established according to the motion vector reliability and coding quantization error. Experimental results demonstrate that the proposed framework can achieve a 21% to 42% reduction in BDBR, when compared with the traditional methods of FRUC cascaded with coding.
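
    A schematic sketch of the rate-distortion decision at the core of such a framework: each candidate way of handling an intermediate frame is assigned a Lagrangian cost J = D + lambda*R, and the cheapest option wins. The candidate modes and their distortion/rate numbers are invented for illustration and are not the paper's actual mode set.

    ```python
    # Illustrative rate-distortion mode decision for an interpolated frame.
    # Distortion D (e.g., SSE) and rate R (bits) per candidate are made-up numbers.
    candidates = {
        "skip_and_interpolate": {"D": 950.0, "R": 12.0},   # let FRUC synthesize the frame
        "code_residual_coarse": {"D": 420.0, "R": 260.0},
        "code_residual_fine":   {"D": 180.0, "R": 910.0},
    }

    def best_mode(candidates, lam):
        """Return the mode minimizing the Lagrangian cost J = D + lam * R."""
        costs = {m: v["D"] + lam * v["R"] for m, v in candidates.items()}
        mode = min(costs, key=costs.get)
        return mode, costs[mode]

    for lam in (0.5, 2.0, 8.0):        # larger lambda makes bitrate more expensive
        print(lam, best_mode(candidates, lam))
    ```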

  5. Opening up Architectures of Software-Intensive Systems: A Functional Decomposition to Support System Comprehension

    DTIC Science & Technology

    2007-10-01

    [Search-result fragment of a figure list; no readable abstract was extracted. Recoverable figure titles: Eclipse Java Model; Eclipse Java Model at the Source Code Level; Java Source Code.]

  6. Scalable Video Transmission Over Multi-Rate Multiple Access Channels

    DTIC Science & Technology

    2007-06-01

    [Search-result text fragment; no readable abstract was extracted. Recoverable content: the video source is encoded with the MPEG-4 codec and the resulting bitstream is channel coded with rate-compatible punctured convolutional (RCPC) codes; the remaining text consists of citation fragments on RCPC codes and on punctured convolutional codes of rate (n-1)/n with simplified maximum-likelihood decoding.]

  7. Comparison of TG‐43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes

    PubMed Central

    Zaker, Neda; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S.

    2016-01-01

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross‐sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross‐sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in  125I and  103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code — MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low‐energy sources such as  125I and  103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for  103Pd and 10 cm for  125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for  192Ir and less than 1.2% for  137Cs between the three codes. PACS number(s): 87.56.bg PMID:27074460

  8. Coding conventions and principles for a National Land-Change Modeling Framework

    USGS Publications Warehouse

    Donato, David I.

    2017-07-14

    This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.

  9. ROS Hexapod

    NASA Technical Reports Server (NTRS)

    Davis, Kirsch; Bankieris, Derek

    2016-01-01

    As an intern project for NASA Johnson Space Center (JSC), my job was to become familiar with and operate the Robot Operating System (ROS). The project converted existing software assets into ROS nodes, enabling a robotic Hexapod to communicate over ROS and be controlled by an existing PlayStation 3 (PS3) controller. When the internship started, the existing control algorithms and libraries in the Hexapod C++ source code had no ROS capabilities; that changed over the course of the internship. Converting the C++ code made it compatible with ROS, and the Hexapod is now controlled with an existing PS3 controller. My job also included designing ROS messages and scripts that enabled these assets to participate in the ROS ecosystem by subscribing to and publishing messages. The source code is organized in directories and written in C++. Testing included compiling the code in a Linux environment from a terminal, which ran the code from its directory. Several compilation problems occurred, so the code was modified until it compiled correctly. Once the code compiled and ran, it was uploaded to the Hexapod, which was then controlled by the PS3 controller. As a result, the Hexapod is fully functional, compatible with ROS, and operated with the PlayStation 3 controller. In addition, an Arduino board (with its open-source IDE) will be integrated into the ecosystem by designing breadboard circuitry that adds behavior through push buttons, potentiometers, and other simple electrical elements. Other Arduino projects will include a GPS module and a digital clock that uses signals from up to 22 satellites, received through an internal patch antenna, to display accurate real time. This internship experience has also motivated me to learn to code more efficiently and effectively, and to write, subscribe to, and publish my own source code in different programming languages. That familiarity with software programming will enhance my skills in the electrical engineering field. My experience at JSC with the Simulation and Graphics Branch (ER7) has pushed me to become more proficient at coding, to increase my knowledge of software programming, and to strengthen my skills in ROS. I will take this knowledge back to my university and apply it in a school project on the PR2 robot, which is controlled by ROS software. The skills learned here will be used to subscribe to and publish ROS messages to the PR2 robot, which will be controlled by an existing PS3 controller by adapting the C++ code to subscribe and publish ROS messages. Overall, the skills obtained here will not be lost, but will continue to grow.
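
    A minimal sketch of the subscribe/publish pattern described above, written with rospy (the ROS 1 Python client) rather than the project's C++ nodes; the node and topic names are assumptions, and the PS3 controller is represented by the standard sensor_msgs/Joy message published by the ROS joystick driver.

    ```python
    #!/usr/bin/env python
    # Minimal ROS 1 node: subscribe to PS3 controller input (sensor_msgs/Joy)
    # and republish a simple command for a hexapod. Topic and node names are illustrative.
    import rospy
    from sensor_msgs.msg import Joy
    from geometry_msgs.msg import Twist

    class JoyToHexapod(object):
        def __init__(self):
            self.pub = rospy.Publisher("hexapod/cmd_vel", Twist, queue_size=10)
            rospy.Subscriber("joy", Joy, self.on_joy)

        def on_joy(self, msg):
            cmd = Twist()
            cmd.linear.x = msg.axes[1]      # left stick forward/back -> walk speed
            cmd.angular.z = msg.axes[0]     # left stick left/right   -> turn rate
            self.pub.publish(cmd)

    if __name__ == "__main__":
        rospy.init_node("joy_to_hexapod")
        JoyToHexapod()
        rospy.spin()
    ```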

  10. Runtime Detection of C-Style Errors in UPC Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pirkelbauer, P; Liao, C; Panas, T

    2011-09-29

    Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.

  11. Investigation of ISIS and Brookhaven National Laboratory ion source electrodes after extended operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lettry J.; Alessi J.; Faircloth, D.

    2012-02-23

    Linac4 accelerator of Centre Europeen de Recherches Nucleaires is under construction and a RF-driven H{sup -} ion source is being developed. The beam current requirement for Linac4 is very challenging: 80 mA must be provided. Cesiated plasma discharge ion sources such as Penning or magnetron sources are also potential candidates. Accelerator ion sources must achieve typical reliability figures of 95% and above. Investigating and understanding the underlying mechanisms involved with source failure or ageing is critical when selecting the ion source technology. Plasma discharge driven surface ion sources rely on molybdenum cathodes. Deformation of the cathode surfaces is visible after extended operation periods. A metallurgical investigation of an ISIS ion source is presented. The origin of the deformation is twofold: Molybdenum sputtering by cesium ions digs few tenths of mm cavities while a growth of molybdenum is observed in the immediate vicinity. The molybdenum growth under hydrogen atmosphere is hard and loosely bound to the bulk. It is, therefore, likely to peel off and be transported within the plasma volume. The observation of the cathode, anode, and extraction electrodes of the magnetron source operated at BNL for two years are presented. A beam simulation of H{sup -}, electrons, and Cs{sup -} ions was performed with the IBSimu code package to qualitatively explain the observations. This paper describes the operation conditions of the ion sources and discusses the metallurgical analysis and beam simulation results.

  12. Investigation of ISIS and Brookhaven National Laboratory ion source electrodes after extended operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lettry, J.; Gerardin, A.; Pereira, H.

    2012-02-15

    Linac4 accelerator of Centre Europeen de Recherches Nucleaires is under construction and a RF-driven H{sup -} ion source is being developed. The beam current requirement for Linac4 is very challenging: 80 mA must be provided. Cesiated plasma discharge ion sources such as Penning or magnetron sources are also potential candidates. Accelerator ion sources must achieve typical reliability figures of 95% and above. Investigating and understanding the underlying mechanisms involved with source failure or ageing is critical when selecting the ion source technology. Plasma discharge driven surface ion sources rely on molybdenum cathodes. Deformation of the cathode surfaces is visible after extended operation periods. A metallurgical investigation of an ISIS ion source is presented. The origin of the deformation is twofold: Molybdenum sputtering by cesium ions digs few tenths of mm cavities while a growth of molybdenum is observed in the immediate vicinity. The molybdenum growth under hydrogen atmosphere is hard and loosely bound to the bulk. It is, therefore, likely to peel off and be transported within the plasma volume. The observation of the cathode, anode, and extraction electrodes of the magnetron source operated at BNL for two years are presented. A beam simulation of H{sup -}, electrons, and Cs{sup -} ions was performed with the IBSimu code package to qualitatively explain the observations. This paper describes the operation conditions of the ion sources and discusses the metallurgical analysis and beam simulation results.

  13. Performance of the OVERFLOW-MLP and LAURA-MLP CFD Codes on the NASA Ames 512 CPU Origin System

    NASA Technical Reports Server (NTRS)

    Taft, James R.

    2000-01-01

    The shared memory Multi-Level Parallelism (MLP) technique, developed last year at NASA Ames has been very successful in dramatically improving the performance of important NASA CFD codes. This new and very simple parallel programming technique was first inserted into the OVERFLOW production CFD code in FY 1998. The OVERFLOW-MLP code's parallel performance scaled linearly to 256 CPUs on the NASA Ames 256 CPU Origin 2000 system (steger). Overall performance exceeded 20.1 GFLOP/s, or about 4.5x the performance of a dedicated 16 CPU C90 system. All of this was achieved without any major modification to the original vector based code. The OVERFLOW-MLP code is now in production on the inhouse Origin systems as well as being used offsite at commercial aerospace companies. Partially as a result of this work, NASA Ames has purchased a new 512 CPU Origin 2000 system to further test the limits of parallel performance for NASA codes of interest. This paper presents the performance obtained from the latest optimization efforts on this machine for the LAURA-MLP and OVERFLOW-MLP codes. The Langley Aerothermodynamics Upwind Relaxation Algorithm (LAURA) code is a key simulation tool in the development of the next generation shuttle, interplanetary reentry vehicles, and nearly all "X" plane development. This code sustains about 4-5 GFLOP/s on a dedicated 16 CPU C90. At this rate, expected workloads would require over 100 C90 CPU years of computing over the next few calendar years. It is not feasible to expect that this would be affordable or available to the user community. Dramatic performance gains on cheaper systems are needed. This code is expected to be perhaps the largest consumer of NASA Ames compute cycles per run in the coming year. The OVERFLOW CFD code is extensively used in the government and commercial aerospace communities to evaluate new aircraft designs. It is one of the largest consumers of NASA supercomputing cycles and large simulations of highly resolved full aircraft are routinely undertaken. Typical large problems might require 100s of Cray C90 CPU hours to complete. The dramatic performance gains with the 256 CPU steger system are exciting. Obtaining results in hours instead of months is revolutionizing the way in which aircraft manufacturers are looking at future aircraft simulation work. Figure 2 below is a current state of the art plot of OVERFLOW-MLP performance on the 512 CPU Lomax system. As can be seen, the chart indicates that OVERFLOW-MLP continues to scale linearly with CPU count up to 512 CPUs on a large 35 million point full aircraft RANS simulation. At this point performance is such that a fully converged simulation of 2500 time steps is completed in less than 2 hours of elapsed time. Further work over the next few weeks will improve the performance of this code even further. The LAURA code has been converted to the MLP format as well. This code is currently being optimized for the 512 CPU system. Performance statistics indicate that the goal of 100 GFLOP/s will be achieved by year's end. This amounts to 20x the 16 CPU C90 result and strongly demonstrates the viability of the new parallel systems rapidly solving very large simulations in a production environment.

  14. A Comparison of Automatic Parallelization Tools/Compilers on the SGI Origin 2000 Using the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Frumkin, Michael; Hribar, Michelle; Jin, Hao-Qiang; Waheed, Abdul; Yan, Jerry

    1998-01-01

    Porting applications to new high performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is extremely time consuming and costly, porting codes would ideally be automated by using some parallelization tools and compilers. In this paper, we compare the performance of the hand written NAS Parallel Benchmarks against three parallel versions generated with the help of tools and compilers: 1) CAPTools: an interactive computer-aided parallelization tool that generates message passing code, 2) the Portland Group's HPF compiler and 3) using compiler directives with the native FORTRAN77 compiler on the SGI Origin2000.

  15. Seismic Canvas: Evolution as a Data Exploration and Analysis Tool

    NASA Astrophysics Data System (ADS)

    Kroeger, G. C.

    2015-12-01

    SeismicCanvas, originally developed as a prototype interactive waveform display and printing application for educational use, has evolved to include significant data exploration and analysis functionality. The most recent version supports data import from a variety of standard file formats including SAC and mini-SEED, as well as search and download capabilities via IRIS/FDSN Web Services. Data processing tools now include removal of means and trends, interactive windowing, filtering, smoothing, tapering, and resampling. Waveforms can be displayed in a free-form canvas or as a record section based on angular or great circle distance, azimuth or back azimuth. Integrated tau-p code allows the calculation and display of theoretical phase arrivals from a variety of radial Earth models. Waveforms can be aligned by absolute time, event time, picked or theoretical arrival times and can be stacked after alignment. Interactive measurements include means, amplitudes, time delays, ray parameters and apparent velocities. Interactive picking of an arbitrary list of seismic phases is supported. Bode plots of amplitude and phase spectra and spectrograms can be created from multiple seismograms or selected windows of seismograms. Direct printing is implemented on all supported platforms along with output of high-resolution pdf files. With these added capabilities, the application is now being used as a data exploration tool for research. Coded in C++ and using the cross-platform Qt framework, the most recent version is available as a 64-bit application for Windows 7-10, Mac OS X 10.6-10.11, and most distributions of Linux, and a 32-bit version for Windows XP and 7. With the latest improvements and refactoring of trace display classes, the 64-bit versions have been tested with over 250 million samples and remain responsive in interactive operations. The source code is available under an LGPLv3 license and both source and executables are available through the IRIS SeisCode repository.

  16. [A quality controllable algorithm for ECG compression based on wavelet transform and ROI coding].

    PubMed

    Zhao, An; Wu, Baoming

    2006-12-01

    This paper presents an ECG compression algorithm based on wavelet transform and region of interest (ROI) coding. The algorithm realizes near-lossless coding in the ROI and quality-controllable lossy coding outside of the ROI. After mean removal of the original signal, a multi-layer orthogonal discrete wavelet transform is performed. Simultaneously, feature extraction is performed on the original signal to find the position of the ROI. The coefficients related to the ROI are treated as important coefficients and kept. For the coefficients outside of the ROI, the energy loss in the transform domain is calculated according to the goal PRDBE (Percentage Root-mean-square Difference with Baseline Eliminated), and the threshold for these coefficients is then determined according to the allowed loss of energy. The important coefficients, which include the coefficients of the ROI and the coefficients outside of the ROI that are larger than the threshold, are put into a linear quantizer. The map, which records the positions of the important coefficients in the original wavelet coefficient vector, is compressed with a run-length encoder. Huffman coding has been applied to improve the compression ratio. ECG signals taken from the MIT/BIH arrhythmia database are tested, and satisfactory results are obtained in terms of clinical information preservation, quality and compression ratio.
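
    A rough sketch of the coefficient-selection step under stated assumptions: a discrete wavelet decomposition (PyWavelets), ROI-related coefficients kept unconditionally, and a threshold for the remaining coefficients chosen so that the discarded transform-domain energy stays below a target fraction (a stand-in for the paper's PRDBE-driven threshold). The quantizer, position map, run-length coder, and Huffman stage are omitted.

    ```python
    import numpy as np
    import pywt

    def select_coefficients(x, roi_mask, energy_loss_target=0.001, wavelet="db4", level=5):
        """Keep all ROI-related coefficients; threshold the rest by allowed energy loss."""
        coeffs = pywt.wavedec(x - x.mean(), wavelet, level=level)   # mean removal + DWT
        flat = np.concatenate(coeffs)
        # Map the time-domain ROI mask onto coefficients (crude proportional mapping).
        roi = np.zeros(flat.size, dtype=bool)
        pos = 0
        for c in coeffs:
            idx = np.linspace(0, len(roi_mask) - 1, len(c)).astype(int)
            roi[pos:pos + len(c)] = roi_mask[idx]
            pos += len(c)
        outside = flat[~roi]
        # Largest threshold whose discarded energy stays under the target fraction.
        order = np.sort(np.abs(outside))
        cum = np.cumsum(order ** 2) / np.sum(flat ** 2)
        t = order[np.searchsorted(cum, energy_loss_target)] if cum[-1] > energy_loss_target else np.inf
        keep = roi | (np.abs(flat) >= t)
        return flat, keep

    # Toy signal: a spike (the "ROI", e.g. a QRS-like region) on a slow wave.
    n = 1024
    x = np.sin(np.linspace(0, 8 * np.pi, n)) + 3 * np.exp(-0.5 * ((np.arange(n) - 500) / 5.0) ** 2)
    roi_mask = np.zeros(n, dtype=bool)
    roi_mask[470:530] = True
    flat, keep = select_coefficients(x, roi_mask)
    print("kept", int(keep.sum()), "of", flat.size, "coefficients")
    ```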

  17. Admiralty Inlet Advanced Turbulence Measurements: final data and code archive

    DOE Data Explorer

    Kilcher, Levi (ORCID:0000000183851131); Thomson, Jim (ORCID:0000000289290088); Harding, Samuel

    2011-02-01

    Data and code, not already available in a public location, used in Kilcher, Thomson, Harding, and Nylund (2017), "Turbulence Measurements from Compliant Moorings - Part II: Motion Correction," doi: 10.1175/JTECH-D-16-0213.1. The links point to Python source code used in the publication. All other files are source data used in the publication.

  18. Numerical Electromagnetic Code (NEC)-Basic Scattering Code. Part 2. Code Manual

    DTIC Science & Technology

    1979-09-01

    [Search-result text fragment from the code manual; no readable abstract was extracted. Recoverable content: the fragment documents imaging of source axes for a magnetic source, with the arrays VSOURC and VIMAG holding the x, y, and z components of the source and image coordinate-system axes, and a related variable giving the components of the end-cap unit normal.]

  19. Java Source Code Analysis for API Migration to Embedded Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winter, Victor; McCoy, James A.; Guerrero, Jonathan

    Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java’s APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.

  20. High Order Modulation Protograph Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved code modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
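
    A small sketch of the second-stage circulant lifting described above: each nonzero entry of a base matrix is replaced by a cyclically shifted identity block and each zero entry by an all-zero block, yielding the expanded parity-check matrix. The base matrix, shift values, and lifting size are arbitrary illustrations, not the codes of this invention.

    ```python
    import numpy as np

    def circulant_lift(base, shifts, Z):
        """Lift a binary base matrix by Z x Z circulant blocks with the given shifts."""
        m, n = base.shape
        H = np.zeros((m * Z, n * Z), dtype=np.uint8)
        I = np.eye(Z, dtype=np.uint8)
        for i in range(m):
            for j in range(n):
                if base[i, j]:
                    H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[i, j], axis=1)
        return H

    base = np.array([[1, 1, 1, 0],
                     [0, 1, 1, 1]], dtype=np.uint8)    # illustrative protograph adjacency
    shifts = np.array([[0, 2, 5, 0],
                       [0, 1, 3, 4]])                  # illustrative circulant shifts
    H = circulant_lift(base, shifts, Z=8)
    print(H.shape)   # (16, 32): each edge expands into an 8 x 8 permutation block
    ```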

  1. Application of Modern Fortran to Spacecraft Trajectory Design and Optimization

    NASA Technical Reports Server (NTRS)

    Williams, Jacob; Falck, Robert D.; Beekman, Izaak B.

    2018-01-01

    In this paper, applications of the modern Fortran programming language to the field of spacecraft trajectory optimization and design are examined. Modern object-oriented Fortran has many advantages for scientific programming, although many legacy Fortran aerospace codes have not been upgraded to use the newer standards (or have been rewritten in other languages perceived to be more modern). NASA's Copernicus spacecraft trajectory optimization program, originally a combination of Fortran 77 and Fortran 95, has attempted to keep up with modern standards and makes significant use of the new language features. Various algorithms and methods are presented from trajectory tools such as Copernicus, as well as modern Fortran open source libraries and other projects.

  2. A Survey of Variable Extragalactic Sources with XTE's All Sky Monitor (ASM)

    NASA Technical Reports Server (NTRS)

    Jernigan, Garrett

    1998-01-01

    The original goal of the project was the near real-time detection of AGN utilizing the SSC 3 of the ASM on XTE which does a deep integration on one 100 square degree region of the sky. While the SSC never performed sufficiently well to allow the success of this goal, the work on the project has led to the development of a new analysis method for coded aperture systems which has now been applied to ASM data for mapping regions near clusters of galaxies such as the Perseus Cluster and the Coma Cluster. Publications are in preparation that describe both the new method and the results from mapping clusters of galaxies.

  3. Correcting X-ray spectra obtained from the AXAF VETA-I mirror calibration for pileup, continuum, background and deadtime

    NASA Technical Reports Server (NTRS)

    Chartas, G.; Flanagan, K.; Hughes, J. P.; Kellogg, E. M.; Nguyen, D.; Zombek, M.; Joy, M.; Kolodziejezak, J.

    1993-01-01

    The VETA-I mirror was calibrated with the use of a collimated soft X-ray source produced by electron bombardment of various anode materials. The FWHM, effective area and encircled energy were measured with the use of proportional counters that were scanned with a set of circular apertures. The pulses from the proportional counters were sent through a multichannel analyzer that produced a pulse height spectrum. In order to characterize the properties of the mirror at different discrete photon energies one desires to extract from the pulse height distribution only those photons that originated from the characteristic line emission of the X-ray target source. We have developed a code that fits a modeled spectrum to the observed X-ray data, extracts the counts that originated from the line emission, and estimates the error in these counts. The function that is fitted to the X-ray spectra includes a Prescott function for the resolution of the detector, a second Prescott function for a pileup peak and an X-ray continuum function. The continuum component is determined by calculating the absorption of the target Bremsstrahlung through various filters, correcting for the reflectivity of the mirror and convolving with the detector response.
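
    A schematic sketch of this kind of spectral decomposition using nonlinear least squares (scipy.optimize.curve_fit): a line peak, a pileup peak near twice the line energy, and a smooth continuum are fitted together, and the line counts are obtained by integrating the fitted line component. Gaussians and an exponential continuum are used here purely as stand-ins for the Prescott functions and the filtered-Bremsstrahlung continuum of the actual code, and the synthetic spectrum is invented for the example.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(x, a, mu, sig):
        return a * np.exp(-0.5 * ((x - mu) / sig) ** 2)

    def model(x, a1, mu, sig, a2, c0, tau):
        line = gauss(x, a1, mu, sig)                        # characteristic line (stand-in shape)
        pileup = gauss(x, a2, 2 * mu, np.sqrt(2) * sig)     # pileup peak near twice the energy
        continuum = c0 * np.exp(-x / tau)                   # smooth continuum (stand-in shape)
        return line + pileup + continuum

    # Synthetic pulse-height spectrum for illustration only.
    rng = np.random.default_rng(1)
    ch = np.arange(20, 400, dtype=float)
    truth = model(ch, 500, 120, 12, 40, 80, 150)
    spec = rng.poisson(truth).astype(float)

    p0 = [400, 115, 10, 20, 50, 100]
    popt, _ = curve_fit(model, ch, spec, p0=p0)
    line_counts = np.sum(gauss(ch, *popt[:3]))    # counts attributed to the line emission
    print(line_counts)
    ```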

  4. Correcting x ray spectra obtained from the AXAF VETA-I mirror calibration for pileup, continuum, background and deadtime

    NASA Technical Reports Server (NTRS)

    Chartas, G.; Flanagan, Kathy; Hughes, John P.; Kellogg, Edwin M.; Nguyen, D.; Zombeck, M.; Joy, M.; Kolodziejezak, J.

    1992-01-01

    The VETA-I mirror was calibrated with the use of a collimated soft X-ray source produced by electron bombardment of various anode materials. The FWHM, effective area and encircled energy were measured with the use of proportional counters that were scanned with a set of circular apertures. The pulses from the proportional counters were sent through a multichannel analyzer that produced a pulse height spectrum. In order to characterize the properties of the mirror at different discrete photon energies one desires to extract from the pulse height distribution only those photons that originated from the characteristic line emission of the X-ray target source. We have developed a code that fits a modeled spectrum to the observed X-ray data, extracts the counts that originated from the line emission, and estimates the error in these counts. The function that is fitted to the X-ray spectra includes a Prescott function for the resolution of the detector, a second Prescott function for a pileup peak and an X-ray continuum function. The continuum component is determined by calculating the absorption of the target Bremsstrahlung through various filters, correcting for the reflectivity of the mirror and convolving with the detector response.

  5. High-speed asynchronous data multiplexer/demultiplexer for high-density digital recorders

    NASA Astrophysics Data System (ADS)

    Berdugo, Albert; Small, Martin B.

    1996-11-01

    Modern High Density Digital Recorders are ideal devices for the storage of large amounts of digital and/or wideband analog data. Ruggedized versions of these recorders are currently available and are supporting many military and commercial flight test applications. However, in certain cases, the storage format becomes very critical, e.g., when a large number of data types are involved, or when channel-to-channel correlation is critical, or when the original data source must be accurately recreated during post mission analysis. A properly designed storage format will not only preserve data quality, but will yield the maximum storage capacity and record time for any given recorder family or data type. This paper describes a multiplex/demultiplex technique that formats multiple high speed data sources into a single, common format for recording. The method is compatible with many popular commercial recorder standards such as DCRsi, VLDS, and DLT. Types of input data typically include PCM, wideband analog data, video, aircraft data buses, avionics, voice, time code, and many others. The described method preserves tight data correlation with minimal data overhead. The described technique supports full reconstruction of the original input signals during data playback. Output data correlation across channels is preserved for all types of data inputs. Simultaneous real- time data recording and reconstruction are also supported.

  6. Investigation of Finite Sources through Time Reversal

    NASA Astrophysics Data System (ADS)

    Kremers, Simon; Brietzke, Gilbert; Igel, Heiner; Larmat, Carene; Fichtner, Andreas; Johnson, Paul A.; Huang, Lianjie

    2010-05-01

    Under certain conditions time reversal is a promising method to determine earthquake source characteristics without any a priori information (except the earth model and the data). It consists of injecting flipped-in-time records from seismic stations within the model to create an approximate reverse movie of wave propagation from which the location of the hypocenter and other information might be inferred. In this study, the backward propagation is performed numerically using a parallel Cartesian spectral element code. Initial tests using point source moment tensors serve as a control for the adaptability of the wave propagation algorithm used. After that, we investigated the potential of time reversal to recover finite source characteristics (e.g., size of ruptured area, rupture velocity etc.). We used synthetic data from the SPICE kinematic source inversion blind test initiated to investigate the performance of current kinematic source inversion approaches (http://www.spice-rtn.org/library/valid). The synthetic data set attempts to reproduce the 2000 Tottori earthquake with 33 records close to the fault. We discuss the influence of various assumptions made on the source (e.g., origin time, hypocenter, fault location, etc.), adjoint source weighting (e.g., correcting for epicentral distance) and structure (uncertainty in the velocity model) on the results of the time reversal process. We give an overview of the quality of focussing of the different wavefield properties (i.e., displacements, strains, rotations, energies). Additionally, the potential to recover source properties of multiple point sources at the same time is discussed.
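
    Stripped of the spectral-element machinery, the core numerical step is simply to reverse each record in time (optionally weighted, for example by epicentral distance) before injecting it as a source at the receiver location; a minimal sketch under those assumptions:

    ```python
    import numpy as np

    def time_reversed_sources(records, weights=None):
        """Flip each seismogram in time to build time-reversal source wavelets.

        records : array of shape (n_stations, n_samples)
        weights : optional per-station weights (e.g. to correct for epicentral distance)
        """
        records = np.asarray(records, dtype=float)
        if weights is not None:
            records = records * np.asarray(weights, dtype=float)[:, None]
        return records[:, ::-1]     # reversed traces, to be injected at the receiver locations

    # Toy example: three stations, 1000 samples each.
    recs = np.random.randn(3, 1000)
    src = time_reversed_sources(recs, weights=[1.0, 0.8, 1.3])
    print(src.shape)
    ```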

  7. Making your code citable with the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; DuPrie, Kimberly; Schmidt, Judy; Berriman, G. Bruce; Hanisch, Robert J.; Mink, Jessica D.; Nemiroff, Robert J.; Shamir, Lior; Shortridge, Keith; Taylor, Mark B.; Teuben, Peter J.; Wallin, John F.

    2016-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) is a free online registry of codes used in astronomy research. With nearly 1,200 codes, it is the largest indexed resource for astronomy codes in existence. Established in 1999, it offers software authors a path to citation of their research codes even without publication of a paper describing the software, and offers scientists a way to find codes used in refereed publications, thus improving the transparency of the research. It also provides a method to quantify the impact of source codes in a fashion similar to the science metrics of journal articles. Citations using ASCL IDs are accepted by major astronomy journals and if formatted properly are tracked by ADS and other indexing services. The number of citations to ASCL entries increased sharply from 110 citations in January 2014 to 456 citations in September 2015. The percentage of code entries in ASCL that were cited at least once rose from 7.5% in January 2014 to 17.4% in September 2015. The ASCL's mid-2014 infrastructure upgrade added an easy entry submission form, more flexible browsing, search capabilities, and an RSS feeder for updates. A Changes/Additions form added this past fall lets authors submit links for papers that use their codes for addition to the ASCL entry even if those papers don't formally cite the codes, thus increasing the transparency of that research and capturing the value of their software to the community.

  8. Consistent Adjoint Driven Importance Sampling using Space, Energy and Angle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M

    2012-08-01

    For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS, Consistent Adjoint Driven Importance Sampling. This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures-of-merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.
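
    A compact sketch of the standard space/energy CADIS quantities under stated assumptions: given a discretized forward source q and a deterministic adjoint flux on the same space-energy grid, the estimated detector response R yields the biased source and the weight-window target weights; the space/energy/angle extension described in the paper is not shown.

    ```python
    import numpy as np

    def cadis_space_energy(q, phi_adj):
        """Space/energy CADIS: biased source and weight-window target weights.

        q       : forward source density on a (space, energy) grid
        phi_adj : adjoint (importance) flux on the same grid
        """
        R = np.sum(q * phi_adj)             # estimated detector response
        q_biased = q * phi_adj / R          # biased source distribution (sums to 1)
        # Weight-window centers; cells with zero importance get zero target weight here.
        w_target = R / np.where(phi_adj > 0, phi_adj, np.inf)
        return R, q_biased, w_target

    # Toy 4-cell x 3-group example with made-up numbers.
    q = np.array([[1.0, 0.5, 0.1],
                  [0.8, 0.4, 0.1],
                  [0.2, 0.1, 0.0],
                  [0.0, 0.0, 0.0]])
    phi_adj = np.array([[0.01, 0.02, 0.05],
                        [0.02, 0.05, 0.10],
                        [0.10, 0.20, 0.40],
                        [0.50, 0.80, 1.00]])
    R, qb, w = cadis_space_energy(q, phi_adj)
    print(R, qb.sum())   # biased source integrates to 1 by construction
    ```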

  9. PARMELA_B: a new version of PARMELA with coherent synchrotron radiation effects and a finite difference space charge routine

    NASA Astrophysics Data System (ADS)

    Koltenbah, Benjamin E. C.; Parazzoli, Claudio G.; Greegor, Robert B.; Dowell, David H.

    2002-07-01

    Recent interest in advanced laser light sources has stimulated development of accelerator systems of intermediate beam energy, 100-200 MeV, and high charge, 1-10 nC, for high power FEL applications and high energy, 1-2 GeV, high charge, SASE-FEL applications. The current generation of beam transport codes which were developed for high-energy, low-charge beams with low self-fields are inadequate to address this energy and charge regime, and better computational tools are required to accurately calculate self-fields. To that end, we have developed a new version of PARMELA, named PARMELA_B and written in Fortran 95, which includes a coherent synchrotron radiation (CSR) routine and an improved, generalized space charge (SC) routine. An electron bunch is simulated by a collection of macro-particles, which traverses a series of beam line elements. At each time step through the calculation, the momentum of each particle is updated due to the presence of external and self-fields. The self-fields are due to CSR and SC. For the CSR calculations, the macro-particles are further combined into macro-particle-bins that follow the central trajectory of the bend. The energy change through the time step is calculated from expressions derived from the Liénard-Wiechert formulae, and from this energy change the particle's momentum is updated. For the SC calculations, we maintain the same rest-frame-electrostatic approach of the original PARMELA; however, we employ a finite difference Poisson equation solver instead of the symmetrical ring algorithm of the original code. In this way, we relax the symmetry assumptions in the original code. This method is based upon standard numerical procedures and conserves momentum to first order. The SC computational grid is adaptive and conforms to the size of the pulse as it evolves through the calculation. We provide descriptions of these two algorithms, validation comparisons with other CSR and SC methods, and a limited comparison with experimental results.
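
    An illustrative sketch of the kind of finite-difference Poisson solve used for a rest-frame space-charge step (here a plain Jacobi iteration on a uniform 2-D grid with zero-potential boundaries); the grid size, tolerance, and charge distribution are assumptions, and the real code's adaptive grid and momentum update are not shown.

    ```python
    import numpy as np

    def solve_poisson_2d(rho, h, tol=1e-6, max_iter=20000):
        """Jacobi iteration for -(d2/dx2 + d2/dy2) phi = rho/eps0 with phi = 0 on the boundary."""
        eps0 = 8.8541878128e-12
        phi = np.zeros_like(rho)
        for _ in range(max_iter):
            old = phi.copy()
            # interior update: average of the four neighbours plus the scaled source term
            phi[1:-1, 1:-1] = 0.25 * (old[2:, 1:-1] + old[:-2, 1:-1] +
                                      old[1:-1, 2:] + old[1:-1, :-2] +
                                      h * h * rho[1:-1, 1:-1] / eps0)
            if np.max(np.abs(phi - old)) < tol:
                break
        return phi

    # Toy bunch: Gaussian charge density on a 65 x 65 grid with 1 mm spacing (made-up scale).
    n, h = 65, 1e-3
    x = (np.arange(n) - n // 2) * h
    X, Y = np.meshgrid(x, x, indexing="ij")
    rho = 1e-9 * np.exp(-(X**2 + Y**2) / (2 * (5 * h) ** 2))
    phi = solve_poisson_2d(rho, h)
    print("peak potential (V):", float(phi.max()))
    ```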

  10. 40 CFR 52.824 - Original identification of plan section.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... rules, “Iowa Administrative Code,” effective February 22, 1995. This revision approves new definitions... definition updates. (E) “Iowa Administrative Code,” section 567-31.1, effective February 22, 1995. This rule... Quality and replaced the Iowa air pollution control statute which appeared as Chapter 136B of the Code of...

  11. 40 CFR 52.824 - Original identification of plan section.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... rules, “Iowa Administrative Code,” effective February 22, 1995. This revision approves new definitions... definition updates. (E) “Iowa Administrative Code,” section 567-31.1, effective February 22, 1995. This rule... Quality and replaced the Iowa air pollution control statute which appeared as Chapter 136B of the Code of...

  12. 40 CFR 52.824 - Original identification of plan section.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... rules, “Iowa Administrative Code,” effective February 22, 1995. This revision approves new definitions... definition updates. (E) “Iowa Administrative Code,” section 567-31.1, effective February 22, 1995. This rule... Quality and replaced the Iowa air pollution control statute which appeared as Chapter 136B of the Code of...

  13. 40 CFR 52.824 - Original identification of plan section.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... rules, “Iowa Administrative Code,” effective February 22, 1995. This revision approves new definitions... definition updates. (E) “Iowa Administrative Code,” section 567-31.1, effective February 22, 1995. This rule... Quality and replaced the Iowa air pollution control statute which appeared as Chapter 136B of the Code of...

  14. 26 CFR 1.42-5 - Monitoring compliance with low-income housing credit requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... be required to retain the original local health, safety, or building code violation reports or... account local health, safety, and building codes (or other habitability standards), and the State or local government unit responsible for making local health, safety, or building code inspections did not issue a...

  15. 26 CFR 1.42-5 - Monitoring compliance with low-income housing credit requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... be required to retain the original local health, safety, or building code violation reports or... account local health, safety, and building codes (or other habitability standards), and the State or local government unit responsible for making local health, safety, or building code inspections did not issue a...

  16. A two-dimensional transient analytical solution for a ponded ditch drainage system under the influence of source/sink

    NASA Astrophysics Data System (ADS)

    Sarmah, Ratan; Tiwari, Shubham

    2018-03-01

    An analytical solution is developed for predicting two-dimensional transient seepage into a ditch drainage network receiving water from a non-uniform steady ponding field at the soil surface under the influence of source/sink terms in the flow domain. The flow domain is assumed to be saturated, homogeneous and anisotropic in nature and to have finite extents in the horizontal and vertical directions. The drains are assumed to be standing vertical and penetrating up to the impervious layer. The water levels in the drains are unequal and invariant with time. The flow field is also assumed to be under the continuous influence of a time- and space-dependent arbitrary source/sink term. The correctness of the proposed model is checked against a numerical code developed for this purpose and also against an existing analytical solution for a simplified case. The study highlights the significance of the source/sink influence on the subsurface flow. With the imposition of the source and sink terms in the flow domain, the pathlines and travel times of water particles deviate from their original positions, and the side and top discharges to the drains are also observed to be strongly influenced by the source/sink terms. The travel times and pathlines of water particles are also observed to depend on the height of water in the ditches and on the location of the source/sink activation area.
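
    For orientation, the governing equation for this class of problem typically takes the following form (written here as an assumption consistent with the description above: saturated, homogeneous, anisotropic flow with a time- and space-dependent source/sink term); the paper's exact formulation, notation, and boundary conditions may differ.

    ```latex
    % Assumed form of the 2-D transient, anisotropic seepage equation with a source/sink term:
    %   h(x,z,t)  hydraulic head        K_x, K_z  horizontal/vertical hydraulic conductivities
    %   S_s       specific storage      q(x,z,t)  source/sink term
    \[
      K_x \frac{\partial^2 h}{\partial x^2}
      + K_z \frac{\partial^2 h}{\partial z^2}
      + q(x, z, t)
      = S_s \frac{\partial h}{\partial t}
    \]
    ```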

  17. Satellite Image Mosaic Engine

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2006-01-01

    A computer program automatically builds large, full-resolution mosaics of multispectral images of Earth landmasses from images acquired by Landsat 7, complete with matching of colors and blending between adjacent scenes. While the code has been used extensively for Landsat, it could also be used for other data sources. A single mosaic of as many as 8,000 scenes, represented by more than 5 terabytes of data and the largest set produced in this work, demonstrated what the code could do to provide global coverage. The program first statistically analyzes input images to determine areas of coverage and data-value distributions. It then transforms the input images from their original universal transverse Mercator coordinates to other geographical coordinates, with scaling. It applies a first-order polynomial brightness correction to each band in each scene. It uses a data-mask image for selecting data and blending of input scenes. Under control by a user, the program can be made to operate on small parts of the output image space, with check-point and restart capabilities. The program runs on SGI IRIX computers. It is capable of parallel processing using shared-memory code, large memories, and tens of central processing units. It can retrieve input data and store output data at locations remote from the processors on which it is executed.
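
    A toy sketch of the first-order (gain and offset) per-band brightness correction mentioned above, fitted by least squares so that a scene's values match a reference, for example statistics drawn from the overlap with an already-adjusted neighbouring scene; the band handling, masking, and matching strategy of the actual program are simplified assumptions.

    ```python
    import numpy as np

    def fit_gain_offset(scene_vals, ref_vals):
        """Least-squares gain/offset so that gain*scene + offset approximates the reference."""
        A = np.column_stack([scene_vals, np.ones_like(scene_vals)])
        (gain, offset), *_ = np.linalg.lstsq(A, ref_vals, rcond=None)
        return gain, offset

    def apply_correction(band, gain, offset):
        return gain * band + offset

    # Toy overlap samples for one band: the scene is darker and slightly offset.
    rng = np.random.default_rng(2)
    ref = rng.uniform(20, 200, size=5000)
    scene = 0.85 * ref - 4.0 + rng.normal(0, 1.5, size=ref.size)   # scene values in the overlap

    gain, offset = fit_gain_offset(scene, ref)
    corrected = apply_correction(scene, gain, offset)
    print(round(gain, 3), round(offset, 2))   # roughly undoes the simulated darkening
    ```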

  18. Development of an Acoustic Signal Analysis Tool “Auto-F” Based on the Temperament Scale

    NASA Astrophysics Data System (ADS)

    Modegi, Toshio

    The MIDI interface was originally designed for electronic musical instruments, but we consider that this music-note-based coding concept can be extended to general acoustic signal description. We proposed applying the MIDI technology to the coding of bio-medical auscultation sound signals such as heart sounds for retrieving medical records and performing telemedicine. We have then tried to extend our encoding targets to include vocal sounds, natural sounds and electronic bio-signals such as ECG, using the Generalized Harmonic Analysis method. Currently, we are trying to separate vocal sounds included in popular songs and encode both the vocal sounds and the background instrumental sounds into separate MIDI channels. We are also trying to extract articulation parameters such as MIDI pitch-bend parameters in order to reproduce natural acoustic sounds using a GM-standard MIDI tone generator. In this paper, we present the overall algorithm of our acoustic signal analysis tool, developed on the basis of those research works, which can analyze given time-based signals on the musical temperament scale. The prominent features of this tool are that it produces high-precision MIDI codes, which reproduce signals similar to the given source signal on a GM-standard MIDI tone generator, and that it provides the analysis results as text in XML format.
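
    A small sketch of the basic temperament-scale mapping such a tool relies on: a detected frequency is converted to the nearest equal-tempered MIDI note plus a pitch-bend value for the residual, under the standard A4 = 440 Hz convention and an assumed +/-2 semitone bend range; the tool's actual analysis, based on Generalized Harmonic Analysis, is far more involved.

    ```python
    import math

    def freq_to_midi(freq_hz, bend_range_semitones=2.0, a4=440.0):
        """Map a frequency to (MIDI note, 14-bit pitch-bend value) on the equal-tempered scale."""
        semitones = 69.0 + 12.0 * math.log2(freq_hz / a4)    # fractional MIDI note number
        note = int(round(semitones))
        residual = semitones - note                          # deviation in semitones
        bend = int(round(8192 + residual / bend_range_semitones * 8192))
        return note, max(0, min(16383, bend))

    for f in (261.63, 440.0, 452.0):      # C4, A4, and a slightly sharp A4
        print(f, freq_to_midi(f))
    ```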

  19. An Assessment of Patient Navigator Activities in Breast Cancer Patient Navigation Programs Using a Nine-Principle Framework

    PubMed Central

    Gunn, Christine M; Clark, Jack A; Battaglia, Tracy A; Freund, Karen M; Parker, Victoria A

    2014-01-01

    Objective To determine how closely a published model of navigation reflects the practice of navigation in breast cancer patient navigation programs. Data Source Observational field notes describing patient navigator activities collected from 10 purposefully sampled, foundation-funded breast cancer navigation programs in 2008–2009. Study Design An exploratory study evaluated a model framework for patient navigation published by Harold Freeman by using an a priori coding scheme based on model domains. Data Collection Field notes were compiled and coded. Inductive codes were added during analysis to characterize activities not included in the original model. Principal Findings Programs were consistent with individual-level principles representing tasks focused on individual patients. There was variation with respect to program-level principles that related to program organization and structure. Program characteristics such as the use of volunteer or clinical navigators were identified as contributors to patterns of model concordance. Conclusions This research provides a framework for defining the navigator role as focused on eliminating barriers through the provision of individual-level interventions. The diversity observed at the program level in these programs was a reflection of implementation according to target population. Further guidance may be required to assist patient navigation programs to define and tailor goals and measurement to community needs. PMID:24820445

  20. Indirect Observation in Everyday Contexts: Concepts and Methodological Guidelines within a Mixed Methods Framework.

    PubMed

    Anguera, M Teresa; Portell, Mariona; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana

    2018-01-01

    Indirect observation is a recent concept in systematic observation. It largely involves analyzing textual material generated either indirectly from transcriptions of audio recordings of verbal behavior in natural settings (e.g., conversation, group discussions) or directly from narratives (e.g., letters of complaint, tweets, forum posts). It may also feature seemingly unobtrusive objects that can provide relevant insights into daily routines. All these materials constitute an extremely rich source of information for studying everyday life, and they are continuously growing with the burgeoning of new technologies for data recording, dissemination, and storage. Narratives are an excellent vehicle for studying everyday life, and quantitization is proposed as a means of integrating qualitative and quantitative elements. However, this analysis requires a structured system that enables researchers to analyze varying forms and sources of information objectively. In this paper, we present a methodological framework detailing the steps and decisions required to quantitatively analyze a set of data that was originally qualitative. We provide guidelines on study dimensions, text segmentation criteria, ad hoc observation instruments, data quality controls, and coding and preparation of text for quantitative analysis. The quality control stage is essential to ensure that the code matrices generated from the qualitative data are reliable. We provide examples of how an indirect observation study can produce data for quantitative analysis and also describe the different software tools available for the various stages of the process. The proposed method is framed within a specific mixed methods approach that involves collecting qualitative data and subsequently transforming these into matrices of codes (not frequencies) for quantitative analysis to detect underlying structures and behavioral patterns. The data collection and quality control procedures fully meet the requirement of flexibility and provide new perspectives on data integration in the study of biopsychosocial aspects in everyday contexts.

  1. HELIOS: An Open-source, GPU-accelerated Radiative Transfer Code for Self-consistent Exoplanetary Atmospheres

    NASA Astrophysics Data System (ADS)

    Malik, Matej; Grosheintz, Luc; Mendonça, João M.; Grimm, Simon L.; Lavie, Baptiste; Kitzmann, Daniel; Tsai, Shang-Min; Burrows, Adam; Kreidberg, Laura; Bedell, Megan; Bean, Jacob L.; Stevenson, Kevin B.; Heng, Kevin

    2017-02-01

    We present the open-source radiative transfer code named HELIOS, which is constructed for studying exoplanetary atmospheres. In its initial version, the model atmospheres of HELIOS are one-dimensional and plane-parallel, and the equation of radiative transfer is solved in the two-stream approximation with nonisotropic scattering. A small set of the main infrared absorbers is employed, computed with the opacity calculator HELIOS-K and combined using a correlated-k approximation. The molecular abundances originate from validated analytical formulae for equilibrium chemistry. We compare HELIOS with the work of Miller-Ricci & Fortney using a model of GJ 1214b, and perform several tests, where we find: model atmospheres with single-temperature layers struggle to converge to radiative equilibrium; k-distribution tables constructed with ≳ 0.01 cm⁻¹ resolution in the opacity function (≲ 10³ points per wavenumber bin) may result in errors ≳ 1%-10% in the synthetic spectra; and a diffusivity factor of 2 approximates well the exact radiative transfer solution in the limit of pure absorption. We construct “null-hypothesis” models (chemical equilibrium, radiative equilibrium, and solar elemental abundances) for six hot Jupiters. We find that the dayside emission spectra of HD 189733b and WASP-43b are consistent with the null hypothesis, while the null-hypothesis models consistently underpredict the observed fluxes of WASP-8b, WASP-12b, WASP-14b, and WASP-33b. We demonstrate that our results are relatively insensitive to the choice of stellar models (blackbody, Kurucz, or PHOENIX) and metallicity, but are strongly affected by higher carbon-to-oxygen ratios. The code is publicly available as part of the Exoclimes Simulation Platform (exoclime.net).
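    The quoted diffusivity-factor result can be checked independently of HELIOS: for a purely absorbing, isothermal layer the exact angle-integrated flux transmittance is 2E₃(τ), which a diffusivity factor D approximates as exp(-Dτ). The short sketch below only reproduces this textbook comparison and assumes nothing about the HELIOS implementation.

        import numpy as np
        from scipy.special import expn

        tau = np.array([0.01, 0.1, 0.3, 1.0, 3.0])       # grey optical depths of the layer
        exact = 2.0 * expn(3, tau)                        # exact flux transmittance, pure absorption
        for D in (1.66, 2.0):
            approx = np.exp(-D * tau)
            rel_err = 100 * (approx - exact) / exact
            print(f"D = {D}: relative error (%) =", np.round(rel_err, 2))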

  2. Differences in the causes of death of HIV-positive patients in a cohort study by data sources and coding algorithms.

    PubMed

    Hernando, Victoria; Sobrino-Vegas, Paz; Burriel, M Carmen; Berenguer, Juan; Navarro, Gemma; Santos, Ignacio; Reparaz, Jesús; Martínez, M Angeles; Antela, Antonio; Gutiérrez, Félix; del Amo, Julia

    2012-09-10

    To compare causes of death (CoDs) from two independent sources, the National Basic Death File (NBDF) and deaths reported to the Spanish HIV Research cohort [Cohorte de adultos con infección por VIH de la Red de Investigación en SIDA (CoRIS)], and to compare two coding algorithms: the International Classification of Diseases, 10th revision (ICD-10) and the revised version of Coding Causes of Death in HIV (revised CoDe). Between 2004 and 2008, CoDs were obtained from the cohort records (free text, multiple causes) and also from the NBDF (ICD-10). CoDs from CoRIS were coded according to ICD-10 and revised CoDe by a panel. Deaths were compared across 13 disease groups: HIV/AIDS, liver diseases, malignancies, infections, cardiovascular, blood disorders, pulmonary, central nervous system, drug use, external, suicide, other causes and ill-defined. There were 160 deaths. Concordance for the 13 groups was observed in 111 (69%) cases for the two sources and in 115 (72%) cases for the two coding algorithms. According to revised CoDe, the commonest CoDs were HIV/AIDS (53%), non-AIDS malignancies (11%) and liver-related (9%); these percentages were similar, 57, 10 and 8%, respectively, for NBDF (coded as ICD-10). When using ICD-10 to code deaths in CoRIS, wherein HIV infection was known in everyone, the proportion of non-AIDS malignancies was 13% and liver-related causes accounted for 3%, while HIV/AIDS reached 70% because liver-related, infectious and ill-defined causes were coded as HIV/AIDS. There is substantial variation in CoDs in HIV-infected persons according to sources and algorithms. ICD-10 in patients known to be HIV-positive overestimates HIV/AIDS-related deaths at the expense of underestimating liver-related diseases, infections and ill-defined causes. Revised CoDe seems to be the best option for cohort studies.

  3. Improving accuracy of clinical coding in surgery: collaboration is key.

    PubMed

    Heywood, Nick A; Gill, Michael D; Charlwood, Natasha; Brindle, Rachel; Kirwan, Cliona C

    2016-08-01

    Clinical coding data provide the basis for Hospital Episode Statistics and Healthcare Resource Group codes. High accuracy of this information is required for payment by results, allocation of health and research resources, and public health data and planning. We sought to identify the level of accuracy of clinical coding in general surgical admissions across hospitals in the Northwest of England. Clinical coding departments identified a total of 208 emergency general surgical patients discharged between 1st March and 15th August 2013 from seven hospital trusts (median = 20, range = 16-60). Blinded re-coding was performed by a senior clinical coder and a clinician, with the results compared with the original coding outcome. Recorded codes were generated from OPCS-4 & ICD-10. Of all cases, 194 of 208 (93.3%) had at least one coding error and 9 of 208 (4.3%) had errors in both primary diagnosis and primary procedure. Errors were found in 64 of 208 (30.8%) of primary diagnoses and 30 of 137 (21.9%) of primary procedure codes. The median tariff using the original codes was £1411.50 (range, £409-9138). Re-calculation using the updated clinical codes showed a median tariff of £1387.50 (range, £406-10,102), P = 0.997. The most frequent reasons for incorrect coding were "coder error" and a requirement for "clinical interpretation of notes". Errors in clinical coding are multifactorial and have a significant impact on primary diagnosis, potentially affecting the accuracy of Hospital Episode Statistics data and in turn the allocation of health care resources and public health planning. As we move toward surgeon-specific outcomes, surgeons should increase collaboration with coding departments to ensure the system is robust. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Source Methodology for Turbofan Noise Prediction (SOURCE3D Technical Documentation)

    NASA Technical Reports Server (NTRS)

    Meyer, Harold D.

    1999-01-01

    This report provides the analytical documentation for the SOURCE3D Rotor Wake/Stator Interaction Code. It derives the equations for the rotor scattering coefficients and the stator source vector and scattering coefficients that are needed for use in TFANS (Theoretical Fan Noise Design/Prediction System). SOURCE3D treats the rotor and stator as isolated source elements. TFANS uses this information, along with scattering coefficients for inlet and exit elements, and provides complete noise solutions for turbofan engines. SOURCE3D is composed of a collection of FORTRAN programs that have been obtained by extending the approach of the earlier V072 Rotor Wake/Stator Interaction Code. Similar to V072, it treats the rotor and stator as a collection of blades and vanes having zero thickness and camber contained in an infinite, hardwall annular duct. SOURCE3D adds important features to the V072 capability: a rotor element, swirl flow and vorticity waves, actuator disks for flow turning, and combined rotor/actuator-disk and stator/actuator-disk elements. These items allow reflections from the rotor, frequency scattering, and mode trapping, thus providing more complete noise predictions than previously possible. The code has been thoroughly verified through comparison with D.B. Hanson's CUP2D two-dimensional code using a narrow annulus test case.

  5. 22 CFR 228.11 - Source and origin of commodities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 (2010-04-01). Section 228.11, Foreign Relations, AGENCY FOR INTERNATIONAL DEVELOPMENT, RULES ON SOURCE, ORIGIN AND... Commodity Procurement Transactions for USAID Financing. § 228.11 Source and origin of commodities. (a) The...

  6. Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks

    PubMed Central

    Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly

    2014-01-01

    Hybrid mobile applications (apps) combine the features of Web applications and “native” mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources—file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies “bridges” that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources—the ability to read and write contacts list, local files, etc.—to dozens of potentially malicious Web domains. We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign-origin Web content and explain why they are ineffectual. We then present NoFrak, a capability-based defense against fracking attacks. NoFrak is platform-independent, compatible with any framework and embedded browser, requires no changes to the code of the existing hybrid apps, and does not break their advertising-supported business model. PMID:25485311

  7. The lack of foundation in the mechanism on which are based the physico-chemical theories for the origin of the genetic code is counterposed to the credible and natural mechanism suggested by the coevolution theory.

    PubMed

    Di Giulio, Massimo

    2016-06-21

    I analyze the mechanism on which the majority of theories are based that place the physico-chemical properties of amino acids at the center of the origin of the genetic code. As this mechanism relies on an excessive number of mutational steps, I conclude that it could not have been operative, or, if operative, that it would not have allowed the predictions of these theories to be fully realized, because the mechanism evidently carried a high degree of indeterminacy. I do so by critiquing the four-column theory of the origin of the genetic code (Higgs, 2009) and by replying to the criticism that was directed towards the coevolution theory of the origin of the genetic code. In this context, I suggest a new hypothesis that clarifies the mechanism by which the codon domains of the precursor amino acids would have evolved, as predicted by the coevolution theory. This mechanism would have used particular elongation factors that constrained the evolution of all amino acids belonging to a given biosynthetic family to the progenitor pre-tRNA that first recognized the first codons to evolve in the codon domain of a given precursor amino acid. This happened because the elongation factors recognized two characteristics of the progenitor pre-tRNAs of precursor amino acids, which prevented the elongation factors from recognizing the pre-tRNAs belonging to the biosynthetic families of different precursor amino acids. Finally, I use Fisher's exact test to analyze the distribution, within the genetic code, of the biosynthetic classes of amino acids and of the polarity values of amino acids. This analysis appears to support the biosynthetic classes of amino acids, rather than their polarity values, as the main factor that led to the structuring of the genetic code, with the physico-chemical properties of amino acids playing only a subsidiary role in this evolution. Taken as a whole, the analysis leads to the conclusion that the coevolution theory of the origin of the genetic code is a highly corroborated theory. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. The Cortical Organization of Speech Processing: Feedback Control and Predictive Coding the Context of a Dual-Stream Model

    ERIC Educational Resources Information Center

    Hickok, Gregory

    2012-01-01

    Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…

  9. Active elimination of radio frequency interference for improved signal-to-noise ratio for in-situ NMR experiments in strong magnetic field gradients

    NASA Astrophysics Data System (ADS)

    Ibrahim, M.; Pardi, C. I.; Brown, T. W. C.; McDonald, P. J.

    2018-02-01

    Improvement in the signal-to-noise ratio of Nuclear Magnetic Resonance (NMR) systems may be achieved either by increasing the signal amplitude or by decreasing the noise. The noise has multiple origins - not all of which are strictly "noise": incoherent thermal noise originating in the probe and pre-amplifiers, probe ring down or acoustic noise and coherent externally broadcast radio frequency transmissions. The last cannot always be shielded in open access experiments. In this paper, we show that pulsed, low radio-frequency data communications are a significant source of broadcast interference. We explore two signal processing methods of de-noising short T2∗ NMR experiments corrupted by these communications: Linear Predictive Coding (LPC) and the Discrete Wavelet Transform (DWT). Results are shown for numerical simulations and experiments conducted under controlled conditions with pseudo radio frequency interference. We show that both the LPC and DWT methods have merit.
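    Of the two de-noising approaches compared in the paper, the discrete wavelet transform is the easier to sketch. The example below is not the authors' processing chain: it applies PyWavelets soft thresholding with the universal threshold to a synthetic, rapidly decaying signal corrupted by random noise and a pulsed interferer, and every waveform parameter is a hypothetical placeholder.

        import numpy as np
        import pywt

        rng = np.random.default_rng(1)
        t = np.linspace(0, 1e-3, 2048)                              # 1 ms record (hypothetical)
        fid = np.exp(-t / 1e-4) * np.cos(2 * np.pi * 50e3 * t)      # short-T2*-style decay
        rfi = 0.3 * np.sin(2 * np.pi * 120e3 * t) * (t > 4e-4)      # pulsed, broadcast-like interference
        noisy = fid + rfi + 0.05 * rng.standard_normal(t.size)

        coeffs = pywt.wavedec(noisy, "db8", level=6)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745              # robust noise estimate
        thr = sigma * np.sqrt(2 * np.log(noisy.size))               # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        denoised = pywt.waverec(coeffs, "db8")[: noisy.size]

        print("RMS deviation from clean signal before:", float(np.sqrt(np.mean((noisy - fid) ** 2))))
        print("RMS deviation from clean signal after: ", float(np.sqrt(np.mean((denoised - fid) ** 2))))

    This only illustrates the DWT step; the paper compares it with linear predictive coding on both simulated and experimental data.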

  10. OpenSWPC: an open-source integrated parallel simulation code for modeling seismic wave propagation in 3D heterogeneous viscoelastic media

    NASA Astrophysics Data System (ADS)

    Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi

    2017-07-01

    We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media based on the finite difference method at local-to-regional scales. The code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer as the absorbing boundary condition. A hybrid-style programming model using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations, such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC) format, are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documents in a public repository.

  11. Total x-ray power measurements in the Sandia LIGA program.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malinowski, Michael E.; Ting, Aili

    2005-08-01

    Total X-ray power measurements using aluminum block calorimetry and other techniques were made at LIGA X-ray scanner synchrotron beamlines located at both the Advanced Light Source (ALS) and the Advanced Photon Source (APS). This block calorimetry work was initially performed on LIGA beamline 3.3.1 of the ALS to provide experimental checks of the predictions of the LEX-D (LIGA Exposure-Development) code for LIGA X-ray exposures, version 7.56, the version of the code in use at the time the calorimetry was done. These experiments showed that it was necessary to use bend-magnet field strengths and electron storage ring energies different from the default values originally in the code in order to obtain good agreement between experiment and theory. The results indicated that agreement between LEX-D predictions and experiment could be as good as 5% only if (1) more accurate values of the ring energies, (2) local values of the magnet field at the beamline source point, and (3) the NIST database for X-ray/materials interactions were used as code inputs. These local magnetic field values and accurate ring energies, together with the NIST database, are now defaults in the newest release of LEX-D, version 7.61. Three-dimensional simulations of the temperature distributions in the aluminum calorimeter block for a typical ALS power measurement were made with the ABAQUS code and found to be in good agreement with the experimental temperature data. As an application of the block calorimetry technique, the X-ray power exiting the mirror in place at a LIGA scanner located at APS beamline 10 BM was measured with a calorimeter similar to the one used at the ALS. The overall results at the APS demonstrated the utility of calorimetry in helping to characterize the total X-ray power in LIGA beamlines. In addition to the block calorimetry work at the ALS and APS, a preliminary comparison of heat flux sensors, photodiodes and modified beam calorimeters as total X-ray power monitors was made at the ALS, beamline 3.3.1. This work showed that a modification of a commercially available heat flux sensor could result in a simple, direct-reading beam power meter that could be useful for monitoring total X-ray power in Sandia's LIGA exposure stations at the ALS, APS and Stanford Synchrotron Radiation Laboratory (SSRL).

  12. Hypersonic simulations using open-source CFD and DSMC solvers

    NASA Astrophysics Data System (ADS)

    Casseau, V.; Scanlon, T. J.; John, B.; Emerson, D. R.; Brown, R. E.

    2016-11-01

    Hypersonic hybrid hydrodynamic-molecular gas flow solvers are required to satisfy the two essential requirements of any high-speed reacting code, these being physical accuracy and computational efficiency. The James Weir Fluids Laboratory at the University of Strathclyde is currently developing an open-source hybrid code which will eventually reconcile the direct simulation Monte-Carlo method, making use of the OpenFOAM application called dsmcFoam, with the newly coded open-source two-temperature computational fluid dynamics solver named hy2Foam. In conjunction with employing the CVDV chemistry-vibration model in hy2Foam, novel use is made of the QK rates in a CFD solver. In this paper, further testing is performed, in particular with the CFD solver, to ensure its efficacy before considering more advanced test cases. The hy2Foam and dsmcFoam codes have been shown to compare reasonably well, thus providing a useful basis for other codes to compare against.

  13. LDPC-PPM Coding Scheme for Optical Communication

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael

    2009-01-01

    In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (â) of the coded data, which is then processed by a decoder to obtain an estimate (û) of the original data.
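    The PPM mapping and the Poisson channel model referred to above can be sketched in a few lines. The example below is not the proposed LDPC-PPM system (there is no LDPC encoder or iterative decoder here); the PPM order and the mean signal and background photon counts are hypothetical values chosen only to show how coded bits map to PPM slots and how the Poisson channel is simulated.

        import numpy as np

        rng = np.random.default_rng(2)
        M = 16                               # PPM order: log2(M) = 4 bits per symbol
        bits_per_sym = int(np.log2(M))
        ns, nb = 4.0, 0.2                    # mean signal and background photon counts (assumed)

        bits = rng.integers(0, 2, size=1000 * bits_per_sym)          # stand-in for LDPC-coded bits
        symbols = bits.reshape(-1, bits_per_sym) @ (1 << np.arange(bits_per_sym))[::-1]

        # Poisson channel: every slot collects background counts; the signal slot gets extra counts
        counts = rng.poisson(nb, size=(symbols.size, M))
        counts[np.arange(symbols.size), symbols] += rng.poisson(ns, size=symbols.size)

        detected = counts.argmax(axis=1)     # hard maximum-count slot detection
        print("uncoded symbol error rate:", float(np.mean(detected != symbols)))

    A real receiver would instead pass per-slot likelihoods to the LDPC decoder rather than making hard symbol decisions.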

  14. Fan Noise Prediction System Development: Source/Radiation Field Coupling and Workstation Conversion for the Acoustic Radiation Code

    NASA Technical Reports Server (NTRS)

    Meyer, H. D.

    1993-01-01

    The Acoustic Radiation Code (ARC) is a finite element program used on the IBM mainframe to predict far-field acoustic radiation from a turbofan engine inlet. In this report, requirements for developers of internal aerodynamic codes regarding use of their program output as input for the ARC are discussed. More specifically, the particular input needed from the Bolt, Beranek and Newman/Pratt and Whitney (turbofan source noise generation) Code (BBN/PWC) is described. In a separate analysis, a method of coupling the source and radiation models, one that recognizes waves crossing the interface in both directions, has been derived. A preliminary version of the coupled code has been developed and used for initial evaluation of coupling issues. Results thus far have shown that reflection from the inlet is sufficient to indicate that full coupling of the source and radiation fields is needed for accurate noise predictions. Also, for this contract, the ARC has been modified for use on Sun and Silicon Graphics Iris UNIX workstations. Changes and additions involved in this effort are described in an appendix.

  15. Experimental benchmark of kinetic simulations of capacitively coupled plasmas in molecular gases

    NASA Astrophysics Data System (ADS)

    Donkó, Z.; Derzsi, A.; Korolov, I.; Hartmann, P.; Brandt, S.; Schulze, J.; Berger, B.; Koepke, M.; Bruneau, B.; Johnson, E.; Lafleur, T.; Booth, J.-P.; Gibson, A. R.; O'Connell, D.; Gans, T.

    2018-01-01

    We discuss the origin of uncertainties in the results of numerical simulations of low-temperature plasma sources, focusing on capacitively coupled plasmas. These sources can be operated in various gases/gas mixtures, over a wide domain of excitation frequency, voltage, and gas pressure. At low pressures, the non-equilibrium character of the charged particle transport prevails and particle-based simulations become the primary tools for their numerical description. The particle-in-cell method, complemented with Monte Carlo type description of collision processes, is a well-established approach for this purpose. Codes based on this technique have been developed by several authors/groups, and have been benchmarked with each other in some cases. Such benchmarking demonstrates the correctness of the codes, but the underlying physical model remains unvalidated. This is a key point, as this model should ideally account for all important plasma chemical reactions as well as for the plasma-surface interaction via including specific surface reaction coefficients (electron yields, sticking coefficients, etc). In order to test the models rigorously, comparison with experimental ‘benchmark data’ is necessary. Examples will be given regarding the studies of electron power absorption modes in O2, and CF4-Ar discharges, as well as on the effect of modifications of the parameters of certain elementary processes on the computed discharge characteristics in O2 capacitively coupled plasmas.

  16. Emission spectra of photoionized plasmas induced by intense EUV pulses: Experimental and theoretical investigations

    NASA Astrophysics Data System (ADS)

    Saber, Ismail; Bartnik, Andrzej; Skrzeczanowski, Wojciech; Wachulak, Przemysław; Jarocki, Roman; Fiedorowicz, Henryk

    2017-03-01

    Experimental measurements and numerical modeling of emission spectra of photoionized plasmas in the ultraviolet and visible (UV/Vis) range for noble gases have been investigated. The photoionized plasmas were created using a laser-produced plasma (LPP) extreme ultraviolet (EUV) source. The source was based on a gas puff target irradiated with a 10 ns/10 J/10 Hz Nd:YAG laser. The EUV radiation pulses were collected and focused using a grazing-incidence multifoil EUV collector. The laser pulses were focused on a gas stream injected into a vacuum chamber synchronously with the EUV pulses. Irradiation of the gases resulted in the formation of low-temperature photoionized plasmas emitting radiation in the UV/Vis spectral range. The photoionized plasmas produced this way consisted of atoms and ions in various ionization states. The dominant observed spectral lines originated from radiative transitions in singly charged ions. To assist in the theoretical interpretation of the measured spectra, an atomic code based on Cowan's programs and the collisional-radiative PrismSPECT code have been used to calculate theoretical spectra. A comparison of the calculated spectral lines with the experimentally obtained results is presented. The electron temperature in the plasma is estimated using the Boltzmann plot method, under the assumption that a local thermodynamic equilibrium (LTE) condition holds for the first few ionization states. A brief discussion of the measured and computed spectra is given.
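    The Boltzmann plot method mentioned above can be illustrated in a few lines: for lines of one ionization stage in LTE, ln(Iλ/(gA)) falls linearly with the upper-level energy, and the slope gives -1/(k_B T_e). The line list below is entirely hypothetical and is used only to show the fitting step; it is not data from this experiment.

        import numpy as np

        k_B = 8.617e-5                         # Boltzmann constant, eV/K

        # Hypothetical line list for one ionization stage:
        # intensity I (arb.), wavelength (nm), statistical weight g,
        # transition probability A (1/s), upper-level energy E_up (eV)
        I      = np.array([70.0, 16.0, 8.4, 3.3])
        lam_nm = np.array([390.0, 420.0, 450.0, 480.0])
        g      = np.array([5, 3, 5, 7])
        A      = np.array([1.2e8, 8.0e7, 5.0e7, 3.0e7])
        E_up   = np.array([3.0, 3.5, 4.1, 4.8])

        # Boltzmann plot: ln(I*lambda/(g*A)) = const - E_up/(k_B*T_e)
        y = np.log(I * lam_nm / (g * A))
        slope, intercept = np.polyfit(E_up, y, 1)
        T_e = -1.0 / (k_B * slope)
        print(f"estimated electron temperature: {T_e:.0f} K")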

  17. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.

  18. Astrophysics Source Code Library Enhancements

    NASA Astrophysics Data System (ADS)

    Hanisch, R. J.; Allen, A.; Berriman, G. B.; DuPrie, K.; Mink, J.; Nemiroff, R. J.; Schmidt, J.; Shamir, L.; Shortridge, K.; Taylor, M.; Teuben, P. J.; Wallin, J.

    2015-09-01

    The Astrophysics Source Code Library (ASCL)1 is a free online registry of codes used in astronomy research; it currently contains over 900 codes and is indexed by ADS. The ASCL has recently moved a new infrastructure into production. The new site provides a true database for the code entries and integrates the WordPress news and information pages and the discussion forum into one site. Previous capabilities are retained and permalinks to ascl.net continue to work. This improvement offers more functionality and flexibility than the previous site, is easier to maintain, and offers new possibilities for collaboration. This paper covers these recent changes to the ASCL.

  19. Scalable video transmission over Rayleigh fading channels using LDPC codes

    NASA Astrophysics Data System (ADS)

    Bansal, Manu; Kondi, Lisimachos P.

    2005-03-01

    In this paper, we investigate the important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining good decoded video quality and resilience to channel impairments. Our system consists of a video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. A cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the 'blocks' of source data and an erasure-correcting systematic Reed-Solomon (RS) code as the column code. In both schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding, ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. A rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions, and both of the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.
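    The transmission model assumed above, a Rayleigh flat-fading channel with AWGN, is easy to simulate on its own. The sketch below carries none of the paper's source or channel coding; it simply measures the uncoded BPSK bit error rate over such a channel with coherent detection and compares it against the known closed-form result, with the Eb/N0 value chosen arbitrarily.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 200_000
        ebn0_db = 10.0
        ebn0 = 10 ** (ebn0_db / 10)

        bits = rng.integers(0, 2, size=n)
        x = 1 - 2 * bits                                     # BPSK symbols +/-1

        h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)   # Rayleigh fading
        noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt(1 / (2 * ebn0))
        y = h * x + noise

        bits_hat = (np.real(np.conj(h) * y) < 0).astype(int) # coherent detection, perfect CSI
        ber = np.mean(bits_hat != bits)
        theory = 0.5 * (1 - np.sqrt(ebn0 / (1 + ebn0)))      # closed form for this special case
        print(f"simulated BER = {ber:.4f}, theoretical = {theory:.4f}")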

  20. Methodology of decreasing software complexity using ontology

    NASA Astrophysics Data System (ADS)

    Dąbrowska-Kubik, Katarzyna

    2015-09-01

    In this paper a model of a web application's source code, based on the OSD ontology (Ontology for Software Development), is proposed. This model is applied to the implementation and maintenance phases of the software development process through the DevOntoCreator tool [5]. The aim of this solution is to decrease the software complexity of that source code using many different maintenance techniques, such as creation of documentation and elimination of dead code, cloned code, or previously known bugs [1][2]. With this approach, savings in the software maintenance costs of web applications will be possible.

  1. Bit-wise arithmetic coding for data compression

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
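    The gap between coding the quantizer outputs jointly and coding their codeword bits independently can be estimated numerically. The sketch below is not the article's coder; it only measures, for a uniformly quantized Gaussian source with assumed parameters, the symbol entropy versus the sum of per-bit entropies that a coder treating the codeword bits as independent would approach.

        import numpy as np

        rng = np.random.default_rng(3)
        n_bits = 4                                     # fixed-length codeword size (assumed)
        step = 0.5                                     # quantizer step size (assumed)

        x = rng.standard_normal(200_000)               # IID Gaussian source
        q = np.clip(np.round(x / step), -(2**(n_bits-1)), 2**(n_bits-1) - 1)
        codewords = (q + 2**(n_bits-1)).astype(int)    # shift into 0 .. 2**n_bits - 1

        # Symbol entropy: what a coder modeling whole codewords could approach
        p = np.bincount(codewords, minlength=2**n_bits) / codewords.size
        H_symbol = -np.sum(p[p > 0] * np.log2(p[p > 0]))

        # Bit-wise entropy: treat each codeword bit as an independent binary source
        bits = (codewords[:, None] >> np.arange(n_bits)[::-1]) & 1
        p1 = np.clip(bits.mean(axis=0), 1e-12, 1 - 1e-12)
        H_bits = float(np.sum(-p1 * np.log2(p1) - (1 - p1) * np.log2(1 - p1)))

        print(f"symbol entropy  : {H_symbol:.3f} bits/sample")
        print(f"bit-wise entropy: {H_bits:.3f} bits/sample (penalty for ignoring bit dependence)")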

  2. Astrophysics Source Code Library -- Now even better!

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Schmidt, Judy; Berriman, Bruce; DuPrie, Kimberly; Hanisch, Robert J.; Mink, Jessica D.; Nemiroff, Robert J.; Shamir, Lior; Shortridge, Keith; Taylor, Mark B.; Teuben, Peter J.; Wallin, John F.

    2015-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) is a free online registry of codes used in astronomy research. Indexed by ADS, it now contains nearly 1,000 codes and with recent major changes, is better than ever! The resource has a new infrastructure that offers greater flexibility and functionality for users, including an easier submission process, better browsing, one-click author search, and an RSS feeder for news. The new database structure is easier to maintain and offers new possibilities for collaboration. Come see what we've done!

  3. Study of statistical coding for digital TV

    NASA Technical Reports Server (NTRS)

    Gardenhire, L. W.

    1972-01-01

    The results are presented for a detailed study to determine a pseudo-optimum statistical code to be installed in a digital TV demonstration test set. Studies of source encoding were undertaken, using redundancy removal techniques in which the picture is reproduced within a preset tolerance. A method of source encoding, which preliminary studies show to be encouraging, is statistical encoding. A pseudo-optimum code was defined and the associated performance of the code was determined. The format was fixed at 525 lines per frame, 30 frames per second, as per commercial standards.

  4. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksized transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The choice of which coder is used to code any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
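    A threshold-driven choice among DCT coders can be pictured with a deliberately simplified two-coder version: try a coarse quantizer on each block, and switch to a finer one only where the resulting distortion exceeds a threshold. The sketch below is not the MBC algorithm (there is no vector quantization, variable block splitting or rate signalling); the block size, quantizer steps and threshold are assumptions for illustration.

        import numpy as np
        from scipy.fft import dctn, idctn

        def code_block(block, q):
            """DCT-transform, uniformly quantize and reconstruct one block."""
            c = np.round(dctn(block, norm="ortho") / q) * q
            return idctn(c, norm="ortho")

        def two_coder_mixture(img, bsize=16, q_coarse=40.0, q_fine=10.0, dist_thr=30.0):
            """Coarse coder first; blocks whose MSE exceeds the threshold get the fine coder."""
            out = np.empty_like(img, dtype=float)
            fine = np.zeros((img.shape[0] // bsize, img.shape[1] // bsize), dtype=int)
            for i in range(0, img.shape[0], bsize):
                for j in range(0, img.shape[1], bsize):
                    blk = img[i:i+bsize, j:j+bsize].astype(float)
                    rec = code_block(blk, q_coarse)
                    if np.mean((rec - blk) ** 2) > dist_thr:
                        rec = code_block(blk, q_fine)
                        fine[i // bsize, j // bsize] = 1
                    out[i:i+bsize, j:j+bsize] = rec
            return out, fine

        # Hypothetical test image: smooth gradient plus one detailed patch
        img = np.add.outer(np.arange(128), np.arange(128)).astype(float)
        img[32:64, 32:64] += 80 * np.random.default_rng(4).random((32, 32))
        rec, fine = two_coder_mixture(img)
        print("blocks sent to the fine (higher-rate) coder:", int(fine.sum()), "of", fine.size)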

  5. Death Certification Errors and the Effect on Mortality Statistics.

    PubMed

    McGivern, Lauri; Shulman, Leanne; Carney, Jan K; Shapiro, Steven; Bundock, Elizabeth

    Errors in cause and manner of death on death certificates are common and affect families, mortality statistics, and public health research. The primary objective of this study was to characterize errors in the cause and manner of death on death certificates completed by non-Medical Examiners. A secondary objective was to determine the effects of errors on national mortality statistics. We retrospectively compared 601 death certificates completed between July 1, 2015, and January 31, 2016, from the Vermont Electronic Death Registration System with clinical summaries from medical records. Medical Examiners, blinded to original certificates, reviewed summaries, generated mock certificates, and compared mock certificates with original certificates. They then graded errors using a scale from 1 to 4 (higher numbers indicated increased impact on interpretation of the cause) to determine the prevalence of minor and major errors. They also compared International Classification of Diseases, 10th Revision (ICD-10) codes on original certificates with those on mock certificates. Of 601 original death certificates, 319 (53%) had errors; 305 (51%) had major errors; and 59 (10%) had minor errors. We found no significant differences by certifier type (physician vs nonphysician). We did find significant differences in major errors in place of death (P < .001). Certificates for deaths occurring in hospitals were more likely to have major errors than certificates for deaths occurring at a private residence (59% vs 39%, P < .001). A total of 580 (93%) death certificates had a change in ICD-10 codes between the original and mock certificates, of which 348 (60%) had a change in the underlying cause-of-death code. Error rates on death certificates in Vermont are high and extend to ICD-10 coding, thereby affecting national mortality statistics. Surveillance and certifier education must expand beyond local and state efforts. Simplifying and standardizing underlying literal text for cause of death may improve accuracy, decrease coding errors, and improve national mortality statistics.

  6. Acta Aeronautica et Astronautica Sinica.

    DTIC Science & Technology

    1982-07-28

    English pages: 212. Source: Acta Aeronautica et Astronautica Sinica, Vol. 2, Nr. 4, December 1981, pp. 1... Statements or theories advocated or implied are those of the source and do not necessarily reflect the position or opinion of the Foreign Technology Division. Nomenclature fragments from the record: (axial) solution section code; lower corner symbols; i, code of the sectional cylindrical coordinate system; j, k, radial and peripheral codes of the solution.

  7. Are procedures codes in claims data a reliable indicator of intraoperative splenic injury compared with clinical registry data?

    PubMed

    Stey, Anne M; Ko, Clifford Y; Hall, Bruce Lee; Louie, Rachel; Lawson, Elise H; Gibbons, Melinda M; Zingmond, David S; Russell, Marcia M

    2014-08-01

    Identifying iatrogenic injuries using existing data sources is important for improved transparency in the occurrence of intraoperative events. There is evidence that procedure codes are reliably recorded in claims data. The objective of this study was to assess whether concurrent splenic procedure codes in patients undergoing colectomy procedures are reliably coded in claims data as compared with clinical registry data. Patients who underwent colectomy procedures in the absence of neoplastic diagnosis codes were identified from American College of Surgeons (ACS) NSQIP data linked with Medicare inpatient claims data file (2005 to 2008). A κ statistic was used to assess coding concordance between ACS NSQIP and Medicare inpatient claims, with ACS NSQIP serving as the reference standard. A total of 11,367 colectomy patients were identified from 212 hospitals. There were 114 patients (1%) who had a concurrent splenic procedure code recorded in either ACS NSQIP or Medicare inpatient claims. There were 7 patients who had a splenic injury diagnosis code recorded in either data source. Agreement of splenic procedure codes between the data sources was substantial (κ statistic 0.72; 95% CI, 0.64-0.79). Medicare inpatient claims identified 81% of the splenic procedure codes recorded in ACS NSQIP, and 99% of the patients without a splenic procedure code. It is feasible to use Medicare claims data to identify splenic injuries occurring during colectomy procedures, as claims data have moderate sensitivity and excellent specificity for capturing concurrent splenic procedure codes compared with ACS NSQIP. Copyright © 2014 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
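    The κ statistic used above to quantify coding agreement between the registry and the claims data can be reproduced generically. The sketch below uses scikit-learn's cohen_kappa_score on hypothetical paired indicators whose counts are loosely patterned on the abstract (about 1% prevalence and roughly 81% sensitivity); it is not the study's data or analysis code.

        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        rng = np.random.default_rng(5)
        n = 11367                                     # cohort size quoted in the abstract

        # Hypothetical paired indicators: concurrent splenic procedure coded? (1 = yes)
        registry = np.zeros(n, dtype=int)
        registry[rng.choice(n, size=100, replace=False)] = 1
        claims = registry.copy()
        missed = rng.choice(np.where(registry == 1)[0], size=19, replace=False)
        claims[missed] = 0                            # claims miss some registry-coded procedures
        extra = rng.choice(np.where(registry == 0)[0], size=15, replace=False)
        claims[extra] = 1                             # and add a few not in the registry

        kappa = cohen_kappa_score(registry, claims)
        sens = np.mean(claims[registry == 1] == 1)
        spec = np.mean(claims[registry == 0] == 0)
        print(f"kappa = {kappa:.2f}, sensitivity = {sens:.2f}, specificity = {spec:.4f}")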

  8. Evaluation and utilization of beam simulation codes for the SNS ion source and low energy beam transport development

    NASA Astrophysics Data System (ADS)

    Han, B. X.; Welton, R. F.; Stockli, M. P.; Luciano, N. P.; Carmichael, J. R.

    2008-02-01

    The beam simulation codes PBGUNS, SIMION, and LORENTZ-3D were evaluated by modeling the well-diagnosed SNS baseline ion source and low energy beam transport (LEBT) system. An investigation was then conducted using these codes to assist our ion source and LEBT development effort, which is directed at meeting the SNS operational goals as well as the power-upgrade project goals. A high-efficiency H- extraction system, as well as magnetic and electrostatic LEBT configurations capable of transporting up to 100 mA, are studied using these simulation tools.

  9. OFFSET - RAY TRACING OPTICAL ANALYSIS OF OFFSET SOLAR COLLECTOR FOR SPACE STATION SOLAR DYNAMIC POWER SYSTEM

    NASA Technical Reports Server (NTRS)

    Jefferies, K.

    1994-01-01

    OFFSET is a ray tracing computer code for optical analysis of a solar collector. The code models the flux distributions within the receiver cavity produced by reflections from the solar collector. It was developed to model the offset solar collector of the solar dynamic electric power system being developed for Space Station Freedom. OFFSET has been used to improve the understanding of the collector-receiver interface and to guide the efforts of NASA contractors also researching the optical components of the power system. The collector for Space Station Freedom consists of 19 hexagonal panels each containing 24 triangular, reflective facets. Current research is geared toward optimizing flux distribution inside the receiver via changes in collector design and receiver orientation. OFFSET offers many options for experimenting with the design of the system. The offset parabolic collector model configuration is determined by an input file of facet corner coordinates. The user may choose other configurations by changing this file, but to simulate collectors that have other than 19 groups of 24 triangular facets would require modification of the FORTRAN code. Each of the roughly 500 facets in the assembled collector may be independently aimed to smooth out, or tailor, the flux distribution on the receiver's wall. OFFSET simulates the effects of design changes such as in receiver aperture location, tilt angle, and collector facet contour. Unique features of OFFSET include: 1) equations developed to pseudo-randomly select ray originating sources on the Sun which appear evenly distributed and include solar limb darkening; 2) Cone-optics technique used to add surface specular error to the ray originating sources to determine the apparent ray sources of the reflected sun; 3) choice of facet reflective surface contour -- spherical, ideal parabolic, or toroidal; 4) Gaussian distributions of radial and tangential components of surface slope error added to the surface normals at the ten nodal points on each facet; and 5) color contour plots of receiver incident flux distribution generated by PATRAN processing of FORTRAN computer code output. OFFSET output includes a file of input data for confirmation, a PATRAN results file containing the values necessary to plot the flux distribution at the receiver surface, a PATRAN results file containing the intensity distribution on a 40 x 40 cm area of the receiver aperture plane, a data file containing calculated information on the system configuration, a file including the X-Y coordinates of the target points of each collector facet on the aperture opening, and twelve P/PLOT input data files to allow X-Y plotting of various results data. OFFSET is written in FORTRAN (70%) for the IBM VM operating system. The code contains PATRAN statements (12%) and P/PLOT statements (18%) for generating plots. Once the program has been run on VM (or an equivalent system), the PATRAN and P/PLOT files may be transferred to a DEC VAX (or equivalent system) with access to PATRAN for PATRAN post processing. OFFSET was written in 1988 and last updated in 1989. PATRAN is a registered trademark of PDA Engineering. IBM is a registered trademark of International Business Machines Corporation. DEC VAX is a registered trademark of Digital Equipment Corporation.
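    The first of the unique features listed above, pseudo-random selection of ray origins on the Sun including limb darkening, can be illustrated independently of OFFSET. The sketch below rejection-samples points on the solar disk using a simple linear limb-darkening law; the coefficient and the sampling strategy are illustrative assumptions, not the equations used in the code.

        import numpy as np

        def sample_solar_disk(n, u=0.6, rng=None):
            """Rejection-sample ray origins on the unit solar disk weighted by a linear
            limb-darkening law I(mu) = 1 - u*(1 - mu), with mu = sqrt(1 - r**2)."""
            if rng is None:
                rng = np.random.default_rng()
            pts = np.empty((0, 2))
            while len(pts) < n:
                xy = rng.uniform(-1, 1, size=(2 * n, 2))
                r2 = np.sum(xy**2, axis=1)
                inside = r2 <= 1.0
                mu = np.sqrt(1.0 - r2[inside])
                keep = rng.uniform(0, 1, size=mu.size) < (1 - u * (1 - mu))
                pts = np.vstack([pts, xy[inside][keep]])
            return pts[:n]

        rays = sample_solar_disk(100_000, rng=np.random.default_rng(8))
        r = np.hypot(rays[:, 0], rays[:, 1])
        print("fraction of sampled rays within half the solar radius:",
              round(float(np.mean(r < 0.5)), 3))

    A uniformly bright disk would put 25% of the rays inside half the radius; limb darkening pushes that fraction slightly higher, which is the effect such sampling is meant to capture.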

  10. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data balancing normalization weights for the joint inversion of two or more data sets encourages the inversion to fit each data type equally well. A synthetic joint inversion of marine CSEM and MT data illustrates the algorithm's performance and parallel scaling on up to 480 processing cores. CSEM inversion of data from the Middle America Trench offshore Nicaragua demonstrates a real world application. The source code and MATLAB interface tools are freely available at http://mare2dem.ucsd.edu.

  11. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    PubMed

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; and 2) the measurements remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, and therefore the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with the unique strength of recovering fine details and sharp edges at low bit-rates.
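    The encoder-side idea, replacing the anti-aliasing low-pass filter with a local random binary convolution before polyphase down-sampling, is simple enough to sketch. The example below only produces the measurement image; it is not the paper's codec, it omits the standardized coding stage and the sparsity-based soft decoder entirely, and the kernel size and down-sampling factor are assumptions.

        import numpy as np
        from scipy.signal import convolve2d

        rng = np.random.default_rng(6)
        image = rng.uniform(0, 255, size=(128, 128))     # stand-in for an input image

        # Local random binary convolution kernel (0/1 entries, normalized),
        # used here in place of the conventional low-pass pre-filter
        kernel = rng.integers(0, 2, size=(4, 4)).astype(float)
        if kernel.sum() == 0:
            kernel[0, 0] = 1.0                           # guard against an all-zero draw
        kernel /= kernel.sum()

        filtered = convolve2d(image, kernel, mode="same", boundary="symm")

        # Polyphase down-sampling by 2 in each direction: the local random
        # measurements keep the original spatial configuration, i.e. still "an image"
        measurements = filtered[::2, ::2]
        print("input:", image.shape, "-> compact representation:", measurements.shape)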

  12. Particle-in-cell code library for numerical simulation of the ECR source plasma

    NASA Astrophysics Data System (ADS)

    Shirkov, G.; Alexandrov, V.; Preisendorf, V.; Shevtsov, V.; Filippov, A.; Komissarov, R.; Mironov, V.; Shirkova, E.; Strekalovsky, O.; Tokareva, N.; Tuzikov, A.; Vatulin, V.; Vasina, E.; Fomin, V.; Anisimov, A.; Veselov, R.; Golubev, A.; Grushin, S.; Povyshev, V.; Sadovoi, A.; Donskoi, E.; Nakagawa, T.; Yano, Y.

    2003-05-01

    The project "Numerical simulation and optimization of ion accumulation and production in multicharged ion sources" is funded by the International Science and Technology Center (ISTC). A summary of recent project development and the first version of a computer code library for simulation of electron-cyclotron resonance (ECR) source plasmas based on the particle-in-cell method are presented.

  13. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  14. A new software for deformation source optimization, the Bayesian Earthquake Analysis Tool (BEAT)

    NASA Astrophysics Data System (ADS)

    Vasyura-Bathke, H.; Dutta, R.; Jonsson, S.; Mai, P. M.

    2017-12-01

    Modern studies of crustal deformation and the related source estimation, including magmatic and tectonic sources, increasingly use non-linear optimization strategies to estimate geometric and/or kinematic source parameters and often consider geodetic and seismic data jointly. Bayesian inference is increasingly being used for estimating posterior distributions of deformation source model parameters, given measured/estimated/assumed data and model uncertainties. For instance, some studies consider uncertainties of a layered medium and propagate these into source parameter uncertainties, while others use informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed to efficiently explore the high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational burden of these methods is high and estimation codes are rarely made available along with the published results. Even if the codes are accessible, it is usually challenging to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in deformation source estimations, we undertook the effort of developing BEAT, a python package that comprises all the above-mentioned features in one single programming environment. The package builds on the pyrocko seismological toolbox (www.pyrocko.org), and uses the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat), and we encourage and solicit contributions to the project. Here, we present our strategy for developing BEAT and show application examples, especially the effect of including the model prediction uncertainty of the velocity model in the following source optimizations: full moment tensor, Mogi source, moderate strike-slip earthquake.
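
    BEAT itself couples pyrocko forward models with pymc3 sampling; the stand-alone toy model below is only a hedged illustration of pymc3-style Bayesian model fitting (a single hypothetical parameter with synthetic data, not an actual deformation source or any BEAT API).

      # Illustration of Bayesian model fitting with pymc3 (toy model, not BEAT).
      import numpy as np
      import pymc3 as pm

      observed = np.random.normal(2.0, 0.5, size=100)    # synthetic "measurements"

      with pm.Model():
          strength = pm.Normal('strength', mu=0.0, sigma=10.0)            # prior
          pm.Normal('obs', mu=strength, sigma=0.5, observed=observed)     # likelihood
          trace = pm.sample(1000, tune=1000, cores=1, progressbar=False,
                            return_inferencedata=False)

      print(trace['strength'].mean())                    # posterior mean estimate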

  15. Parallelization of ARC3D with Computer-Aided Tools

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; Hribar, Michelle; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    A series of efforts has been devoted to investigating methods of porting and parallelizing applications quickly and efficiently for new architectures, such as the SGI Origin 2000 and Cray T3E. This report presents the parallelization of a CFD application, ARC3D, using the computer-aided tools CAPTools. The steps of parallelizing this code and the requirements for achieving better performance are discussed. The generated parallel version achieved reasonably good performance, for example a speedup of 30 on 36 Cray T3E processors. However, this performance could not be obtained without modification of the original serial code. It is suggested that in many cases improving the serial code and performing necessary code transformations are important parts of the automated parallelization process, although user intervention in many of these parts is still necessary. Nevertheless, development and improvement of useful software tools, such as CAPTools, can help trim down many tedious parallelization details and improve the processing efficiency.

  16. Modification of LAMPF's magnet-mapping code for offsets of center coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurd, J.W.; Gomulka, S.; Merrill, F.

    1991-01-01

    One of the magnet measurements performed at LAMPF is the determination of the cylindrical harmonics of a quadrupole magnet using a rotating coil. The data are analyzed with the code HARMAL to derive the amplitudes of the harmonics. Initially, the origin of the polar coordinate system is the axis of the rotating coil. A new coordinate system is found by a simple translation of the old system such that the dipole moment in the new system is zero. The origin of this translated system is referred to as the magnetic center. Given this translation, the code calculates the coefficients of the cylindrical harmonics in the new system. The code has been modified to use an analytical calculation to determine these new coefficients. The method of calculation is described and some implications of this formulation are presented. 8 refs., 2 figs.

  17. Influence of Earthquake Parameters on Tsunami Wave Height and Inundation

    NASA Astrophysics Data System (ADS)

    Kulangara Madham Subrahmanian, D.; Sri Ganesh, J.; Venkata Ramana Murthy, M.; V, R. M.

    2014-12-01

    After the Indian Ocean Tsunami (IOT) of 26 December 2004, attempts have been made to assess the threat of tsunamis originating from different sources for different parts of India. The Andaman-Sumatra trench is segmented by transcurrent faults and by differences in the rate of subduction, which is low in the north and increases southward. A keyboard model is therefore used, with the initial deformation calculated for different strike directions and slip rates; this introduces uncertainties in the earthquake parameters. This study identifies the source location of the most destructive tsunami for the southeast coast of India and infers the influence of the earthquake parameters on tsunami wave height and travel time in the deep ocean as well as on the shelf, and on inundation at the coast. Five tsunamigenic sources in the Andaman-Sumatra trench were considered, taking into account the tectonic character of the trench described by various authors, and the modelling was carried out using the TUNAMI N2 code. The model results were validated using travel times and runup in coastal areas and by comparing the water elevation along Jason-1's satellite track; the inundation results were compared with field data. The assessment of the tsunami threat for the area south of Chennai, the metropolitan city of South India, shows that a tsunami originating in the Car Nicobar segment of the Andaman-Sumatra subduction zone can generate the most destructive tsunami. Sensitivity analysis in the modelling indicates that the fault length influences the results most significantly, with the tsunami arriving earlier and with higher amplitude; the strike angle also modifies the tsunami, followed by the amount of slip.

  18. Optimal bit allocation for hybrid scalable/multiple-description video transmission over wireless channels

    NASA Astrophysics Data System (ADS)

    Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.

    2006-01-01

    In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product-code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
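
    As a hedged illustration of the allocation step (the distortion and residual-loss curves below are toy stand-ins, not the paper's operational models), the optimal pair can be found by exhaustive search over the discrete source and channel rate sets under a total bit budget:

      # Brute-force joint source/channel rate allocation over discrete rate sets.
      from itertools import product

      source_rates = [0.25, 0.5, 1.0, 2.0]     # bits/pixel (assumed set)
      channel_rates = [1/3, 1/2, 2/3, 3/4]     # RCPC code rates (assumed set)
      budget = 1.5                             # total channel bits/pixel

      def distortion(rs):                      # toy operational D(R) curve
          return 1.0 / (1.0 + 4.0 * rs)

      def loss_prob(rc):                       # toy residual loss after channel coding
          return 0.02 + 0.3 * rc ** 3

      feasible = [(rs, rc) for rs, rc in product(source_rates, channel_rates)
                  if rs / rc <= budget]        # rs/rc = channel bits actually spent
      best = min(feasible,
                 key=lambda p: loss_prob(p[1]) + (1 - loss_prob(p[1])) * distortion(p[0]))
      print("source rate %.2f bpp, channel code rate %.2f" % best)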

  19. Energy information data base: report number codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1979-09-01

    Each report processed by the US DOE Technical Information Center is identified by a unique report number consisting of a code plus a sequential number. In most cases, the code identifies the originating installation. In some cases, it identifies a specific program or a type of publication. Listed in this publication are all codes that have been used by DOE in cataloging reports. This compilation consists of two parts. Part I is an alphabetical listing of report codes identified with the issuing installations that have used the codes. Part II is an alphabetical listing of installations identified with the codes each has used. (RWR)

  20. Sharing the Code.

    ERIC Educational Resources Information Center

    Olsen, Florence

    2003-01-01

    Colleges and universities are beginning to consider collaborating on open-source-code projects as a way to meet critical software and computing needs. Points out the attractive features of noncommercial open-source software and describes some examples in use now, especially for the creation of Web infrastructure. (SLD)

  1. Professional Practice and Innovation: Level of Agreement between Coding Sources of Percentage Total Body Surface Area Burnt (%TBSA).

    PubMed

    Watterson, Dina; Cleland, Heather; Picton, Natalie; Simpson, Pam M; Gabbe, Belinda J

    2011-03-01

    The percentage of total body surface area burnt (%TBSA) is a critical measure of burn injury severity and a key predictor of burn injury outcome. This study evaluated the level of agreement between four sources of %TBSA using 120 cases identified through the Victorian State Trauma Registry. Expert clinician, ICD-10-AM, Abbreviated Injury Scale, and burns registry coding were compared using measures of agreement. There was near-perfect agreement (weighted Kappa statistic 0.81-1) between all sources of data, suggesting that ICD-10-AM is a valid source of %TBSA and that use of ICD-10-AM codes could reduce the resources used by trauma and burns registries in capturing this information.
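
    For reference, the agreement statistic quoted above can be computed as follows (a minimal sketch with made-up category data; the study does not state the weighting scheme, so quadratic weighting is assumed here):

      # Weighted kappa between two coding sources of %TBSA categories (toy data).
      from sklearn.metrics import cohen_kappa_score

      icd10_am = [1, 2, 2, 3, 4, 1, 2, 3]    # severity categories from ICD-10-AM coding
      registry = [1, 2, 3, 3, 4, 1, 2, 3]    # categories from burns-registry coding
      kappa = cohen_kappa_score(icd10_am, registry, weights='quadratic')
      print(round(kappa, 2))                 # values of 0.81-1 indicate near-perfect agreement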

  2. RF Simulation of the 187 MHz CW Photo-RF Gun Cavity at LBNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Tong-Ming

    2008-12-01

    A 187 MHz normal conducting Photo-RF gun cavity is designed for the next generation of light sources. The cavity is capable of operating in CW mode. A gap voltage as high as 750 kV can be achieved with a 20 MV/m acceleration gradient. The original cavity optimization was conducted using the Superfish code (2D) by Staples. 104 vacuum pumping slots are added and evenly spaced over the cavity equator in order to achieve a vacuum better than 10⁻¹⁰ Torr. Two loop couplers will be used to feed RF power into the cavity. 3D simulations are necessary to study effects from the vacuum pumping slots, couplers, and possible multipactoring. The cavity geometry is optimized to minimize the power density and avoid multipactoring at the operating field level. The vacuum slot dimensions are carefully chosen in consideration of the vacuum conductance, the local power density enhancement, and the power attenuation at the getter pumps. This technical note gives a summary of 3D RF simulation results, multipactoring simulations (2D), and preliminary electromagnetic-thermal analysis using the ANSYS code.

  3. PARAVT: Parallel Voronoi tessellation code

    NASA Astrophysics Data System (ADS)

    González, R. E.

    2016-10-01

    In this study, we present a new open source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbors are widely used. There are several serial Voronoi tessellation codes; however, no open-source parallel implementation is available to handle the large number of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT computation uses the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks and includes periodic conditions. In addition, the code computes neighbor lists, the Voronoi density, the Voronoi cell volume, the density gradient for each particle, and densities on a regular grid. The code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
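
    The snippet below is not the PARAVT source; it is a serial sketch of the underlying operation using the same Qhull library (exposed through SciPy), computing a Voronoi tessellation of toy particle positions and a per-particle density from the cell volumes:

      # Serial Voronoi tessellation and density estimate (illustrative only).
      import numpy as np
      from scipy.spatial import Voronoi, ConvexHull

      points = np.random.rand(500, 3)            # toy particle positions
      vor = Voronoi(points)                      # Qhull-backed tessellation

      def cell_volume(region):
          """Volume of a bounded Voronoi cell; unbounded cells give NaN."""
          if -1 in region or len(region) == 0:
              return float('nan')
          return ConvexHull(vor.vertices[region]).volume

      volumes = np.array([cell_volume(vor.regions[vor.point_region[i]])
                          for i in range(len(points))])
      density = 1.0 / volumes                    # Voronoi density per particle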

  4. egs_brachy: a versatile and fast Monte Carlo code for brachytherapy

    NASA Astrophysics Data System (ADS)

    Chamberland, Marc J. P.; Taylor, Randle E. P.; Rogers, D. W. O.; Thomson, Rowan M.

    2016-12-01

    egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)3 voxels) and eye plaque (with (1 mm)3 voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.

  5. egs_brachy: a versatile and fast Monte Carlo code for brachytherapy.

    PubMed

    Chamberland, Marc J P; Taylor, Randle E P; Rogers, D W O; Thomson, Rowan M

    2016-12-07

    egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm) 3 voxels) and eye plaque (with (1 mm) 3 voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.

  6. Maximum a posteriori joint source/channel coding

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Gibson, Jerry D.

    1991-01-01

    A maximum a posteriori probability (MAP) approach to joint source/channel coder design is presented in this paper. This method explores a technique for designing joint source/channel codes, rather than ways of distributing bits between source coders and channel coders. For a nonideal source coder, MAP arguments are used to design a decoder which takes advantage of redundancy in the source coder output to perform error correction. Once the decoder is obtained, it is analyzed with the purpose of obtaining 'desirable properties' of the channel input sequence for improving overall system performance. Finally, an encoder design which incorporates these properties is proposed.
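
    A toy numerical instance of the MAP rule sketched above (the binary symmetric channel and the symbol prior are illustrative assumptions, not the paper's models): the decoder exploits the non-uniform prior left by a non-ideal source coder, so the MAP estimate can differ from the maximum-likelihood one.

      # MAP decoding of a 2-bit symbol using residual source redundancy.
      import numpy as np

      symbols = np.array([0, 1, 2, 3])               # 2-bit source-coder outputs
      prior = np.array([0.85, 0.05, 0.05, 0.05])     # redundancy: unequal probabilities
      eps = 0.2                                      # BSC crossover probability

      def likelihood(y, x):
          """P(y|x) for a 2-bit symbol sent over a binary symmetric channel."""
          d = bin(int(y) ^ int(x)).count('1')        # Hamming distance of the two bits
          return (eps ** d) * ((1 - eps) ** (2 - d))

      y = 1                                          # received symbol
      x_map = symbols[np.argmax([likelihood(y, x) * prior[x] for x in symbols])]
      print(x_map)                                   # 0 here, whereas ML would output 1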

  7. Simulations of linear and Hamming codes using SageMath

    NASA Astrophysics Data System (ADS)

    Timur, Tahta D.; Adzkiya, Dieky; Soleha

    2018-03-01

    Digital data transmission over a noisy channel can distort the message being transmitted. The goal of coding theory is to ensure data integrity, that is, to find out if and where noise has distorted the message and what the original message was. Data transmission consists of three stages: encoding, transmission, and decoding. Linear and Hamming codes are the codes discussed in this work; the encoding algorithms use parity-check and generator matrices, and the decoding algorithms are nearest-neighbor and syndrome decoding. We aim to show that we can simulate these processes using the SageMath software, which has built-in classes for coding theory in general and linear codes in particular. First, we consider the message as a binary vector of size k. This message is then encoded into a vector of size n using the given algorithms. A noisy channel with a particular error probability is then created, over which the transmission takes place. The last task is decoding, which corrects the received message and recovers the original whenever possible, that is, when the number of errors is smaller than or equal to the correcting radius of the code. In this paper we use two types of data for the simulations, namely vector and text data.
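
    The paper performs these steps with SageMath's built-in coding-theory classes; the plain-Python sketch below only reproduces the same encode/corrupt/decode cycle for the [7,4] Hamming code, using a generator matrix for encoding and syndrome decoding for error correction.

      # Hamming(7,4): encode, flip one bit, correct via syndrome decoding.
      import numpy as np

      G = np.array([[1,0,0,0,1,1,0],      # generator matrix (systematic form)
                    [0,1,0,0,1,0,1],
                    [0,0,1,0,0,1,1],
                    [0,0,0,1,1,1,1]])
      H = np.array([[1,1,0,1,1,0,0],      # parity-check matrix, G @ H.T = 0 (mod 2)
                    [1,0,1,1,0,1,0],
                    [0,1,1,1,0,0,1]])

      msg = np.array([1, 0, 1, 1])
      codeword = msg @ G % 2
      received = codeword.copy()
      received[2] ^= 1                    # the noisy channel flips one bit

      syndrome = received @ H.T % 2
      if syndrome.any():                  # syndrome equals the column of H at the error
          err_pos = int(np.where((H.T == syndrome).all(axis=1))[0][0])
          received[err_pos] ^= 1
      assert (received == codeword).all() # single error corrected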

  8. 15 CFR 740.7 - Computers (APP).

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 4A003. (2) Technology and software. License Exception APP authorizes exports of technology and software... programmability. (ii) Technology and source code. Technology and source code eligible for License Exception APP..., reexports and transfers (in-country) for nuclear, chemical, biological, or missile end-users and end-uses...

  9. Spread Spectrum Visual Sensor Network Resource Management Using an End-to-End Cross-Layer Design

    DTIC Science & Technology

    2011-02-01

    Coding: In this work, we use rate-compatible punctured convolutional (RCPC) codes for channel coding [11]. Using RCPC codes allows us to utilize Viterbi's... [11] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Trans. Commun., vol. 36, no. 4, pp. 389... source coding rate, a channel coding rate, and a power level to all nodes in the

  10. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work, a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using a wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original one to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme, we achieve an infinite peak signal to noise ratio (PSNR) with a higher compression ratio compared to a typical lossless method. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.
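
    A hedged sketch of the residual pipeline described above (a crude quantizer stands in for the wavelet-fractal stage and zlib stands in for the Huffman coder; neither is the paper's implementation):

      # Lossy stage + losslessly coded residual = exact reconstruction.
      import numpy as np, zlib

      original = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
      lossy = (original // 8) * 8                      # stand-in for the lossy coder
      residual = original.astype(np.int16) - lossy     # small-amplitude residual image

      compressed = zlib.compress(residual.tobytes())   # stand-in for Huffman coding
      decoded = np.frombuffer(zlib.decompress(compressed),
                              dtype=np.int16).reshape(64, 64)
      restored = (lossy + decoded).astype(np.uint8)
      assert np.array_equal(restored, original)        # lossless: "infinite" PSNR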

  11. Code CUGEL: A code to unfold Ge(Li) spectrometer polyenergetic gamma photon experimental distributions

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Born, U.

    1970-01-01

    A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma-photon experimental distributions from lithium-drifted germanium [Ge(Li)] semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.

  12. Power optimization of wireless media systems with space-time block codes.

    PubMed

    Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran

    2004-07-01

    We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission of multiple-transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.

  13. 76 FR 64931 - Building Energy Codes Cost Analysis

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-19

    ... Cost Analysis published in the Federal Register on September 13, 2011. 76 FR 56413. The original...-0121. Telephone: (202) 586-2945. Please submit one signed original paper copy. Hand Delivery/Courier...., 6th Floor, Washington, DC 20024. Please submit one signed original paper copy. Docket: For access to...

  14. Top ten reasons to register your code with the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; DuPrie, Kimberly; Berriman, G. Bruce; Mink, Jessica D.; Nemiroff, Robert J.; Robitaille, Thomas; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Teuben, Peter J.; Wallin, John F.; Warmels, Rein

    2017-01-01

    With 1,400 codes, the Astrophysics Source Code Library (ASCL, ascl.net) is the largest indexed resource for codes used in astronomy research in existence. This free online registry was established in 1999, is indexed by Web of Science and ADS, and is citable, with citations to its entries tracked by ADS. Registering your code with the ASCL is easy with our online submissions system. Making your software available for examination shows confidence in your research and makes your research more transparent, reproducible, and falsifiable. ASCL registration allows your software to be cited on its own merits and provides a citation that is trackable and accepted by all astronomy journals and journals such as Science and Nature. Registration also allows others to find your code more easily. This presentation covers the benefits of registering astronomy research software with the ASCL.

  15. Effect Coding as a Mechanism for Improving the Accuracy of Measuring Students Who Self-Identify with More than One Race

    ERIC Educational Resources Information Center

    Mayhew, Matthew J.; Simonoff, Jeffrey S.

    2015-01-01

    The purpose of this paper is to describe effect coding as an alternative quantitative practice for analyzing and interpreting categorical, multi-raced independent variables in higher education research. Not only may effect coding enable researchers to get closer to respondents' original intentions, it allows for more accurate analyses of all race…

  16. Performance Analysis and Optimization on the UCLA Parallel Atmospheric General Circulation Model Code

    NASA Technical Reports Server (NTRS)

    Lou, John; Ferraro, Robert; Farrara, John; Mechoso, Carlos

    1996-01-01

    An analysis is presented of several factors influencing the performance of a parallel implementation of the UCLA atmospheric general circulation model (AGCM) on massively parallel computer systems. Several modifications to the original parallel AGCM code, aimed at improving its numerical efficiency, interprocessor communication cost, and load balance, as well as issues affecting single-node code performance, are discussed.

  17. Numerical Analysis of Dusty-Gas Flows

    NASA Astrophysics Data System (ADS)

    Saito, T.

    2002-02-01

    This paper presents the development of a numerical code for simulating unsteady dusty-gas flows including shock and rarefaction waves. The numerical results obtained for a shock tube problem are used for validating the accuracy and performance of the code. The code is then extended for simulating two-dimensional problems. Since the interactions between the gas and particle phases are calculated with the operator splitting technique, we can choose numerical schemes independently for the different phases. A semi-analytical method is developed for the dust phase, while the TVD scheme of Harten and Yee is chosen for the gas phase. Throughout this study, computations are carried out on an SGI Origin2000, a parallel computer with multiple RISC-based processors. The efficient use of the parallel computer system is an important issue, and the code implementation on the Origin2000 is also described. Flow profiles of both the gas and solid particles behind the steady shock wave are calculated by integrating the steady conservation equations. The good agreement between the pseudo-stationary solutions and those from the current numerical code validates the numerical approach and the actual coding. The pseudo-stationary shock profiles can also be used as initial conditions for unsteady multidimensional simulations.

  18. A graphically oriented specification language for automatic code generation. GRASP/Ada: A Graphical Representation of Algorithms, Structure, and Processes for Ada, phase 1

    NASA Technical Reports Server (NTRS)

    Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.

    1989-01-01

    The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.

  19. Numerical analysis of the spatial nonuniformity in a Cs-seeded H⁻ ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takado, N.; Hanatani, J.; Mizuno, T.

    The H⁻ ion production and transport processes are numerically simulated to clarify the origin of H⁻ beam nonuniformity. The three-dimensional transport code using the Monte Carlo method has been applied to H⁰ atoms and H⁻ ions in the large 'JAERI 10A negative ion source' under the Cs-seeded condition, in which negative ions are dominantly produced by the surface production process. The results show that a large fraction of hydrogen atoms is produced in the region with high electron temperature. This leads to a spatial nonuniformity of the H⁰ atom flux to the plasma grid and the resultant H⁻ ion surface production. In addition, most surface-produced H⁻ ions are extracted even through the high-Tₑ region without destruction. These results indicate a correlation between the production process of the H⁻ ion and the spatial nonuniformity of the H⁻ ion beam.

  20. Cosmic ray electrons, positrons and the synchrotron emission of the Galaxy: consistent analysis and implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardo, Giuseppe Di; Evoli, Carmelo; Gaggero, Daniele

    2013-03-01

    A multichannel analysis of cosmic ray electron and positron spectra and of the diffuse synchrotron emission of the Galaxy is performed by using the DRAGON code. This study is aimed at probing the interstellar electron source spectrum down to E ≲ 1 GeV and at constraining several propagation parameters. We find that above 4 GeV the e⁻ source spectrum is compatible with a power-law of index ∼ 2.5. Below 4 GeV instead it must be significantly suppressed and the total lepton spectrum is dominated by secondary particles. The positron spectrum and fraction measured below a few GeV are consistently reproduced only within low reacceleration models. We also constrain the scale-height z_t of the cosmic-ray distribution using three independent (and, in two cases, original) arguments, showing that values of z_t ≲ 2 kpc are excluded. This result may have strong implications for particle dark matter searches.

  1. Some practical universal noiseless coding techniques

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1979-01-01

    Some practical adaptive techniques for the efficient noiseless coding of a broad class of data sources are developed and analyzed. Algorithms are designed for coding discrete memoryless sources which have a known symbol probability ordering but unknown probability values. These algorithms are broadly applicable to practical problems because most real data sources can be simply transformed into this form by appropriate preprocessing. The algorithms have exhibited performance only slightly above the entropy when applied to real data whose characteristics are stationary over the measurement span. Performance considerably below a measured average data entropy may be observed when the data characteristics change over the measurement span.
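
    In the spirit of the adaptive scheme described above (not the report's exact algorithms), a coder can choose, for each block, the Rice/Golomb parameter that minimizes the coded length and then emit each sample as a unary quotient plus a k-bit remainder:

      # Block-adaptive Rice coding of small nonnegative integers (sketch).
      def rice_encode_block(samples, k):
          """Unary-coded quotient followed by a k-bit binary remainder, per sample."""
          bits = []
          for s in samples:
              q, r = divmod(s, 1 << k)
              bits.append('1' * q + '0')                   # unary part
              if k:
                  bits.append(format(r, 'b').zfill(k))     # k-bit remainder
          return ''.join(bits)

      def adaptive_encode(samples, k_options=range(8)):
          """Pick the code option (parameter k) giving the shortest block encoding."""
          best_k = min(k_options, key=lambda k: len(rice_encode_block(samples, k)))
          return best_k, rice_encode_block(samples, best_k)

      block = [0, 1, 0, 3, 2, 0, 1, 5, 0, 2]       # e.g. prediction residuals
      k, bitstream = adaptive_encode(block)
      print(k, len(bitstream))                     # chosen option and coded length in bits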

  2. An Object-Oriented Computer Code for Aircraft Engine Weight Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Naylor, Bret A.

    2009-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept among several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides component flow data such as airflows, temperatures, and pressures that are required for sizing the components and for the weight calculations. The tighter integration between NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It would also facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case.

  3. 15 CFR 740.7 - Computers (APP).

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 4A003. (2) Technology and software. License Exception APP authorizes exports of technology and software... License Exception. (2) Access and release restrictions. (i)[Reserved] (ii) Technology and source code. Technology and source code eligible for License Exception APP may not be released to nationals of Cuba, Iran...

  4. A Model for the Sources of the Slow Solar Wind

    NASA Technical Reports Server (NTRS)

    Antiochos, Spiro K.; Mikic, Z.; Titov, V. S.; Lionello, R.; Linker, J. A.

    2010-01-01

    Models for the origin of the slow solar wind must account for two seemingly contradictory observations: The slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind has large angular width, up to approximately 60 degrees, suggesting that its source extends far from the open-closed boundary. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We compute analytically the topology of an open-field corridor and show that it produces a quasi-separatrix layer in the heliosphere that extends to angles far from the heliospheric current sheet. We then use an MHD code and MDI/SOHO observations of the photospheric magnetic field to calculate numerically, with high spatial resolution, the quasi-steady solar wind and magnetic field for a time period preceding the August 1, 2008 total solar eclipse. Our numerical results imply that, at least for this time period, a web of separatrices (which we term an S-web) forms with sufficient density and extent in the heliosphere to account for the observed properties of the slow wind. We discuss the implications of our S-web model for the structure and dynamics of the corona and heliosphere, and propose further tests of the model.

  5. A Model for the Sources of the Slow Solar Wind

    NASA Astrophysics Data System (ADS)

    Antiochos, S. K.; Mikić, Z.; Titov, V. S.; Lionello, R.; Linker, J. A.

    2011-04-01

    Models for the origin of the slow solar wind must account for two seemingly contradictory observations: the slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind also has large angular width, up to ~60°, suggesting that its source extends far from the open-closed boundary. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We compute analytically the topology of an open-field corridor and show that it produces a quasi-separatrix layer in the heliosphere that extends to angles far from the heliospheric current sheet. We then use an MHD code and MDI/SOHO observations of the photospheric magnetic field to calculate numerically, with high spatial resolution, the quasi-steady solar wind, and magnetic field for a time period preceding the 2008 August 1 total solar eclipse. Our numerical results imply that, at least for this time period, a web of separatrices (which we term an S-web) forms with sufficient density and extent in the heliosphere to account for the observed properties of the slow wind. We discuss the implications of our S-web model for the structure and dynamics of the corona and heliosphere and propose further tests of the model.

  6. Computing travel time when the exact address is unknown: a comparison of point and polygon ZIP code approximation methods.

    PubMed

    Berke, Ethan M; Shi, Xun

    2009-04-29

    Travel time is an important metric of geographic access to health care. We compared strategies for estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to the nearest cancer centers by using: 1) geometric centroids of ZIP code polygons as origins, 2) population centroids as origins, 3) service area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code, 4) service area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate the estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to the geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.
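
    A toy version of the comparison (the coordinates are made up for illustration, and straight-line distance stands in for the drive-time computation used in the study):

      # Distance to a center from a ZIP code's geometric vs. population centroid.
      from math import radians, sin, cos, asin, sqrt

      def haversine_km(p, q):
          """Great-circle distance between two (lat, lon) points in kilometres."""
          (lat1, lon1), (lat2, lon2) = p, q
          dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
          a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
          return 2 * 6371.0 * asin(sqrt(a))

      center = (43.70, -72.29)                  # hypothetical cancer-center location
      geometric_centroid = (43.55, -72.10)      # hypothetical ZIP-polygon centroid
      population_centroid = (43.62, -72.20)     # hypothetical population-weighted centroid

      print(haversine_km(geometric_centroid, center))
      print(haversine_km(population_centroid, center))   # typically nearer the true value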

  7. Conversion of HSPF Legacy Model to a Platform-Independent, Open-Source Language

    NASA Astrophysics Data System (ADS)

    Heaphy, R. T.; Burke, M. P.; Love, J. T.

    2015-12-01

    Since its initial development over 30 years ago, the Hydrologic Simulation Program - FORTRAN (HSPF) model has been used worldwide to support water quality planning and management. In the United States, HSPF receives widespread endorsement as a regulatory tool at all levels of government and is a core component of the EPA's Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) system, which was developed to support nationwide Total Maximum Daily Load (TMDL) analysis. However, the model's legacy code and data management systems have limitations in their ability to integrate with modern software and hardware and to leverage parallel computing, which have left voids in optimization, pre-, and post-processing tools. Advances in technology and in our scientific understanding of environmental processes that have occurred over the last 30 years mandate that upgrades be made to HSPF to allow it to evolve and continue to be a premiere tool for water resource planners. This work aims to mitigate the challenges currently facing HSPF through two primary tasks: (1) convert the code to a modern, widely accepted, open-source, high-performance computing (HPC) language; and (2) convert the model input and output files to a modern, widely accepted, open-source data model, library, and binary file format. Python was chosen as the new language for the code conversion. It is an interpreted, object-oriented language with dynamic semantics that has become one of the most popular open-source languages. While Python code execution can be slow compared to compiled, statically typed programming languages, such as C and FORTRAN, the integration of Numba (a just-in-time specializing compiler) has allowed this challenge to be overcome. For the legacy model data management conversion, HDF5 was chosen to store the model input and output. The code conversion for HSPF's hydrologic and hydraulic modules has been completed. The converted code has been tested against HSPF's suite of "test" runs and has shown good agreement and similar execution times when using the Numba compiler. Continued verification of the accuracy of the converted code against more complex legacy applications, and improvement of execution times by incorporating an intelligent network change detection tool, are currently underway, and preliminary results will be presented.
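
    The snippet below is illustrative only, not the converted HSPF code: a Numba-jitted storage-routing loop of the kind the hydrologic modules contain, with the output written to HDF5 in place of the legacy binary files (all parameter values are made up).

      # Numba-accelerated toy water-balance loop with HDF5 output.
      import numpy as np, h5py
      from numba import njit

      @njit
      def storage_routing(precip, pet, capacity):
          """Very simplified soil-moisture accounting returning runoff per timestep."""
          storage = 0.0
          runoff = np.zeros(precip.size)
          for t in range(precip.size):
              storage = max(storage + precip[t] - pet[t], 0.0)
              if storage > capacity:
                  runoff[t] = storage - capacity
                  storage = capacity
          return runoff

      precip = np.random.gamma(0.5, 2.0, 10000)
      pet = np.full(10000, 0.3)
      runoff = storage_routing(precip, pet, 25.0)

      with h5py.File('results.h5', 'w') as f:        # HDF5 replaces the legacy files
          f.create_dataset('RESULTS/runoff', data=runoff)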

  8. Origin of sphinx, a young chimeric RNA gene in Drosophila melanogaster

    PubMed Central

    Wang, Wen; Brunet, Frédéric G.; Nevo, Eviatar; Long, Manyuan

    2002-01-01

    Non-protein-coding RNA genes play an important role in various biological processes. How new RNA genes originated and whether this process is controlled by similar evolutionary mechanisms for the origin of protein-coding genes remains unclear. A young chimeric RNA gene that we term sphinx (spx) provides the first insight into the early stage of evolution of RNA genes. spx originated as an insertion of a retroposed sequence of the ATP synthase chain F gene at the cytological region 60DB since the divergence of Drosophila melanogaster from its sibling species 2–3 million years ago. This retrosequence, which is located at 102F on the fourth chromosome, recruited a nearby exon and intron, thereby evolving a chimeric gene structure. This molecular process suggests that the mechanism of exon shuffling, which can generate protein-coding genes, also plays a role in the origin of RNA genes. The subsequent evolutionary process of spx has been associated with a high nucleotide substitution rate, possibly driven by a continuous positive Darwinian selection for a novel function, as is shown in its sex- and development-specific alternative splicing. To test whether spx has adapted to different environments, we investigated its population genetic structure in the unique “Evolution Canyon” in Israel, revealing a similar haplotype structure in spx, and thus similar evolutionary forces operating on spx between environments. PMID:11904380

  9. Enhancements to the MCNP6 background source

    DOE PAGES

    McMath, Garrett E.; McKinney, Gregg W.

    2015-10-19

    The particle transport code MCNP has been used to produce a background radiation data file on a worldwide grid that can easily be sampled as a source in the code. Location-dependent cosmic showers were modeled by Monte Carlo methods to produce the resulting neutron and photon background flux at 2054 locations around Earth. An improved galactic-cosmic-ray feature was used to model the source term, as well as data from multiple sources to model the transport environment through atmosphere, soil, and seawater. A new elevation scaling feature was also added to the code to increase the accuracy of the cosmic neutron background for user locations with off-grid elevations. Furthermore, benchmarking has shown the neutron integral flux values to be within experimental error.

  10. On the optimality of code options for a universal noiseless coder

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner

    1991-01-01

    A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable-length coding algorithms. Custom VLSI coder and decoder modules capable of processing over 20 million samples per second are currently under development. The first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition; the other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set at specified symbol entropy values. Simulation results are obtained on actual aerial imagery, and they confirm the optimality of the scheme. On sources having Gaussian or Poisson distributions, coder performance is also projected through analysis and simulation.

  11. Toward Intelligent Software Defect Detection

    NASA Technical Reports Server (NTRS)

    Benson, Markland J.

    2011-01-01

    Source code level software defect detection has gone from state of the art to a software engineering best practice. Automated code analysis tools streamline many of the aspects of formal code inspections but have the drawback of being difficult to construct and either prone to false positives or severely limited in the set of defects that can be detected. Machine learning technology provides the promise of learning software defects by example, easing construction of detectors and broadening the range of defects that can be found. Pinpointing software defects with the same level of granularity as prominent source code analysis tools distinguishes this research from past efforts, which focused on analyzing software engineering metrics data with granularity limited to that of a particular function rather than a line of code.

  12. Memory-Efficient Onboard Rock Segmentation

    NASA Technical Reports Server (NTRS)

    Burl, Michael C.; Thompson, David R.; Bornstein, Benjamin J.; deGranville, Charles K.

    2013-01-01

    Rockster-MER is an autonomous perception capability that was uploaded to the Mars Exploration Rover Opportunity in December 2009. This software provides the vision front end for a larger software system known as AEGIS (Autonomous Exploration for Gathering Increased Science), which was recently named 2011 NASA Software of the Year. As the first step in AEGIS, Rockster-MER analyzes an image captured by the rover, and detects and automatically identifies the boundary contours of rocks and regions of outcrop present in the scene. This initial segmentation step reduces the data volume from millions of pixels into hundreds (or fewer) of rock contours. Subsequent stages of AEGIS then prioritize the best rocks according to scientist-defined preferences and take high-resolution, follow-up observations. Rockster-MER has performed robustly from the outset on the Mars surface under challenging conditions. Rockster-MER is a specially adapted, embedded version of the original Rockster algorithm ("Rock Segmentation Through Edge Regrouping," (NPO-44417) Software Tech Briefs, September 2008, p. 25). Although the new version performs the same basic task as the original code, the software has been (1) significantly upgraded to overcome the severe onboard resource limitations (CPU, memory, power, time) and (2) "bulletproofed" through code reviews and extensive testing and profiling to avoid the occurrence of faults. Because of the limited computational power of the RAD6000 flight processor on Opportunity (roughly two orders of magnitude slower than a modern workstation), the algorithm was heavily tuned to improve its speed. Several functional elements of the original algorithm were removed as a result of an extensive cost/benefit analysis conducted on a large set of archived rover images. The algorithm was also required to operate below a stringent 4 MB high-water memory ceiling; hence, numerous tricks and strategies were introduced to reduce the memory footprint. Local filtering operations were re-coded to operate on horizontal data stripes across the image. Data types were reduced to smaller sizes where possible. Binary-valued intermediate results were squeezed into a more compact, one-bit-per-pixel representation through bit packing and bit manipulation macros. An estimated 16-fold reduction in memory footprint relative to the original Rockster algorithm was achieved. The resulting memory footprint is less than four times the base image size. Also, memory allocation calls were modified to draw from a static pool and consolidated to reduce memory management overhead and fragmentation. Rockster-MER has now been run onboard Opportunity numerous times as part of AEGIS with exceptional performance. Sample results are available on the AEGIS website at http://aegis.jpl.nasa.gov.
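
    A small, generic illustration of the bit-packing strategy mentioned above (plain NumPy, not the flight code): a binary intermediate mask is stored one bit per pixel instead of one byte per pixel, an 8x reduction for that buffer.

      # Pack a boolean mask into a 1-bit-per-pixel buffer and restore it.
      import numpy as np

      mask = np.random.rand(512, 512) > 0.5            # boolean intermediate result
      packed = np.packbits(mask)                       # 1 bit per pixel
      restored = np.unpackbits(packed)[:mask.size].reshape(mask.shape).astype(bool)

      assert np.array_equal(mask, restored)
      print(mask.nbytes, '->', packed.nbytes, 'bytes') # roughly an 8x reduction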

  13. Open-source framework for documentation of scientific software written on MATLAB-compatible programming languages

    NASA Astrophysics Data System (ADS)

    Konnik, Mikhail V.; Welsh, James

    2012-09-01

    Numerical simulators for adaptive optics systems have become an essential tool for the research and development of future advanced astronomical instruments. However, the growing code base of a numerical simulator makes it difficult to continue to support the code itself. The problem of adequately documenting astronomical software for adaptive optics simulators can complicate development, since the documentation must contain up-to-date schemes and mathematical descriptions of what is implemented in the software code. Although most modern programming environments like MATLAB or Octave have built-in documentation abilities, they are often insufficient for the description of a typical adaptive optics simulator code. This paper describes a general cross-platform framework for the documentation of scientific software using open-source tools such as LATEX, mercurial, Doxygen, and Perl. Using a Perl script that translates the comments of MATLAB M-files into C-like comments, one can use Doxygen to generate and update the documentation for the scientific source code. The documentation generated by this framework contains the current code description with mathematical formulas, images, and bibliographical references. A detailed description of the framework components is presented, as well as guidelines for the framework deployment. Examples of the code documentation for the scripts and functions of a MATLAB-based adaptive optics simulator are provided.
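
    The framework's comment filter is a Perl script; the Python sketch below only illustrates the idea of rewriting MATLAB '%' comment lines as C-like '//' comments so that Doxygen can parse them (real M-files would need more careful handling of strings, '%%' section markers, and inline comments).

      # Minimal MATLAB-to-C-like comment rewriter (illustrative, not the framework's filter).
      import re

      def matlab_comments_to_c(source):
          out = []
          for line in source.splitlines():
              # a leading '%' (or '%%') comment marker becomes '//'
              out.append(re.sub(r'^(\s*)%+\s?', r'\1// ', line))
          return '\n'.join(out)

      m_code = """% ADD  Sum of two numbers.
      %   C = ADD(A, B) returns A + B.
      function c = add(a, b)
      c = a + b;
      end"""
      print(matlab_comments_to_c(m_code))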

  14. Supporting Source Code Comprehension during Software Evolution and Maintenance

    ERIC Educational Resources Information Center

    Alhindawi, Nouh

    2013-01-01

    This dissertation addresses the problems of program comprehension to support the evolution of large-scale software systems. The research concerns how software engineers locate features and concepts, and categorize changes, within very large bodies of source code and their versioned histories. More specifically, advanced Information…

  15. Automating RPM Creation from a Source Code Repository

    DTIC Science & Technology

    2012-02-01

    apps/usr --with-libpq=/apps/postgres make rm -rf $RPM_BUILD_ROOT umask 0077 mkdir -p $RPM_BUILD_ROOT/usr/local/bin mkdir -p $RPM_BUILD_ROOT...from a source code repository. %pre %prep %setup %build ./autogen.sh ; ./configure --with-db=/apps/db --with-libpq=/apps/postgres make

  16. Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.

    PubMed

    Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang

    2017-11-01

    Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features into binary codes, the original Euclidean distance is approximated via the Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to directly preserve the manifold structure by hashing. In particular, existing methods first need to build the locally linear embedding in the original feature space and then quantize such an embedding to binary codes. Such two-step coding is problematic and suboptimal. Besides, the off-line learning is extremely time- and memory-consuming, as it needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locally linear embedding hashing (DLLH), which addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationships of data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, have shown superior performance of the proposed DLLH over state-of-the-art approaches.

  17. Heavy metals in fish from the Red Sea, Arabian Sea, and Indian Ocean: effect of origin, fish species and size and correlation among the metals.

    PubMed

    Obaidat, Mohammad M; Massadeh, Adnan M; Al-Athamneh, Ahmad M; Jaradat, Qasem M

    2015-04-01

    This study determined the levels of As, Cu, Pb, and Cd in fish from the Red Sea, Arabian Sea, and Indian Ocean by graphite furnace atomic absorption spectrophotometry. Metal levels were compared with international standards. The differences among fish types and origins, the relationships among the metals, and the correlation between metal levels and fish size were statistically tested. Fish type and origin significantly affected the levels. None of the fish contained As, Cu, or Pb above the FAO and EU codes. However, Cd exceeded the Jordanian, FAO, and EC codes in fish from all three origins. As and Cd correlated positively with each other in Arabian Sea fish. As and Pb correlated negatively with fish size, while Cu and Cd did not. This study indicates that Cd is common in fish from the three origins regardless of fish size. This warrants continuous monitoring of heavy metals, especially Cd, in internationally traded fish.

  18. 14 CFR 217.10 - Instructions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... and the other pertaining to on-flight markets. For example, the routing (A-B-C-D) consists of three..., Singapore A-3—Airport code Origin A-4—Airport code Destination A-5—Service class (mark an X) F G L P Q By aircraft type— B-1—Aircraft type code B-2—Revenue aircraft departures B-3—Revenue passengers transported B...

  19. 14 CFR 217.10 - Instructions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and the other pertaining to on-flight markets. For example, the routing (A-B-C-D) consists of three..., Singapore A-3—Airport code Origin A-4—Airport code Destination A-5—Service class (mark an X) F G L P Q By aircraft type— B-1—Aircraft type code B-2—Revenue aircraft departures B-3—Revenue passengers transported B...

  20. 14 CFR 217.10 - Instructions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... and the other pertaining to on-flight markets. For example, the routing (A-B-C-D) consists of three..., Singapore A-3—Airport code Origin A-4—Airport code Destination A-5—Service class (mark an X) F G L P Q By aircraft type— B-1—Aircraft type code B-2—Revenue aircraft departures B-3—Revenue passengers transported B...

  1. 39 CFR 121.2 - Periodicals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accepted before the established and published day-zero Critical Entry Time at origin, where the origin P&DC... is the sum of the applicable (1-to-3-day) First-Class Mail service standard plus one day, for each 3-digit ZIP Code origin-destination pair for which Periodicals are accepted before the day zero Critical...

  2. An approach to the origin of self-replicating system. I - Intermolecular interactions

    NASA Technical Reports Server (NTRS)

    Macelroy, R. D.; Coeckelenbergh, Y.; Rein, R.

    1978-01-01

    The present paper deals with the characteristics and potentialities of a recently developed computer-based molecular modeling system. Some characteristics of current coding systems are examined and are extrapolated to the apparent requirements of primitive prebiological coding systems.

  3. Topics in quantum cryptography, quantum error correction, and channel simulation

    NASA Astrophysics Data System (ADS)

    Luo, Zhicheng

    In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing the non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret key assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement assisted quantum communication capacity. This formula provides a new family protocol, the private father protocol, under the resource inequality framework that includes the private classical communication without the assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information at the receiver. Our main theorem has two important corollaries: rate-distortion theory with quantum side information and common randomness distillation. Simple proofs of achievability of classical multi-terminal source coding problems can be made via a unified approach using the channel simulation theorem as building blocks. The fully quantum generalization of the problem is also conjectured with outer and inner bounds on the achievable rate pairs.

  4. Rubus: A compiler for seamless and extensible parallelism.

    PubMed

    Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special-purpose processing unit called the Graphics Processing Unit (GPU), originally designed for 2D/3D games, is now available for general-purpose use in computers and mobile devices. However, traditional programming languages, which were designed for machines with single-core CPUs, cannot efficiently utilize the parallelism available on multi-core processors. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent on code optimization. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without requiring a programmer's expertise in parallel programming. For five different benchmarks, Rubus achieved an average speedup of 34.54 times over Java on a basic GPU having only 96 cores; for a matrix multiplication benchmark, it achieved an average execution speedup of 84 times on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program.

  5. Rubus: A compiler for seamless and extensible parallelism

    PubMed Central

    Adnan, Muhammad; Aslam, Faisal; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special-purpose processing unit called the Graphics Processing Unit (GPU), originally designed for 2D/3D games, is now available for general-purpose use in computers and mobile devices. However, traditional programming languages, which were designed for machines with single-core CPUs, cannot efficiently utilize the parallelism available on multi-core processors. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent on code optimization. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without requiring a programmer's expertise in parallel programming. For five different benchmarks, Rubus achieved an average speedup of 34.54 times over Java on a basic GPU having only 96 cores; for a matrix multiplication benchmark, it achieved an average execution speedup of 84 times on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program. PMID:29211758

  6. Parallel community climate model: Description and user's guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drake, J.B.; Flanery, R.E.; Semeraro, B.D.

    This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.
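
    As a reading aid only, the patch decomposition described above can be sketched as follows; the function name and the regular block partition are illustrative assumptions, not PCCM2 source code.

      # Sketch: split a global latitude-longitude grid into rectangular
      # geographical patches and assign each processor one patch, so that
      # column physics needs only processor-local data (illustration only).
      def decompose(nlat, nlon, prow, pcol):
          """Return {processor_rank: list of (lat, lon) grid indices it owns}."""
          owner = {p: [] for p in range(prow * pcol)}
          for i in range(nlat):
              for j in range(nlon):
                  pi = i * prow // nlat      # processor row owning latitude i
                  pj = j * pcol // nlon      # processor column owning longitude j
                  owner[pi * pcol + pj].append((i, j))
          return owner

      patches = decompose(nlat=64, nlon=128, prow=8, pcol=16)   # 128 processors
      print(len(patches), len(patches[0]))                      # 128 patches, 64 points each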

  7. An adaptable binary entropy coder

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

    We present a novel entropy coding technique which is based on recursive interleaving of variable-to-variable length binary source codes. We discuss code design and performance estimation methods, as well as practical encoding and decoding algorithms.

  8. THE SMALL BODY GEOPHYSICAL ANALYSIS TOOL

    NASA Astrophysics Data System (ADS)

    Bercovici, Benjamin; McMahon, Jay

    2017-10-01

    The Small Body Geophysical Analysis Tool (SBGAT) that we are developing aims at providing scientists and mission designers with a comprehensive, easy-to-use, open-source analysis tool. SBGAT is meant for seamless generation of valuable simulated data originating from small-body shape models, combined with advanced shape-modification capabilities. The current status of SBGAT is as follows. The modular software architecture that was specified in the original SBGAT proposal was implemented in the form of two distinct packages: a dynamic library, SBGAT Core, containing the data structure and algorithm backbone of SBGAT, and SBGAT Gui, which wraps the former inside a VTK/Qt user interface to facilitate user/data interaction. This modular development facilitates maintenance and addition of new features. Note that SBGAT Core can be utilized independently from SBGAT Gui. SBGAT is presently hosted in a public GitHub repository owned by SBGAT's main developer, accessible at https://github.com/bbercovici/SBGAT. Along with the commented code, one can find the code documentation at https://bbercovici.github.io/sbgat-doc/index.html; this documentation is constantly updated to reflect new functionalities. SBGAT's user's manual is available at https://github.com/bbercovici/SBGAT/wiki. This document contains a comprehensive tutorial indicating how to retrieve, compile and run SBGAT from scratch. Some of the upcoming development goals are listed hereafter. First, SBGAT's dynamics module will be extended: the PGM algorithm is the only type of analysis method currently implemented. Future work will therefore consist of broadening SBGAT's capabilities with the spherical harmonics expansion of the gravity field and the calculation of YORP coefficients. Second, synthetic measurements will soon be available within SBGAT: the software should be able to generate synthetic observations of different types (radar, lightcurve, point clouds, ...) from the shape model currently manipulated. Finally, shape interaction capabilities will be added to SBGAT Gui, which will be augmented with these functionalities using built-in VTK interaction methods.

  9. A DCM study of spectral asymmetries in feedforward and feedback connections between visual areas V1 and V4 in the monkey.

    PubMed

    Bastos, A M; Litvak, V; Moran, R; Bosman, C A; Fries, P; Friston, K J

    2015-03-01

    This paper reports a dynamic causal modeling study of electrocorticographic (ECoG) data that addresses functional asymmetries between forward and backward connections in the visual cortical hierarchy. Specifically, we ask whether forward connections employ gamma-band frequencies, while backward connections preferentially use lower (beta-band) frequencies. We addressed this question by modeling empirical cross spectra using a neural mass model equipped with superficial and deep pyramidal cell populations-that model the source of forward and backward connections, respectively. This enabled us to reconstruct the transfer functions and associated spectra of specific subpopulations within cortical sources. We first established that Bayesian model comparison was able to discriminate between forward and backward connections, defined in terms of their cells of origin. We then confirmed that model selection was able to identify extrastriate (V4) sources as being hierarchically higher than early visual (V1) sources. Finally, an examination of the auto spectra and transfer functions associated with superficial and deep pyramidal cells confirmed that forward connections employed predominantly higher (gamma) frequencies, while backward connections were mediated by lower (alpha/beta) frequencies. We discuss these findings in relation to current views about alpha, beta, and gamma oscillations and predictive coding in the brain. Copyright © 2015. Published by Elsevier Inc.

  10. Numerical uncertainty in computational engineering and physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemez, Francois M

    2009-01-01

    Obtaining a solution that approximates ordinary or partial differential equations on a computational mesh or grid does not necessarily mean that the solution is accurate or even 'correct'. Unfortunately assessing the quality of discrete solutions by questioning the role played by spatial and temporal discretizations generally comes as a distant third to test-analysis comparison and model calibration. This publication is contributed to raise awareness of the fact that discrete solutions introduce numerical uncertainty. This uncertainty may, in some cases, overwhelm in complexity and magnitude other sources of uncertainty that include experimental variability, parametric uncertainty and modeling assumptions. The concepts of consistency, convergence and truncation error are overviewed to explain the articulation between the exact solution of continuous equations, the solution of modified equations and discrete solutions computed by a code. The current state-of-the-practice of code and solution verification activities is discussed. An example in the discipline of hydro-dynamics illustrates the significant effect that meshing can have on the quality of code predictions. A simple method is proposed to derive bounds of solution uncertainty in cases where the exact solution of the continuous equations, or its modified equations, is unknown. It is argued that numerical uncertainty originating from mesh discretization should always be quantified and accounted for in the overall uncertainty 'budget' that supports decision-making for applications in computational physics and engineering.

  11. PHITS simulations of the Matroshka experiment

    NASA Astrophysics Data System (ADS)

    Gustafsson, Katarina; Sihver, Lembit; Mancusi, Davide; Sato, Tatsuhiko

    In order to design safer space exploration, radiation exposure estimates are necessary; the radiation environment in space is very different from the one on Earth, and it is harmful to humans and to electronic equipment. The threat originates from two sources: Galactic Cosmic Rays and Solar Particle Events. It is important to understand what happens when these particles strike matter such as space vehicle walls, human organs and electronics. We are therefore developing a tool able to estimate the radiation exposure to both humans and electronics. The tool will be based on PHITS, the Particle and Heavy-Ion Transport code System, a three-dimensional Monte Carlo code which can calculate interactions and transport of particles and heavy ions in matter. PHITS is developed by a collaboration between RIST (Research Organization for Information Science & Technology), JAEA (Japan Atomic Energy Agency), KEK (High Energy Accelerator Research Organization), Japan and Chalmers University of Technology, Sweden. A method for benchmarking and developing the code is to simulate experiments performed in space or on Earth. We have carried out simulations of the Matroshka experiment, which focuses on determining the radiation load on astronauts inside and outside the International Space Station by using a torso of a tissue-equivalent human phantom, filled with active and passive detectors located in the positions of critical tissues and organs. We will present the status and results of our simulations.

  12. Plasma separation process. Betacell (BCELL) code, user's manual

    NASA Astrophysics Data System (ADS)

    Taherzadeh, M.

    1987-11-01

    The emergence of clearly defined applications for small or large amounts of long-life, reliable power has given the design and production of betavoltaic systems a new life. Moreover, because of the availability of the Plasma Separation Process (PSP) at TRW, it is now possible to separate the most desirable radioisotopes for betacell power generating devices. A computer code, named BCELL, has been developed to model the betavoltaic concept by utilizing the available up-to-date source/cell parameters. In this program, attempts have been made to determine the betacell energy device maximum efficiency, degradation due to the emitting source radiation, and source/cell lifetime power reduction processes. Additionally, comparison is made between the Schottky and PN junction devices for betacell battery design purposes. Certain computer code runs have been made to determine the JV distribution function and the upper limit of the betacell generated power for specified energy sources. A Ni beta-emitting radioisotope was used for the energy source and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a Promethium source are also given here for comparison.

  13. Low-energy electron dose-point kernel simulations using new physics models implemented in Geant4-DNA

    NASA Astrophysics Data System (ADS)

    Bordes, Julien; Incerti, Sébastien; Lampe, Nathanael; Bardiès, Manuel; Bordage, Marie-Claude

    2017-05-01

    When low-energy electrons, such as Auger electrons, interact with liquid water, they induce highly localized ionizing energy depositions over ranges comparable to cell diameters. Monte Carlo track structure (MCTS) codes are suitable tools for performing dosimetry at this level. One of the main MCTS codes, Geant4-DNA, is equipped with only two sets of cross section models for low-energy electron interactions in liquid water ("option 2" and its improved version, "option 4"). To provide Geant4-DNA users with new alternative physics models, a set of cross sections, extracted from the CPA100 MCTS code, has been added to Geant4-DNA. This new version is hereafter referred to as "Geant4-DNA-CPA100". In this study, "Geant4-DNA-CPA100" was used to calculate low-energy electron dose-point kernels (DPKs) between 1 keV and 200 keV. Such kernels represent the radial energy deposited by an isotropic point source, a parameter that is useful for dosimetry calculations in nuclear medicine. In order to assess the influence of different physics models on DPK calculations, DPKs were calculated using the existing Geant4-DNA models ("option 2" and "option 4"), the newly integrated CPA100 models, and the PENELOPE Monte Carlo code used in step-by-step mode for monoenergetic electrons. Additionally, a comparison was performed of two sets of DPKs that were simulated with "Geant4-DNA-CPA100": the first set using Geant4's default settings, and the second using CPA100's original default settings. A maximum difference of 9.4% was found between the Geant4-DNA-CPA100 and PENELOPE DPKs. Between the two existing Geant4-DNA models, slight differences between 1 keV and 10 keV were observed. It was highlighted that the DPKs simulated with the two existing Geant4-DNA models were always broader than those generated with "Geant4-DNA-CPA100". The discrepancies observed between the DPKs generated using the existing Geant4-DNA models and "Geant4-DNA-CPA100" were caused solely by their different cross sections. The different scoring and interpolation methods used in CPA100 and Geant4 to calculate DPKs showed differences close to 3.0% near the source.
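
    For orientation, a dose-point kernel D(r) gives the absorbed dose at distance r from an isotropic point source in an infinite homogeneous medium of density \rho. Under the usual normalization (stated here as an assumption for clarity, not a formula quoted from the paper), the energy deposited per emitted monoenergetic electron of energy E_0 in a spherical shell of radius r and thickness dr is 4\pi r^2 \rho\, D(r)\, dr, so that the kernel integrates to the emitted energy:

      \int_0^\infty 4\pi r^2 \rho\, D(r)\, \mathrm{d}r = E_0 .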

  14. Coding Issues in Grounded Theory

    ERIC Educational Resources Information Center

    Moghaddam, Alireza

    2006-01-01

    This paper discusses grounded theory as one of the qualitative research designs. It describes how grounded theory generates from data. Three phases of grounded theory--open coding, axial coding, and selective coding--are discussed, along with some of the issues which are the source of debate among grounded theorists, especially between its…

  15. Code-division multiple-access multiuser demodulator by using quantum fluctuations.

    PubMed

    Otsubo, Yosuke; Inoue, Jun-Ichi; Nagata, Kenji; Okada, Masato

    2014-07-01

    We examine the average-case performance of a code-division multiple-access (CDMA) multiuser demodulator in which quantum fluctuations are utilized to demodulate the original message within the context of Bayesian inference. The quantum fluctuations are built into the system as a transverse field in the infinite-range Ising spin glass model. We evaluate the performance measurements by using statistical mechanics. We confirm that, on average, the CDMA multiuser demodulator using quantum fluctuations achieves roughly the same performance as the conventional CDMA multiuser demodulator based on thermal fluctuations. We also find that the relationship between the quality of the original information retrieval and the amplitude of the transverse field appears to be a "universal feature" in typical probabilistic information processing, viz., in image restoration, error-correcting codes, and CDMA multiuser demodulation.
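
    For reference, a generic infinite-range Ising model with quantum fluctuations introduced as a transverse field has the schematic form (not necessarily the exact Hamiltonian used in the paper)

      H = -\sum_{i<j} J_{ij}\, \hat{\sigma}_i^z \hat{\sigma}_j^z \;-\; \Gamma \sum_i \hat{\sigma}_i^x ,

    where, in the demodulation setting, the couplings and local fields are constructed from the CDMA spreading sequences and the received signal, and the transverse-field amplitude \Gamma sets the strength of the quantum fluctuations; \Gamma \to 0 recovers a purely classical (thermal) formulation.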

  16. Code-division multiple-access multiuser demodulator by using quantum fluctuations

    NASA Astrophysics Data System (ADS)

    Otsubo, Yosuke; Inoue, Jun-ichi; Nagata, Kenji; Okada, Masato

    2014-07-01

    We examine the average-case performance of a code-division multiple-access (CDMA) multiuser demodulator in which quantum fluctuations are utilized to demodulate the original message within the context of Bayesian inference. The quantum fluctuations are built into the system as a transverse field in the infinite-range Ising spin glass model. We evaluate the performance measurements by using statistical mechanics. We confirm that, on average, the CDMA multiuser demodulator using quantum fluctuations achieves roughly the same performance as the conventional CDMA multiuser demodulator based on thermal fluctuations. We also find that the relationship between the quality of the original information retrieval and the amplitude of the transverse field appears to be a "universal feature" in typical probabilistic information processing, viz., in image restoration, error-correcting codes, and CDMA multiuser demodulation.

  17. Technical Support Document for Version 3.6.1 of the COMcheck Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan

    2009-09-29

    This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards.

  18. Coding For Compression Of Low-Entropy Data

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1994-01-01

    Improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from low-information-content source. Method of coding implemented in relatively simple, high-speed arithmetic and logic circuits. Also increases coding efficiency beyond that of established Huffman coding method in that average number of bits per code symbol can be less than 1, which is the lower bound for Huffman code.
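
    The point that average code length can fall below 1 bit per symbol is easy to see with a toy block code (an illustration of the principle only, not the circuit-oriented method described above): an all-zero block of n bits is sent as a single '0', and any other block as '1' followed by the n raw bits.

      import random

      def encode(bits, n=8):
          # '0' for an all-zero block, '1' + raw bits otherwise (decodable given n)
          out = []
          for i in range(0, len(bits), n):
              block = bits[i:i + n]
              out.append('1' + ''.join(map(str, block)) if any(block) else '0')
          return ''.join(out)

      random.seed(0)
      source = [1 if random.random() < 0.01 else 0 for _ in range(80000)]
      print(f"{len(encode(source)) / len(source):.3f} bits/symbol")   # roughly 0.2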

  19. Study of an External Neutron Source for an Accelerator-Driven System using the PHITS Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugawara, Takanori; Iwasaki, Tomohiko; Chiba, Takashi

    A code system for the Accelerator Driven System (ADS) has been under development for analyzing dynamic behaviors of a subcritical core coupled with an accelerator. This code system named DSE (Dynamics calculation code system for a Subcritical system with an External neutron source) consists of an accelerator part and a reactor part. The accelerator part employs a database, which is calculated by using PHITS, for investigating the effect related to the accelerator such as the changes of beam energy, beam diameter, void generation, and target level. This analysis method using the database may introduce some errors into dynamics calculations since the neutron source data derived from the database has some errors in fitting or interpolating procedures. In this study, the effects of various events are investigated to confirm that the method based on the database is appropriate.

  20. Correlation estimation and performance optimization for distributed image compression

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Cao, Lei; Cheng, Hui

    2006-01-01

    Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
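
    The Gray code mapping itself is standard; a minimal sketch (illustrative, independent of the paper's rate-allocation machinery) shows why it helps: adjacent quantizer indices differ in exactly one bit, so a small mismatch between the source and its side information corrupts fewer bit planes.

      def binary_to_gray(b: int) -> int:
          # reflected binary Gray code: neighbouring integers differ in one bit
          return b ^ (b >> 1)

      def gray_to_binary(g: int) -> int:
          # invert the mapping by cascading the XOR down the bit positions
          b = 0
          while g:
              b ^= g
              g >>= 1
          return b

      for v in range(8):
          print(v, format(binary_to_gray(v), '03b'))   # 000 001 011 010 110 111 101 100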

  1. Examining Lead Exposures in California through State-Issued Health Alerts for Food Contamination and an Exposure-Based Candy Testing Program.

    PubMed

    Handley, Margaret A; Nelson, Kali; Sanford, Eric; Clarity, Cassidy; Emmons-Bell, Sophia; Gorukanti, Anuhandra; Kennelly, Patrick

    2017-10-26

    In California, the annual number of children under 6 y of age with blood lead levels (BLL) ≥10 μg/dL is estimated at over 1,000 cases, and up to 10,000 cases when BLL between 4.5 and 9.5 μg/dL are included. State-issued health alerts for food contamination provide one strategy for tracking sources of food-related lead exposures. In addition, California passed legislation in 2006 directing the Food and Drug Branch (FDB) of the state health department to test and identify lead in candy. This report presents health alert data from California over a 14-y period, compares data before and after the candy testing program began, and examines country of origin, ZIP code data, and time from candy testing to release of health alerts for lead-contaminated candies for 2011-2012. After 2007, health alerts issued for lead in candy and food increased significantly. Analysis of candy-testing data indicated that multiple counties and ZIP codes were affected. Seventeen candies with high lead concentrations were identified, resulting in rapid dissemination (<2 wk) of health alerts to local health departments and community clinicians and to the public. Surveillance of lead exposures from state-based food and candy testing programs provides an opportunity to identify and immediately act to remove nonpaint sources of lead affecting children. https://doi.org/10.1289/EHP2582.

  2. Code for Calculating Regional Seismic Travel Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BALLARD, SANFORD; HIPP, JAMES; & BARKER, GLENN

    The RSTT software computes predictions of the travel time of seismic energy traveling from a source to a receiver through 2.5D models of the seismic velocity distribution within the Earth. The two primary applications for the RSTT library are tomographic inversion studies and seismic event location calculations. In tomographic inversion studies, a seismologist begins with a number of source-receiver travel time observations and an initial starting model of the velocity distribution within the Earth. A forward travel time calculator, such as the RSTT library, is used to compute predictions of each observed travel time, and all of the residuals (observed minus predicted travel time) are calculated. The Earth model is then modified in some systematic way with the goal of minimizing the residuals. The Earth model obtained in this way is assumed to be a better model than the starting model if it has lower residuals. The other major application for the RSTT library is seismic event location. Given an Earth model, an initial estimate of the location of a seismic event, and some number of observations of seismic travel time thought to have originated from that event, location codes systematically modify the estimate of the location of the event with the goal of minimizing the difference between the observed and predicted travel times. The second application, seismic event location, is routinely implemented by the military as part of its effort to monitor the Earth for nuclear tests conducted by foreign countries.
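
    A schematic of the residual-driven loop common to both applications is sketched below; the function names are hypothetical stand-ins, not the RSTT library API.

      import numpy as np

      def predict_travel_time(model, source, receiver):
          # stand-in forward calculator: straight-ray distance over a constant velocity
          return np.linalg.norm(np.asarray(source) - np.asarray(receiver)) / model["v"]

      def locate_event(model, receivers, observed_times, candidate_sources):
          """Grid-search location: keep the candidate whose predicted travel times
          give the smallest sum of squared residuals (observed minus predicted)."""
          best, best_misfit = None, np.inf
          for src in candidate_sources:
              predicted = np.array([predict_travel_time(model, src, r) for r in receivers])
              misfit = float(np.sum((np.asarray(observed_times) - predicted) ** 2))
              if misfit < best_misfit:
                  best, best_misfit = src, misfit
          return best, best_misfit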

  3. Guidance and Control Software Project Data - Volume 2: Development Documents

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J. (Editor)

    2008-01-01

    The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes the development documents from the GCS project. Volume 2 contains three appendices: A. Guidance and Control Software Development Specification; B. Design Description for the Pluto Implementation of the Guidance and Control Software; and C. Source Code for the Pluto Implementation of the Guidance and Control Software

  4. Development and Use of Health-Related Technologies in Indigenous Communities: Critical Review

    PubMed Central

    Jacklin, Kristen; O'Connell, Megan E

    2017-01-01

    Background Older Indigenous adults encounter multiple challenges as their age intersects with health inequities. Research suggests that a majority of older Indigenous adults prefer to age in place, and they will need culturally safe assistive technologies to do so. Objective The aim of this critical review was to examine literature concerning use, adaptation, and development of assistive technologies for health purposes by Indigenous peoples. Methods Working within Indigenous research methodologies and from a decolonizing approach, searches of peer-reviewed academic and gray literature dated to February 2016 were conducted using keywords related to assistive technology and Indigenous peoples. Sources were reviewed and coded thematically. Results Of the 34 sources captured, only 2 concerned technology specifically for older Indigenous adults. Studies detailing technology with Indigenous populations of all ages originated primarily from Canada (n=12), Australia (n=10), and the United States (n=9) and were coded to four themes: meaningful user involvement and community-based processes in development, the digital divide, Indigenous innovation in technology, and health technology needs as holistic and interdependent. Conclusions A key finding is the necessity of meaningful user involvement in technology development, especially in communities struggling with the digital divide. In spite of, or perhaps because of this divide, Indigenous communities are enthusiastically adapting mobile technologies to suit their needs in creative, culturally specific ways. This enthusiasm and creativity, coupled with the extensive experience many Indigenous communities have with telehealth technologies, presents opportunity for meaningful, culturally safe development processes. PMID:28729237

  5. SKYSHINEIII. Calculating Effects of Structure Design on Neutron Dose Rates in Air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lampley, C.M.; Andrews, C.M.; Wells, M.B.

    1988-12-01

    SKYSHINE was designed to aid in the evaluation of the effects of structure geometry on the gamma-ray dose rate at given detector positions outside of a building housing gamma-ray sources. The program considers a rectangular structure enclosed by four walls and a roof. Each of the walls and the roof of the building may be subdivided into up to nine different areas, representing different materials or different thicknesses of the same material for those portions of the wall or roof. Basic sets of iron and concrete slab transmission and reflection data for 6.2 MeV gamma-rays are part of the SKYSHINE block data. These data, as well as parametric air transport data for line-beam sources at a number of energies between 0.6 MeV and 6.2 MeV and ranges to 3750 ft, are used to estimate the various components of the gamma-ray dose rate at positions outside of the building. The gamma-ray source is assumed to be a 6.2 MeV point-isotropic source. SKYSHINE-III provides an increase in versatility over the original SKYSHINE code in that it addresses both neutron and gamma-ray point sources. In addition, the emitted radiation may be characterized by an energy emission spectrum defined by the user. A new SKYSHINE data base is also included.

  6. Spatially Resolved Observations of the Galactic Center Source IRS 21

    NASA Astrophysics Data System (ADS)

    Tanner, A.; Ghez, A. M.; Morris, M.; Becklin, E. E.; Cotera, A.; Ressler, M.; Werner, M.; Wizinowich, P.

    2002-08-01

    We present diffraction-limited 2-25 μm images obtained with the W. M. Keck 10 m telescopes that spatially resolve the cool source IRS 21, one of a small group of enigmatic objects in the central parsec of our Galaxy that have eluded classification. Modeled as a Gaussian, the azimuthally averaged intensity profile of IRS 21 has a half-width at half-maximum (HWHM) size of 650+/-80 AU at 2.2 μm and an average HWHM size of 1600+/-200 AU at mid-infrared wavelengths. These large apparent sizes imply an extended distribution of dust. The mid-infrared color map indicates that IRS 21 is a self-luminous source rather than an externally heated dust clump as originally suggested. The spectral energy distribution has distinct near- and mid-infrared components. A simple radiative transfer code, which simultaneously fits the near- and mid-infrared photometry and intensity profiles, supports a model in which the near-infrared radiation is scattered and extincted light from an embedded central source, while the mid-infrared emission is from thermally reradiating silicate dust. We argue that IRS 21 (and by analogy, the other luminous sources along the Northern Arm) is a massive star experiencing rapid mass loss and plowing through the Northern Arm, thereby generating a bow shock, which is spatially resolved in our observations.

  7. 7 CFR 4274.337 - Other regulatory requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ....337 Agriculture Regulations of the Department of Agriculture (Continued) RURAL BUSINESS-COOPERATIVE... recipient on the basis of sex, marital status, race, color, religion, national origin, age, physical or... of one of the following model building codes or the latest edition of that code providing an...

  8. COMPUTATION OF GLOBAL PHOTOCHEMISTRY WITH SMVGEAR II (R823186)

    EPA Science Inventory

    A computer model was developed to simulate global gas-phase photochemistry. The model solves chemical equations with SMVGEAR II, a sparse-matrix, vectorized Gear-type code. To obtain SMVGEAR II, the original SMVGEAR code was modified to allow computation of different sets of chem...

  9. FEDEF: A High Level Architecture Federate Development Framework

    DTIC Science & Technology

    2010-09-01

    require code changes for operability between HLA specifications. Configuration of federate requirements such as publications, subscriptions, time ... management, and management protocol should occur outside of federate source code, allowing for federate reusability without code modification and re

  10. Quaternionic representation of the genetic code.

    PubMed

    Carlevaro, C Manuel; Irastorza, Ramiro M; Vericat, Fernando

    2016-03-01

    A heuristic diagram of the evolution of the standard genetic code is presented. It incorporates, in a way that resembles the energy levels of an atom, the physical notion of broken symmetry, and it is consistent with original ideas by Crick on the origin and evolution of the code as well as with the chronological order of appearance of the amino acids along evolution, as inferred from work that mixes known experimental results with theoretical speculations. Suggested by the diagram, we propose a Hamilton-quaternion-based mathematical representation of the code as it stands nowadays. The central object in the description is a codon function that assigns to each amino acid an integer quaternion in such a way that the observed code degeneracy is preserved. We emphasize the advantages of a quaternionic representation of amino acids, taking as an example the folding of proteins. With this aim, we propose an algorithm to go from the quaternion sequence to the protein's three-dimensional structure, which can be compared with the corresponding experimental structure stored in the Protein Data Bank. In our view, the mathematical representation of the genetic code in terms of quaternions merits attention because it not only describes most of the known properties of the genetic code but also opens new perspectives that are mainly derived from the close relationship between quaternions and rotations. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
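
    As background, the algebraic ingredient such a representation leans on is the (non-commutative) Hamilton product; the specific integer quaternion assigned to each amino acid is defined in the paper and is not reproduced here.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Quaternion:
          w: float
          x: float
          y: float
          z: float

          def __mul__(self, q):
              # Hamilton product; unit quaternions of this form encode 3-D rotations,
              # which is what makes them convenient for backbone geometry.
              return Quaternion(
                  self.w*q.w - self.x*q.x - self.y*q.y - self.z*q.z,
                  self.w*q.x + self.x*q.w + self.y*q.z - self.z*q.y,
                  self.w*q.y - self.x*q.z + self.y*q.w + self.z*q.x,
                  self.w*q.z + self.x*q.y - self.y*q.x + self.z*q.w,
              )

      i, j = Quaternion(0, 1, 0, 0), Quaternion(0, 0, 1, 0)
      print(i * j)   # Quaternion(w=0, x=0, y=0, z=1), i.e. i*j = k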

  11. Simulation study on ion extraction from electron cyclotron resonance ion sources

    NASA Astrophysics Data System (ADS)

    Fu, S.; Kitagawa, A.; Yamada, S.

    1994-04-01

    In order to study the beam optics of the NIRS-ECR ion source used in the HIMAC project, the EGUN code has been modified to make it capable of modeling ion extraction from a plasma. Two versions of the modified code are worked out with two different methods, in which 1D and 2D sheath theories are used, respectively. The convergence problem of the strongly nonlinear self-consistent equations is investigated. Simulations of the NIRS-ECR and HYPER-ECR ion sources are presented in this paper, showing agreement with the experimental results.

  12. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    ERIC Educational Resources Information Center

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is increasingly important in program design courses in college education. However, the trick of plagiarizing another student's homework and making small modifications is common. It is not easy for teachers to judge whether source code has been plagiarized or not. Traditional detection algorithms cannot fit this…
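
    For context, a common baseline that traditional detectors build on is token n-gram overlap between two submissions; the sketch below is a generic illustration of that idea, not the algorithm proposed in this paper. Note how renamed identifiers break most n-grams and lower the naive score, which is exactly the weakness such research tries to address.

      def ngrams(tokens, n=4):
          return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

      def similarity(code_a: str, code_b: str, n: int = 4) -> float:
          """Jaccard overlap of whitespace-token n-grams; values near 1.0 suggest heavy copying."""
          a, b = ngrams(code_a.split(), n), ngrams(code_b.split(), n)
          return len(a & b) / len(a | b) if a | b else 0.0

      print(similarity("int s = 0 ; for ( i = 0 ; i < n ; i ++ ) s += a [ i ] ;",
                       "int t = 0 ; for ( j = 0 ; j < n ; j ++ ) t += a [ j ] ;"))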

  13. Automated Source-Code-Based Testing of Object-Oriented Software

    NASA Astrophysics Data System (ADS)

    Gerlich, Ralf; Gerlich, Rainer; Dietrich, Carsten

    2014-08-01

    With the advent of languages such as C++ and Java in mission- and safety-critical space on-board software, new challenges for testing and specifically automated testing arise. In this paper we discuss some of these challenges, consequences and solutions based on an experiment in automated source- code-based testing for C++.

  14. Particle model of a cylindrical inductively coupled ion source

    NASA Astrophysics Data System (ADS)

    Ippolito, N. D.; Taccogna, F.; Minelli, P.; Cavenago, M.; Veltri, P.

    2017-08-01

    In spite of the wide use of RF sources, a complete understanding of the mechanisms regulating the RF-coupling of the plasma is still lacking so self-consistent simulations of the involved physics are highly desirable. For this reason we are developing a 2.5D fully kinetic Particle-In-Cell Monte-Carlo-Collision (PIC-MCC) model of a cylindrical ICP-RF source, keeping the time step of the simulation small enough to resolve the plasma frequency scale. The grid cell dimension is now about seven times larger than the average Debye length, because of the large computational demand of the code. It will be scaled down in the next phase of the development of the code. The filling gas is Xenon, in order to minimize the time lost by the MCC collision module in the first stage of development of the code. The results presented here are preliminary, with the code already showing a good robustness. The final goal will be the modeling of the NIO1 (Negative Ion Optimization phase 1) source, operating in Padua at Consorzio RFX.

  15. An Object-oriented Computer Code for Aircraft Engine Weight Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Naylor, Bret A.

    2008-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides component flow data such as airflows, temperatures, and pressures that are required for sizing the components and for the weight calculations. The tighter integration between NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case. Keywords: NASA, aircraft engine, weight, object-oriented

  16. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
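
    A minimal sketch of block-adaptive code selection in this spirit is given below (an illustration under simplified assumptions using Golomb-Rice options, not the flight algorithm): each block of 21 non-negative residuals is encoded with whichever of three options is shortest, prefixed by a 2-bit option identifier.

      def rice_encode(value: int, k: int) -> str:
          # unary quotient, then k raw remainder bits (k = 0 gives a pure comma code)
          q, r = value >> k, value & ((1 << k) - 1)
          return '1' * q + '0' + (format(r, f'0{k}b') if k else '')

      def encode_block(block, options=(0, 1, 2)):
          candidates = [format(i, '02b') + ''.join(rice_encode(v, k) for v in block)
                        for i, k in enumerate(options)]
          return min(candidates, key=len)     # pick the cheapest option for this block

      def encode(samples, block_size=21):
          return ''.join(encode_block(samples[i:i + block_size])
                         for i in range(0, len(samples), block_size))

      print(len(encode([0, 1, 0, 2, 0, 0, 1] * 3)), "bits for one 21-sample block")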

  17. A comparison of skyshine computational methods.

    PubMed

    Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J

    2005-01-01

    A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.

  18. Guide to AERO2S and WINGDES Computer Codes for Prediction and Minimization of Drag Due to Lift

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Chu, Julio; Ozoroski, Lori P.; McCullers, L. Arnold

    1997-01-01

    The computer codes, AERO2S and WINGDES, are now widely used for the analysis and design of airplane lifting surfaces under conditions that tend to induce flow separation. These codes have undergone continued development to provide additional capabilities since the introduction of the original versions over a decade ago. This code development has been reported in a variety of publications (NASA technical papers, NASA contractor reports, and society journals). Some modifications have not been publicized at all. Users of these codes have suggested the desirability of combining in a single document the descriptions of the code development, an outline of the features of each code, and suggestions for effective code usage. This report is intended to supply that need.

  19. Fast ITTBC using pattern code on subband segmentation

    NASA Astrophysics Data System (ADS)

    Koh, Sung S.; Kim, Hanchil; Lee, Kooyoung; Kim, Hongbin; Jeong, Hun; Cho, Gangseok; Kim, Chunghwa

    2000-06-01

    Iterated Transformation Theory-Based Coding suffers from very high computational complexity in the encoding phase, owing to its exhaustive search. In this paper, our proposed image coding algorithm preprocesses the original image into a subband segmentation image by wavelet transform before coding, to reduce encoding complexity. A similar block is searched for in the domain pool of the subband segmentation using 24 block pattern codes derived from the edge information in the image block. As a result, numerical data show that the encoding time of the proposed coding method can be reduced to 98.82% of that of Jacquin's method, while the loss in quality relative to Jacquin's is about 0.28 dB in PSNR, which is visually negligible.

  20. Multidimensional modulation for next-generation transmission systems

    NASA Astrophysics Data System (ADS)

    Millar, David S.; Koike-Akino, Toshiaki; Kojima, Keisuke; Parsons, Kieran

    2017-01-01

    Recent research in multidimensional modulation has shown great promise in long reach applications. In this work, we will investigate the origins of this gain, the different approaches to multidimensional constellation design, and different performance metrics for coded modulation. We will also discuss the reason that such coded modulation schemes seem to have limited application at shorter distances, and the potential for other coded modulation schemes in future transmission systems.

  1. The origins and evolutionary history of human non-coding RNA regulatory networks.

    PubMed

    Sherafatian, Masih; Mowla, Seyed Javad

    2017-04-01

    The evolutionary history and origin of the regulatory function of animal non-coding RNAs are not well understood. The lack of conservation of long non-coding RNAs and the small size of microRNAs have been major obstacles to their phylogenetic analysis. In this study, we tried to shed more light on the evolution of ncRNA regulatory networks by changing our phylogenetic strategy to focus on the evolutionary pattern of their protein-coding targets. We used available target databases of miRNAs and lncRNAs to find their protein-coding targets in humans. We were able to recognize evolutionary hallmarks of ncRNA targets by phylostratigraphic analysis. We found the conventional 3'-UTR and lesser-known 5'-UTR targets of miRNAs to be enriched at three consecutive phylostrata. First, in the eukaryota phylostratum, corresponding to the emergence of miRNAs, our study revealed that miRNA targets function primarily in cell cycle processes. Moreover, the same overrepresentation of the targets observed in the next two consecutive phylostrata, opisthokonta and eumetazoa, corresponded to the expansion periods of miRNAs in animal evolution. Coding sequence targets of miRNAs showed a delayed rise at the opisthokonta phylostratum, compared with the 3'- and 5'-UTR targets of miRNAs. The lncRNA regulatory network was the latest to evolve, at eumetazoa.

  2. Physical and numerical sources of computational inefficiency in integration of chemical kinetic rate equations: Etiology, treatment and prognosis

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.; Radhakrishnan, K.

    1986-01-01

    The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
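
    A one-equation illustration of this point (constructed here for clarity, not taken from the report): for the locally linearized rate equation \dot{y} = c - \lambda y, with \lambda and c frozen over a step h, the exponentially fitted update

      y_{n+1} = \frac{c}{\lambda} + \left(y_n - \frac{c}{\lambda}\right) e^{-\lambda h}

    reproduces the exact local solution for any step size, whereas a polynomial-interpolant update such as forward Euler, y_{n+1} = y_n + h\,(c - \lambda y_n), is only first-order accurate and is stable only for h < 2/\lambda, which is precisely the stiffness limitation discussed above.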

  3. The Astrophysics Source Code Library: Supporting software publication and citation

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Teuben, Peter

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net), established in 1999, is a free online registry for source codes used in research that has appeared in, or been submitted to, peer-reviewed publications. The ASCL is indexed by the SAO/NASA Astrophysics Data System (ADS) and Web of Science and is citable by using the unique ascl ID assigned to each code. In addition to registering codes, the ASCL can house archive files for download and assign them DOIs. The ASCL advocates for software citation on par with article citation, participates in multidisciplinary events such as Force11, OpenCon, and the annual Workshop on Sustainable Software for Science, works with journal publishers, and organizes Special Sessions and Birds of a Feather meetings at national and international conferences such as Astronomical Data Analysis Software and Systems (ADASS), European Week of Astronomy and Space Science, and AAS meetings. In this presentation, I will discuss some of the challenges of gathering credit for publishing software and ideas and efforts from other disciplines that may be useful to astronomy.

  4. Self-consistent modeling of electron cyclotron resonance ion sources

    NASA Astrophysics Data System (ADS)

    Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lécot, C.

    2004-05-01

    In order to predict the performance of electron cyclotron resonance ion sources (ECRIS), it is necessary to accurately model the different parts of these sources: (i) magnetic configuration; (ii) plasma characteristics; (iii) extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, either in two dimensions or even in three dimensions (to take into account the shape of the plasma at extraction, influenced by the hexapole). However, the characteristics of the plasma are not always mastered. This article describes the self-consistent modeling of ECRIS: we have developed a code which takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, and whether a biased probe is installed or not. These input parameters are used to feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally.

  5. Some practical universal noiseless coding techniques, part 3, module PSI14,K+

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.

    1991-01-01

    The algorithmic definitions, performance characterizations, and application notes for a high-performance adaptive noiseless coding module are provided. Subsets of these algorithms are currently under development in custom very large scale integration (VLSI) at three NASA centers. The generality of coding algorithms recently reported is extended. The module incorporates a powerful adaptive noiseless coder for Standard Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers, where smaller integers are more likely than larger ones). Coders can be specified to provide performance close to the data entropy over any desired dynamic range (of entropy) above 0.75 bit/sample. This is accomplished by adaptively choosing the best of many efficient variable-length coding options to use on each short block of data (e.g., 16 samples). All code options used for entropies above 1.5 bits/sample are 'Huffman Equivalent', but they require no table lookups to implement. The coding can be performed directly on data that have been preprocessed to exhibit the characteristics of a standard source. Alternatively, a built-in predictive preprocessor can be used where applicable. This built-in preprocessor includes the familiar 1-D predictor followed by a function that maps the prediction error sequences into the desired standard form. Additionally, an external prediction can be substituted if desired. A broad range of issues dealing with the interface between the coding module and the data systems it might serve is further addressed. These issues include: multidimensional prediction, archival access, sensor noise, rate control, code rate improvements outside the module, and the optimality of certain internal code options.
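
    The "map the prediction errors into the desired standard form" step can be illustrated with the widely used zig-zag interleaving shown below (the module's exact mapping is specified in the reference; this sketch is for orientation only): small signed errors become small non-negative integers, matching the standard-source assumption that smaller integers are more likely.

      def fold(e: int) -> int:
          # 0, -1, 1, -2, 2, ...  ->  0, 1, 2, 3, 4, ...
          return 2 * e if e >= 0 else -2 * e - 1

      def unfold(n: int) -> int:
          return n // 2 if n % 2 == 0 else -(n + 1) // 2

      print([fold(e) for e in (0, -1, 1, -2, 2)])   # [0, 1, 2, 3, 4]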

  6. Entangled cloning of stabilizer codes and free fermions

    NASA Astrophysics Data System (ADS)

    Hsieh, Timothy H.

    2016-10-01

    Though the no-cloning theorem [Wootters and Zurek, Nature (London) 299, 802 (1982), 10.1038/299802a0] prohibits exact replication of arbitrary quantum states, there are many instances in quantum information processing and entanglement measurement in which a weaker form of cloning may be useful. Here, I provide a construction for generating an "entangled clone" for a particular but rather expansive and rich class of states. Given a stabilizer code or free fermion Hamiltonian, this construction generates an exact entangled clone of the original ground state, in the sense that the entanglement between the original and the exact copy can be tuned to be arbitrarily small but finite, or large, and the relation between the original and the copy can also be modified to some extent. For example, this Rapid Communication focuses on generating time-reversed copies of stabilizer codes and particle-hole transformed ground states of free fermion systems, although untransformed clones can also be generated. The protocol leverages entanglement to simulate a transformed copy of the Hamiltonian without having to physically implement it and can potentially be realized in superconducting qubits or ultracold atomic systems.

  7. From chemical metabolism to life: the origin of the genetic coding process

    PubMed Central

    2017-01-01

    Looking for origins is so deeply rooted in ideology that most studies reflect opinions that fail to explore the first realistic scenarios. To be sure, trying to understand the origins of life should be based on what we know of current chemistry in the solar system and beyond. There, amino acids and very small compounds such as carbon dioxide, dihydrogen, or dinitrogen, and their immediate derivatives, are ubiquitous. Surface-based chemical metabolism using these basic chemicals is the most likely beginning, in which amino acids, coenzymes, and phosphate-based small carbon molecules were built up. Nucleotides, and of course RNAs, must have come into being much later. As a consequence, the key question in accounting for life is to understand how a chemical metabolism that began with amino acids was progressively shaped into a coding process involving RNAs. Here I explore the role of building up complementarity rules as the first information-based process that allowed the genetic code to emerge, after RNAs took over from surfaces in carrying the basic metabolic pathways that drive the pursuit of life. PMID:28684991

  8. Patent and product piracy

    NASA Astrophysics Data System (ADS)

    Ignat, V.

    2016-08-01

    Advanced industrial countries are affected by technology theft. German industry loses more than 50 billion euros annually. The main causes are industrial espionage and the fraudulent copying of patents and industrial products. Many Asian countries profit from this, saving up to 65% of production costs. Most affected are small and medium-sized enterprises, which do not have sufficient economic power to assert themselves against some powerful countries. International organizations such as Interpol and the World Customs Organization (WCO) work together to combat international economic crime. Protection can be achieved by registering patents or by technical methods for recognizing product originality. More suitable protections have been developed, such as holograms, magnetic stripes, barcodes, CE marking, digital watermarks, DNA or nano-technologies, security labels, radio frequency identification, micro color codes, matrix codes, and cryptographic encodings. The automotive industry has developed the method "Manufacturers against Product Piracy": a sticker on the package identifies original products using a verifiable Data Matrix barcode. The code can be scanned with a smartphone camera, and the smartphone connects via the Internet to a database where the identification numbers of the original parts are stored.
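
    A hedged sketch of the verification flow described above; the identifiers, the in-memory registry, and the function name are hypothetical stand-ins for the manufacturer's actual database service.

      # Hypothetical originality check: a scanned Data Matrix payload is compared
      # against a registry of identification numbers for genuine parts.
      REGISTERED_IDS = {"DE-4711-0001", "DE-4711-0002"}   # stand-in for the manufacturer's database

      def verify_part(scanned_payload: str) -> bool:
          """Return True if the identification number decoded from the sticker
          is present in the registry of original parts."""
          part_id = scanned_payload.strip()
          return part_id in REGISTERED_IDS

      print(verify_part("DE-4711-0001"))   # True  -> original part
      print(verify_part("DE-9999-1234"))   # False -> suspect copy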

  9. 39 CFR 121.3 - Standard Mail.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Facility (SCF) turnaround Standard Mail® pieces accepted at origin before the day-zero Critical Entry Time... origin before the day-zero Critical Entry Time is 4 days when the OPD&C/F and the ADC are the same... before the day-zero Critical Entry Time is 5 days for each remaining 3-digit ZIP Code origin-destination...

  10. The multidimensional Self-Adaptive Grid code, SAGE, version 2

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1995-01-01

    This new report on Version 2 of the SAGE code includes all the information in the original publication plus all upgrades and changes to the SAGE code since that time. The two most significant upgrades are the inclusion of a finite-volume option and the ability to adapt and manipulate zonal-matching multiple-grid files. In addition, the original SAGE code has been upgraded to Version 1.1 and includes all options mentioned in this report, with the exception of the multiple grid option and its associated features. Since Version 2 is a larger and more complex code, it is suggested (but not required) that Version 1.1 be used for single-grid applications. This document contains all the information required to run both versions of SAGE. The formulation of the adaption method is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code. The third section provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simple but extensive input options make this a flexible and user-friendly code. The SAGE code can accommodate two-dimensional and three-dimensional, finite-difference and finite-volume, single grid, and zonal-matching multiple grid flow problems.

  11. Trident: A Universal Tool for Generating Synthetic Absorption Spectra from Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Hummels, Cameron B.; Smith, Britton D.; Silvia, Devin W.

    2017-09-01

    Hydrodynamical simulations are increasingly able to accurately model physical systems on stellar, galactic, and cosmological scales; however, the utility of these simulations is often limited by our ability to directly compare them with the data sets produced by observers: spectra, photometry, etc. To address this problem, we have created trident, a Python-based open-source tool for post-processing hydrodynamical simulations to produce synthetic absorption spectra and related data. trident can (i) create absorption-line spectra for any trajectory through a simulated data set, mimicking both background quasar and down-the-barrel configurations; (ii) reproduce the spectral characteristics of common instruments like the Cosmic Origins Spectrograph; (iii) operate across the ultraviolet, optical, and infrared using customizable absorption-line lists; (iv) trace simulated physical structures directly to spectral features; (v) approximate the presence of ion species absent from the simulation outputs; (vi) generate column density maps for any ion; and (vii) provide support for all major astrophysical hydrodynamical codes. trident was originally developed to aid in the interpretation of observations of the circumgalactic medium and intergalactic medium, but it remains a general tool applicable in other contexts.
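
    A hedged sketch of what a typical trident workflow might look like, based on its documented ray-plus-spectrum interface; the dataset path is a placeholder, and exact argument names may differ between versions of the tool.

      import yt
      import trident

      # Load a simulation output (placeholder path) and cast a sight line through it.
      ds = yt.load("enzo_output/RD0009/RD0009")
      ray = trident.make_simple_ray(ds,
                                    start_position=ds.domain_left_edge,
                                    end_position=ds.domain_right_edge,
                                    data_filename="ray.h5",
                                    lines=["H", "C", "O"])   # ions to deposit along the ray

      # Mimic the Cosmic Origins Spectrograph and synthesize an absorption spectrum.
      sg = trident.SpectrumGenerator("COS-G130M")
      sg.make_spectrum(ray, lines=["H I 1216", "O VI 1032"])
      sg.save_spectrum("spectrum.txt")
      sg.plot_spectrum("spectrum.png")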

  12. Adjoint acceleration of Monte Carlo simulations using TORT/MCNP coupling approach: a case study on the shielding improvement for the cyclotron room of the Buddhist Tzu Chi General Hospital.

    PubMed

    Sheu, R J; Sheu, R D; Jiang, S H; Kao, C H

    2005-01-01

    Full-scale Monte Carlo simulations of the cyclotron room of the Buddhist Tzu Chi General Hospital were carried out to improve the original, inadequate maze design. Variance reduction techniques were indispensable in this study to make it practical to simulate a variety of shielding modifications. The TORT/MCNP manual coupling approach, based on the Consistent Adjoint Driven Importance Sampling (CADIS) methodology, was used throughout this study. CADIS applies source and transport biasing in a consistent manner. With this method, the computational efficiency was increased by more than two orders of magnitude, and the statistical convergence was also improved compared with the unbiased Monte Carlo run. This paper describes the shielding problem encountered, the procedure for coupling the TORT and MCNP codes to accelerate the calculations, and the calculation results for the original and improved shielding designs. To verify the calculation results and to seek additional acceleration, sensitivity studies on the space-dependent and energy-dependent parameters were also conducted.
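
    The source-biasing half of CADIS can be illustrated with a minimal sketch: the biased source density is proportional to the product of the true source and the adjoint (importance) function, and particle starting weights are set so that weight times biased probability reproduces the analog source. The arrays below are purely illustrative.

      import numpy as np

      def cadis_source_bias(source_pdf, adjoint_flux):
          """Given a discretized analog source PDF q and an adjoint (importance) flux phi+,
          return the biased source PDF q_hat = q*phi+/R and starting weights R/phi+,
          where R = sum(q*phi+) estimates the detector response."""
          q = np.asarray(source_pdf, dtype=float)
          phi = np.asarray(adjoint_flux, dtype=float)
          R = np.sum(q * phi)
          q_hat = q * phi / R            # sample source particles from this distribution
          weights = R / phi              # starting weight where a particle is born
          return q_hat, weights          # note: q_hat * weights == q (unbiased)

      # Toy example: a uniform source, with importance rising toward the detector side.
      q_hat, w = cadis_source_bias(source_pdf=[0.25, 0.25, 0.25, 0.25],
                                   adjoint_flux=[0.1, 0.5, 1.0, 2.0])
      print(q_hat, w)   # more particles born in important cells, each carrying less weight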

  13. Mosaic Origins of a Complex Chimeric Mitochondrial Gene in Silene vulgaris

    PubMed Central

    Storchova, Helena; Müller, Karel; Lau, Steffen; Olson, Matthew S.

    2012-01-01

    Chimeric genes are significant sources of evolutionary innovation that are normally created when portions of two or more protein-coding regions fuse to form a new open reading frame. In plant mitochondria, astonishingly high numbers of different novel chimeric genes have been reported, generated through processes of rearrangement and recombination. Nonetheless, because most studies do not find or report nucleotide variation within the same chimeric gene, evolution after the origination of these chimeric genes remains unstudied. Here we identify two alleles of a complex chimera in Silene vulgaris that are divergent in nucleotide sequence, in genomic position relative to other mitochondrial genes, and in expression patterns. Structural patterns suggest a history partially influenced by gene conversion between the chimeric gene and functional copies of subunit 1 of the mitochondrial ATP synthase gene (atp1). We identified small repeat structures within the chimeras that are likely recombination sites allowing generation of the chimera. These results establish the potential for chimeric gene divergence in different plant mitochondrial lineages within the same species. This result contrasts with the absence of diversity within mitochondrial chimeras found in crop species. PMID:22383961

  14. Prioritized LT Codes

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Cheng, Michael K.

    2011-01-01

    The original Luby Transform (LT) coding scheme is extended to account for data transmissions where some information symbols in a message block are more important than others. Prioritized LT codes provide unequal error protection (UEP) of data on an erasure channel by modifying the original LT encoder. The prioritized algorithm improves high-priority data protection without penalizing low-priority data recovery. Moreover, low-latency decoding is also obtained for high-priority data due to fast encoding. Prioritized LT codes only require a slight change in the original encoding algorithm, and no changes at all at the decoder. Hence, with a small complexity increase in the LT encoder, improved UEP and low decoding latency for high-priority data can be achieved. LT encoding partitions a data stream into fixed-size message blocks, each with a constant number of information symbols. To generate a code symbol from the information symbols in a message, the Robust Soliton probability distribution is first applied in order to determine the number of information symbols to be used to compute the code symbol. Then, the specific information symbols are chosen uniformly at random from the message block. Finally, the selected information symbols are XORed to form the code symbol. The Prioritized LT code construction includes an additional restriction that code symbols formed by a relatively small number of XORed information symbols select some of these information symbols from the pool of high-priority data. Once high-priority data are fully covered, encoding continues with the conventional LT approach, where code symbols are generated by selecting information symbols from the entire message block including all different priorities. Therefore, if code symbols derived from high-priority data experience an unusually high number of erasures, Prioritized LT codes can still reliably recover both high- and low-priority data. This hybrid approach decides not only "how to encode" but also "what to encode" to achieve UEP. Another advantage of the priority encoding process is that the majority of high-priority data can be decoded sooner, since only a small number of code symbols are required to reconstruct high-priority data. This approach increases the likelihood that high-priority data is decoded before low-priority data. The Prioritized LT code scheme achieves an improvement in high-priority data decoding performance as well as overall information recovery without penalizing the decoding of low-priority data, assuming high-priority data constitutes no more than half of a message block. The cost is the additional complexity required in the encoder. If extra computational resources are available at the transmitter, image, voice, and video transmission quality in terrestrial and space communications can benefit from accurate use of redundancy in protecting data with varying priorities.
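
    A simplified sketch of the prioritized encoding step, under stated assumptions: the degree distribution below is a crude stand-in for the Robust Soliton distribution and the coverage bookkeeping is illustrative, but it shows the key restriction that low-degree code symbols draw from the high-priority pool until that pool is covered.

      import random

      def lt_encode_prioritized(message, high_priority, n_code_symbols, low_degree=3, seed=0):
          """Generate LT code symbols (XOR of chosen information symbols).
          message:        list of integer information symbols (one message block)
          high_priority:  set of indices considered high priority
          Until every high-priority index has been used at least once, code symbols
          with small degree are forced to pick their neighbors from the high-priority pool."""
          rng = random.Random(seed)
          uncovered = set(high_priority)
          code_symbols = []
          for _ in range(n_code_symbols):
              degree = rng.choice([1, 1, 2, 2, 2, 3, 3, 4, 8])   # stand-in for Robust Soliton sampling
              if degree <= low_degree and uncovered:
                  pool = list(high_priority)                     # prioritized restriction
              else:
                  pool = list(range(len(message)))               # conventional LT: whole block
              chosen = rng.sample(pool, min(degree, len(pool)))
              uncovered -= set(chosen)
              value = 0
              for idx in chosen:
                  value ^= message[idx]                          # XOR the selected symbols
              code_symbols.append((chosen, value))
          return code_symbols

      msg = list(range(16))                                      # one block of 16 information symbols
      print(lt_encode_prioritized(msg, high_priority={0, 1, 2, 3}, n_code_symbols=8))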

  15. From Pinholes to Black Holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fenimore, Edward E.

    2014-10-06

    Pinhole photography has made major contributions to astrophysics through the use of “coded apertures”. Coded apertures were instrumental in locating gamma-ray bursts and proving that they originate in faraway galaxies, some from the birth of black holes from the first stars that formed just after the big bang.

  16. Author Correction: Recognition of RNA N6-methyladenosine by IGF2BP proteins enhances mRNA stability and translation.

    PubMed

    Huang, Huilin; Weng, Hengyou; Sun, Wenju; Qin, Xi; Shi, Hailing; Wu, Huizhe; Zhao, Boxuan Simen; Mesquita, Ana; Liu, Chang; Yuan, Celvie L; Hu, Yueh-Chiang; Hüttelmaier, Stefan; Skibbe, Jennifer R; Su, Rui; Deng, Xiaolan; Dong, Lei; Sun, Miao; Li, Chenying; Nachtergaele, Sigrid; Wang, Yungui; Hu, Chao; Ferchen, Kyle; Greis, Kenneth D; Jiang, Xi; Wei, Minjie; Qu, Lianghu; Guan, Jun-Lin; He, Chuan; Yang, Jianhua; Chen, Jianjun

    2018-06-07

    In the version of this Article originally published, the authors incorrectly listed an accession code as GES90642. The correct code is GSE90642. This has now been amended in all online versions of the Article.

  17. Analysis of Complete Nucleotide Sequences of 12 Gossypium Chloroplast Genomes: Origin and Evolution of Allotetraploids

    PubMed Central

    Xu, Qin; Xiong, Guanjun; Li, Pengbo; He, Fei; Huang, Yi; Wang, Kunbo; Li, Zhaohu; Hua, Jinping

    2012-01-01

    Background: Cotton (Gossypium spp.) is a model system for the analysis of polyploidization. Although ascertaining the donor species of allotetraploid cotton has been intensively studied, sequence comparison of Gossypium chloroplast genomes is still of interest for understanding the mechanisms underlying the evolution of Gossypium allotetraploids, although it is generally accepted that the parents were A- and D-genome containing species. Here we performed a comparative analysis of 13 Gossypium chloroplast genomes, twelve of which are presented here for the first time. Methodology/Principal Findings: The sizes of the 12 chloroplast genomes under study varied from 159,959 bp to 160,433 bp. The chromosomes were highly similar, having >98% sequence identity. They encoded the same set of 112 unique genes, which occurred in a uniform order with only slightly different boundary junctions. Divergence due to indels as well as substitutions was examined separately for genome, coding, and noncoding sequences. The genome divergence was estimated as 0.374% to 0.583% between allotetraploid species and the A-genome, and 0.159% to 0.454% within allotetraploids. Forty protein-coding genes were completely identical at the protein level, and 20 intergenic sequences were completely conserved. The 9 allotetraploids shared 5 insertions and 9 deletions in the whole genome, and 7-bp substitutions in protein-coding genes. The phylogenetic tree confirmed a close relationship between allotetraploids and the ancestor of the A-genome, and the allotetraploids were divided into four separate groups. Progenitor allotetraploid cotton originated 0.43–0.68 million years ago (MYA). Conclusion: Despite a high degree of conservation between the Gossypium chloroplast genomes, sequence variations among species could still be detected. Gossypium chloroplast genomes show a preference for 5-bp indels, and 1–3-bp indels are mainly attributed to SSR polymorphisms. This study supports the view that the common ancestor of diploid A-genome species in Gossypium is the maternal source of extant allotetraploid species and that allotetraploids have a monophyletic origin. G. hirsutum AD1 lineages have experienced more sequence variations than other allotetraploids in intergenic regions. The available complete nucleotide sequences of 12 Gossypium chloroplast genomes should facilitate studies to uncover the molecular mechanisms of compartmental co-evolution and speciation of Gossypium allotetraploids. PMID:22876273

  18. A Spectroscopically Confirmed Double Source Plane Lens System in the Hyper Suprime-Cam Subaru Strategic Program

    NASA Astrophysics Data System (ADS)

    Tanaka, Masayuki; Wong, Kenneth C.; More, Anupreeta; Dezuka, Arsha; Egami, Eiichi; Oguri, Masamune; Suyu, Sherry H.; Sonnenfeld, Alessandro; Higuchi, Ryo; Komiyama, Yutaka; Miyazaki, Satoshi; Onoue, Masafusa; Oyamada, Shuri; Utsumi, Yousuke

    2016-08-01

    We report the serendipitous discovery of HSC J142449-005322, a double source plane lens system in the Hyper Suprime-Cam Subaru Strategic Program. We dub the system Eye of Horus. The lens galaxy is a very massive early-type galaxy with a stellar mass of ~7 × 10^11 M_⊙ located at z_L = 0.795. The system exhibits two arcs/rings with clearly different colors, including several knots. We have performed spectroscopic follow-up observations of the system with FIRE on Magellan. The outer ring is confirmed at z_S2 = 1.988 with multiple emission lines, while the inner arc and its counterimage are confirmed at z_S1 = 1.302. This makes it the first double source plane system with spectroscopic redshifts of both sources. Interestingly, the redshifts of two of the knots embedded in the outer ring are found to be offset by Δz = 0.002 from the other knots, suggesting that the outer ring consists of at least two distinct components in the source plane. We perform lens modeling with two independent codes and successfully reproduce the main features of the system. However, two of the lensed sources, separated by ~0.7 arcsec, cannot be reproduced by a smooth potential, and the addition of substructure to the lens potential is required to reproduce them. Higher-resolution imaging of the system will help decipher the origin of this lensing feature and potentially detect the substructure.

  19. Safe, Multiphase Bounds Check Elimination in Java

    DTIC Science & Technology

    2010-01-28

    production of mobile code from source code, JIT compilation in the virtual machine, and application code execution. The code producer uses...invariants, and inequality constraint analysis) to identify and prove redundancy of bounds checks. During class-loading and JIT compilation, the virtual...unoptimized code if the speculated invariants do not hold. The combined effect of the multiple phases is to shift the effort associated with bounds

  20. Cross-Layer Design for Video Transmission over Wireless Rician Slow-Fading Channels Using an Adaptive Multiresolution Modulation and Coding Scheme

    NASA Astrophysics Data System (ADS)

    Pei, Yong; Modestino, James W.

    2007-12-01

    We describe a multilayered video transport scheme for wireless channels capable of adapting to channel conditions in order to maximize end-to-end quality of service (QoS). This scheme combines a scalable H.263+ video source coder with unequal error protection (UEP) across layers. The UEP is achieved by employing different channel codes together with a multiresolution modulation approach to transport the different priority layers. Adaptivity to channel conditions is provided through a joint source-channel coding (JSCC) approach which attempts to jointly optimize the source and channel coding rates together with the modulation parameters to obtain the maximum achievable end-to-end QoS for the prevailing channel conditions. In this work, we model the wireless links as slow-fading Rician channels whose conditions can be described in terms of the channel signal-to-noise ratio (SNR) and the ratio of specular-to-diffuse energy. The multiresolution modulation/coding scheme consists of binary rate-compatible punctured convolutional (RCPC) codes used together with nonuniform phase-shift keyed (PSK) signaling constellations. Results indicate that this adaptive JSCC scheme employing scalable video encoding together with a multiresolution modulation/coding approach leads to significant improvements in delivered video quality for specified channel conditions. In particular, the approach results in considerably improved graceful degradation properties for decreasing channel SNR.
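
    A minimal sketch of the joint optimization idea, assuming hypothetical distortion and error-rate models: for a fixed channel SNR, search over channel-code-rate/constellation combinations and keep the one with the best expected end-to-end quality. The candidate rates and the two model functions are placeholders, not the paper's models.

      import math
      from itertools import product

      def jscc_search(total_rate, snr_db,
                      code_rates=(1/3, 1/2, 2/3, 3/4),
                      mod_bits=(1, 2, 3)):          # BPSK, QPSK, 8-PSK (bits per symbol)
          """Pick (code rate, modulation) maximizing a toy end-to-end quality measure:
          source quality grows with the usable source rate and is discounted by a
          crude error probability that worsens with spectral efficiency."""
          best = None
          snr = 10 ** (snr_db / 10)
          for r_c, m in product(code_rates, mod_bits):
              source_rate = total_rate * r_c * m             # bits left for the video coder
              p_err = math.exp(-snr * r_c / (2 ** m))        # placeholder channel error model
              quality = (1 - p_err) * math.log2(1 + source_rate)
              if best is None or quality > best[0]:
                  best = (quality, r_c, m)
          return best

      print(jscc_search(total_rate=1.0, snr_db=8))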
