Sample records for object class codes

  1. Fundamental differences between optimization code test problems in engineering applications

    NASA Technical Reports Server (NTRS)

    Eason, E. D.

    1984-01-01

    The purpose here is to suggest that there is at least one fundamental difference between the problems used for testing optimization codes and the problems that engineers often need to solve; in particular, the level of precision that can be practically achieved in the numerical evaluation of the objective function, derivatives, and constraints. This difference affects the performance of optimization codes, as illustrated by two examples. Two classes of optimization problem were defined. Class One functions and constraints can be evaluated to a high precision that depends primarily on the word length of the computer. Class Two functions and/or constraints can only be evaluated to a moderate or a low level of precision for economic or modeling reasons, regardless of the computer word length. Optimization codes have not been adequately tested on Class Two problems. There are very few Class Two test problems in the literature, while there are literally hundreds of Class One test problems. The relative performance of two codes may be markedly different for Class One and Class Two problems. Less sophisticated direct search type codes may be less likely to be confused or to waste many function evaluations on Class Two problems. The analysis accuracy and minimization performance are related in a complex way that probably varies from code to code. On a problem where the analysis precision was varied over a range, the simple Hooke and Jeeves code was more efficient at low precision while the Powell code was more efficient at high precision.
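
    A minimal compass-search sketch in the Hooke and Jeeves family illustrates why direct search tolerates imprecise ("Class Two") evaluations: it only compares function values, so evaluation noise merely slows step contraction rather than corrupting derivative estimates. Everything below (the quadratic test function, the noise model, the tolerances) is an illustrative assumption, not taken from Eason's paper:

      // Compass search (Hooke-and-Jeeves-style exploratory moves) on a
      // noisy objective, mimicking a moderate-precision analysis code.
      #include <cmath>
      #include <cstdio>
      #include <random>
      #include <vector>

      static std::mt19937 rng(42);

      double noisy_objective(const std::vector<double>& x, double noise) {
          double f = 0.0;
          for (double xi : x) f += (xi - 1.0) * (xi - 1.0);  // smooth part
          std::normal_distribution<double> n(0.0, noise);
          return f + n(rng);                                 // evaluation error
      }

      int main() {
          std::vector<double> x = {5.0, -3.0};
          double step = 1.0, noise = 1e-2;  // try 1e-8 for "Class One" behavior
          double fbest = noisy_objective(x, noise);
          int evals = 1;
          const double dirs[] = {+1.0, -1.0};
          while (step > 1e-4 && evals < 10000) {
              bool improved = false;
              for (std::size_t i = 0; i < x.size(); ++i) {   // exploratory moves
                  for (double dir : dirs) {
                      std::vector<double> y = x;
                      y[i] += dir * step;
                      double fy = noisy_objective(y, noise);
                      ++evals;
                      if (fy < fbest) { x = y; fbest = fy; improved = true; }
                  }
              }
              if (!improved) step *= 0.5;  // contract only when no move helps
          }
          std::printf("x=(%g,%g) f=%g evals=%d\n", x[0], x[1], fbest, evals);
      }

    A gradient-based code applied to the same function must difference through the noise; once the differencing step falls below the noise level its "derivatives" are meaningless, which is one concrete mechanism behind the Class One/Class Two performance reversal described above.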

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viktor K. Decyk

    The UCLA work on this grant was to design and help implement an object-oriented version of the GTC code, which is written in Fortran90. The GTC code is the main global gyrokinetic code used in this project, and over the years multiple, incompatible versions have evolved. The reason for this effort is to allow multiple authors to work together on GTC and to simplify future enhancements to GTC. The effort was designed to proceed incrementally. Initially, an upper layer of classes (derived types and methods) was implemented which called the original GTC code 'under the hood.' The derived types pointed to data in the original GTC code, and the methods called the original GTC subroutines. The original GTC code was modified only very slightly. This allowed one to define (and refine) a set of classes which described the important features of the GTC code in a new, more abstract way, with a minimum of implementation. Furthermore, classes could be added one at a time, and at the end of each day, the code continued to work correctly. This work was done in close collaboration with Y. Nishimura from UC Irvine and Stefan Ethier from PPPL. Ten classes were ultimately defined and implemented: gyrokinetic and drift kinetic particles, scalar and vector fields, a mesh, jacobian, FLR, equilibrium, interpolation, and particle species descriptors. In the second stage of this development, some of the scaffolding was removed. The constructors in the class objects now allocated the data, and the array data in the original GTC code was removed. This isolated the components and allowed multiple instantiations of the objects to be created, in particular, multiple ion species. Again, the work was done incrementally, one class at a time, so that the code was always working properly. This work was done in close collaboration with Y. Nishimura and W. Zhang from UC Irvine and Stefan Ethier from PPPL. The third stage of this work was to integrate the capabilities of the various versions of the GTC code into one flexible and extensible version. To do this, we developed a methodology to implement Design Patterns in Fortran90. Design Patterns are abstract solutions to generic programming problems, which allow one to handle increased complexity. This work was done in collaboration with Henry Gardner, a computer scientist (and former plasma physicist) from the Australian National University. As an example, the Strategy Pattern is being used in GTC to support multiple solvers. This new code is currently being used in the study of energetic particles. A document describing the evolution of the GTC code to this new object-oriented version is available to users of GTC.
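
    The report notes that the Strategy Pattern is used to support multiple solvers. As a rough illustration (the report implements this in Fortran90; the C++ below and all of its names, SpectralSolver, IterativeSolver, Field, are assumptions for the sketch, not GTC's actual types):

      // Strategy pattern for pluggable field solvers, sketched in C++.
      #include <cstdio>
      #include <memory>
      #include <vector>

      struct FieldSolver {                         // abstract strategy
          virtual ~FieldSolver() = default;
          virtual void solve(std::vector<double>& phi) = 0;
      };

      struct SpectralSolver : FieldSolver {
          void solve(std::vector<double>& phi) override {
              std::puts("spectral solve");         // placeholder for real work
          }
      };

      struct IterativeSolver : FieldSolver {
          void solve(std::vector<double>& phi) override {
              std::puts("iterative solve");
          }
      };

      struct Field {                               // context holding a strategy
          std::unique_ptr<FieldSolver> solver;
          std::vector<double> phi;
          void advance() { solver->solve(phi); }   // delegate to current solver
      };

      int main() {
          Field f;
          f.solver = std::make_unique<SpectralSolver>();   // swap at runtime
          f.advance();
          f.solver = std::make_unique<IterativeSolver>();
          f.advance();
      }

    Because the context only sees the abstract interface, a new solver can be added without touching existing physics code, which is exactly the extensibility argument the report makes.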

  3. Interfacing with Legacy using Remote Method Invocation

    NASA Technical Reports Server (NTRS)

    Howard, Scott M.

    1998-01-01

    The assignment described was enough to make a neophyte Java developer bolt for the door: provide a remote method for use by an applet which invokes a native method that wraps a function in an existing legacy library. The purpose of the remote method is to return an instance of a class object whose contents reflect the data structure returned by the legacy function. While embroiled in implementation, I would have spent time wading through a JNI user group archive as well, but I couldn't seem to locate one. Subsequently, I decided to document my findings in order to assist others. Before we start on the class design, let's look at what the existing legacy code does. The C function to be called, Get_Legacy_Data, consists of two steps: an ASCII file is read from the local disk and its contents are parsed into a Legacy_Type structure whose address is passed as an argument by the caller. The legacy code was compiled into a shared object library, legacy.so, using the IRIX 6.2 compiler and then loaded onto the Web server, a Silicon Graphics Indy station loaded with the IRIX 6.4 operating system. As far as the class design is concerned, the first thing required is a class to act as a template for the data structure returned by the legacy function. This class, JLegacy, declares a series of public instance variables which correspond to the members of Legacy_Type and provides a parameterless constructor. This constructor is never called, not even by the native method which allocates the object for return to the remote method. Next, the remote interface declaration for the remote object must be defined. In order for JLegacyRO to implement getJLegacy, JLegacyRO must interface with the existing legacy code through a native method, getn. getn is declared in the JLegacyRO class but implemented in C, just like the legacy code. getn returns a JLegacy instance and is declared static since its implementation is the same for all instances of the JLegacyRO class.
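
    A hedged sketch of what such a native implementation can look like in JNI (written here in C++). The struct members, field names, and signatures below are assumptions for illustration; only the JNI calls themselves (FindClass, AllocObject, GetFieldID, Set*Field) are standard API. Note that AllocObject creates the instance without invoking the parameterless constructor, matching the behavior described above:

      #include <jni.h>

      extern "C" {

      typedef struct { int id; double value; } Legacy_Type;  // stand-in members
      void Get_Legacy_Data(Legacy_Type* out);                // lives in legacy.so

      JNIEXPORT jobject JNICALL
      Java_JLegacyRO_getn(JNIEnv* env, jclass) {
          Legacy_Type data;
          Get_Legacy_Data(&data);                 // call the legacy C function

          jclass jcls = env->FindClass("JLegacy");
          jobject obj = env->AllocObject(jcls);   // no constructor call

          // Copy each structure member into the matching public instance
          // variable of the JLegacy template class.
          env->SetIntField(obj, env->GetFieldID(jcls, "id", "I"), data.id);
          env->SetDoubleField(obj, env->GetFieldID(jcls, "value", "D"),
                              data.value);
          return obj;
      }

      }  // extern "C"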

  4. Label consistent K-SVD: learning a discriminative dictionary for recognition.

    PubMed

    Jiang, Zhuolin; Lin, Zhe; Davis, Larry S

    2013-11-01

    A label consistent K-SVD (LC-KSVD) algorithm to learn a discriminative dictionary for sparse coding is presented. In addition to using class labels of training data, we also associate label information with each dictionary item (columns of the dictionary matrix) to enforce discriminability in sparse codes during the dictionary learning process. More specifically, we introduce a new label consistency constraint called "discriminative sparse-code error" and combine it with the reconstruction error and the classification error to form a unified objective function. The optimal solution is efficiently obtained using the K-SVD algorithm. Our algorithm learns a single overcomplete dictionary and an optimal linear classifier jointly. An incremental dictionary learning algorithm is also presented for situations with limited memory resources. The method yields dictionaries such that feature points with the same class labels have similar sparse codes. Experimental results demonstrate that our algorithm outperforms many recently proposed sparse-coding techniques for face, action, scene, and object category recognition under the same learning conditions.
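
    In the paper's notation, the unified objective combines the three terms roughly as follows (a reconstruction from the description above: Y: training signals; D: dictionary; X: sparse codes; Q: ideal "discriminative" codes derived from labels; A: a linear transform; H: label matrix; W: linear classifier; alpha, beta: weights):

      \min_{D,W,A,X} \|Y - DX\|_F^2
        + \alpha \|Q - AX\|_F^2
        + \beta  \|H - WX\|_F^2
        \quad \text{s.t. } \forall i,\ \|x_i\|_0 \le T

    Stacking Y, Q, and H (and correspondingly D, A, and W) turns this into a single standard K-SVD problem, which is how the dictionary and classifier are learned jointly.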

  5. Color Coded Cards for Student Behavior Management in Higher Education Environments

    ERIC Educational Resources Information Center

    Alhalabi, Wadee; Alhalabi, Mobeen

    2017-01-01

    The Color Coded Cards system as a possibly effective class management tool is the focus of this research. The Color Coded Cards system involves each student being given a card with a specific color based on his or her behavior. The main objective of the research is to find out whether this system effectively improves students' behavior, thus…

  6. Status of MAPA (Modular Accelerator Physics Analysis) and the Tech-X Object-Oriented Accelerator Library

    NASA Astrophysics Data System (ADS)

    Cary, J. R.; Shasharina, S.; Bruhwiler, D. L.

    1998-04-01

    The MAPA code is a fully interactive accelerator modeling and design tool consisting of a GUI and two object-oriented C++ libraries: a general library suitable for treatment of any dynamical system, and an accelerator library including many element types plus an accelerator class. The accelerator library inherits directly from the system library, which uses hash tables to store any relevant parameters or strings. The GUI can access these hash tables in a general way, allowing the user to invoke a window displaying all relevant parameters for a particular element type or for the accelerator class, with the option to change those parameters. The system library can advance an arbitrary number of dynamical variables through an arbitrary mapping. The accelerator class inherits this capability and overloads the relevant functions to advance the phase space variables of a charged particle through a string of elements. Among other things, the GUI makes phase space plots and finds fixed points of the map. We discuss the object hierarchy of the two libraries and use of the code.
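
    A minimal sketch of the described design, with a hash table of parameters on a dynamical-system base class and an element overriding the advance map (all names here are illustrative, not taken from the MAPA sources):

      #include <cstdio>
      #include <string>
      #include <unordered_map>
      #include <vector>

      struct DynamicalSystem {
          // GUI-enumerable parameter table, as in the hash-table design above.
          std::unordered_map<std::string, double> params;
          virtual ~DynamicalSystem() = default;
          // Advance an arbitrary vector of dynamical variables through one map.
          virtual void advance(std::vector<double>& z) = 0;
      };

      struct Drift : DynamicalSystem {
          explicit Drift(double length) { params["length"] = length; }
          void advance(std::vector<double>& z) override {
              z[0] += params["length"] * z[1];  // z = (x, x'): field-free drift
          }
      };

      int main() {
          Drift d(2.0);
          std::vector<double> z = {0.001, 0.0005};  // phase-space variables
          d.advance(z);
          for (const auto& kv : d.params)           // what a generic GUI sees
              std::printf("%s = %g\n", kv.first.c_str(), kv.second);
          std::printf("x = %g, x' = %g\n", z[0], z[1]);
      }

    The point of the hash table is that the GUI needs no compile-time knowledge of any element type: it simply iterates the table to build a parameter window for whatever element the user selects.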

  7. Qualitative Differences in the Representation of Spatial Relations for Different Object Classes

    ERIC Educational Resources Information Center

    Cooper, Eric E.; Brooks, Brian E.

    2004-01-01

    Two experiments investigated whether the representations used for animal, produce, and object recognition code spatial relations in a similar manner. Experiment 1 tested the effects of planar rotation on the recognition of animals and nonanimal objects. Response times for recognizing animals followed an inverted U-shaped function, whereas those…

  8. An Exploration and Analysis of the Relationships among Object Oriented Programming, Hypermedia, and Hypertalk.

    ERIC Educational Resources Information Center

    Milet, Lynn K.; Harvey, Francis A.

    Hypermedia and object oriented programming systems (OOPs) represent examples of "open" computer environments that allow the user access to parts of the code or operating system. Both systems share fundamental intellectual concepts (objects, messages, methods, classes, and inheritance), so that an understanding of hypermedia can help in…

  9. Comparison of Evolutionary (Genetic) Algorithm and Adjoint Methods for Multi-Objective Viscous Airfoil Optimizations

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) Method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code, ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which, together with a common gridding approach, restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy, and design consistency.

  10. An object oriented extension to CLIPS

    NASA Technical Reports Server (NTRS)

    Sobkowicz, Clifford

    1990-01-01

    A software sub-system developed to augment the C Language Integrated Production System (CLIPS) with facilities for object-oriented knowledge representation is presented. Functions are provided to define classes, instantiate objects, access attributes, and assert object-related facts. This extension is implemented via the CLIPS user function interface and does not require modification of any CLIPS code. It does rely on internal CLIPS functions for memory management and symbol representation.

  11. Object-oriented approach for gas turbine engine simulation

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.; Felder, James L.

    1995-01-01

    An object-oriented gas turbine engine simulation program was developed. This program is a prototype for a more complete, commercial-grade engine performance program now being proposed as part of the Numerical Propulsion System Simulator (NPSS). This report discusses architectural issues of this complex software system and the lessons learned from developing the prototype code. The prototype code is a fully functional, general-purpose engine simulation program; however, only the component models necessary to model a transient compressor test rig have been written. The production system will be capable of steady-state and transient modeling of almost any turbine engine configuration. Chief among the architectural considerations for this code was the framework in which the various software modules will interact. These modules include the equation solver, simulation code, data model, event handler, and user interface. Also documented in this report is the component-based design of the simulation module and the inter-component communication paradigm. Object class hierarchies for some of the code modules are given.
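
    As a rough illustration of the component-based design, the sketch below shows one engine component consuming and producing flow data; the FlowStation fields and the compressor relations are textbook placeholders (ideal gas, gamma = 1.4), not the prototype's actual interfaces:

      #include <cmath>
      #include <cstdio>

      struct FlowStation { double mdot, Tt, Pt; };  // airflow, temp., pressure

      struct Component {
          virtual ~Component() = default;
          virtual void execute(const FlowStation& in, FlowStation& out) = 0;
      };

      struct Compressor : Component {
          double pressure_ratio = 20.0, efficiency = 0.88;
          void execute(const FlowStation& in, FlowStation& out) override {
              out = in;
              out.Pt = in.Pt * pressure_ratio;
              // Ideal temperature ratio scaled by adiabatic efficiency.
              double tau = std::pow(pressure_ratio, 0.4 / 1.4);
              out.Tt = in.Tt * (1.0 + (tau - 1.0) / efficiency);
          }
      };

      int main() {
          FlowStation inlet{100.0, 288.15, 101325.0}, outlet{};
          Compressor c;
          c.execute(inlet, outlet);
          std::printf("Pt = %.0f Pa, Tt = %.1f K\n", outlet.Pt, outlet.Tt);
      }

    In the full system, the equation solver module would iterate such components to close mass and energy balances across the whole engine.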

  12. A genetic scale of reading frame coding.

    PubMed

    Michel, Christian J

    2014-08-21

    The reading frame coding (RFC) of codes (sets) of trinucleotides is a genetic concept which has been largely ignored during the last 50 years. A first objective is the definition of a new and simple statistical parameter PrRFC for analysing the probability (efficiency) of reading frame coding (RFC) of any trinucleotide code. A second objective is to reveal different classes and subclasses of trinucleotide codes involved in reading frame coding: the circular codes of 20 trinucleotides and the bijective genetic codes of 20 trinucleotides coding the 20 amino acids. This approach allows us to propose a genetic scale of reading frame coding which ranges from 1/3 with the random codes (RFC probability identical in the three frames) to 1 with the comma-free circular codes (RFC probability maximal in the reading frame and null in the two shifted frames). This genetic scale shows, in particular, the reading frame coding probabilities of the 12,964,440 circular codes (PrRFC=83.2% on average), the 216 C(3) self-complementary circular codes (PrRFC=84.1% on average) including the code X identified in eukaryotic and prokaryotic genes (PrRFC=81.3%) and the 339,738,624 bijective genetic codes (PrRFC=61.5% on average) including the 52 codes without permuted trinucleotides (PrRFC=66.0% on average). Otherwise, the reading frame coding probabilities of each trinucleotide code coding an amino acid with the universal genetic code are also determined. The four amino acids Gly, Lys, Phe and Pro are coded by codes (not circular) with RFC probabilities equal to 2/3, 1/2, 1/2 and 2/3, respectively. The amino acid Leu is coded by a circular code (not comma-free) with an RFC probability equal to 18/19. The 15 other amino acids are coded by comma-free circular codes, i.e. with RFC probabilities equal to 1. The identification of coding properties in some classes of trinucleotide codes studied here may bring new insights into the origin and evolution of the genetic code. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. An Object-oriented Computer Code for Aircraft Engine Weight Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Naylor, Bret A.

    2008-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model which provides component flow data such as airflows, temperatures, pressures, etc., that are required for sizing the components and weight calculations. The tighter integration between NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case. Keywords: NASA, aircraft engine, weight, object-oriented
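
    The object-oriented structure can be pictured as components that size themselves from cycle data supplied by the NPSS thermodynamic model. The scaling law below is a made-up placeholder purely to show the shape of the interface; it is not WATE's actual correlation, and the airflow number is illustrative:

      #include <cmath>
      #include <cstdio>

      struct CycleData { double mdot; };   // airflow [kg/s] from the cycle model

      struct WeightComponent {             // one engine component
          virtual ~WeightComponent() = default;
          virtual double weight(const CycleData& c) const = 0;   // [kg]
      };

      struct FanWeight : WeightComponent {
          double weight(const CycleData& c) const override {
              // Placeholder scaling: made-up coefficient and exponent.
              return 0.5 * std::pow(c.mdot, 1.2);
          }
      };

      int main() {
          CycleData cycle{1350.0};         // illustrative large-turbofan airflow
          FanWeight fan;
          std::printf("fan weight ~ %.0f kg\n", fan.weight(cycle));
      }

    Each component class owns its own sizing and weight logic, so supporting a new component type means adding a subclass rather than editing a monolithic FORTRAN routine.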

  14. A color-coded vision scheme for robotics

    NASA Technical Reports Server (NTRS)

    Johnson, Kelley Tina

    1991-01-01

    Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.

  15. An Object-Oriented Computer Code for Aircraft Engine Weight Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Naylor, Bret A.

    2009-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model which provides component flow data such as airflows, temperatures, pressures, etc., that are required for sizing the components and weight calculations. The tighter integration between NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case.

  16. Automatic Synthesis of UML Designs from Requirements in an Iterative Process

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Whittle, Jon; Clancy, Daniel (Technical Monitor)

    2001-01-01

    The Unified Modeling Language (UML) is gaining wide popularity for the design of object-oriented systems. UML combines various object-oriented graphical design notations under one common framework. A major factor for the broad acceptance of UML is that it can be conveniently used in a highly iterative, Use Case (or scenario-based) process (although the process is not a part of UML). Here, the (pre-) requirements for the software are specified rather informally as Use Cases and a set of scenarios. A scenario can be seen as an individual trace of a software artifact. Besides first sketches of a class diagram to illustrate the static system breakdown, scenarios are a favorite way of communication with the customer, because scenarios describe concrete interactions between entities and are thus easy to understand. Scenarios with a high level of detail are often expressed as sequence diagrams. Later in the design and implementation stage (elaboration and implementation phases), a design of the system's behavior is often developed as a set of statecharts. From there (and the full-fledged class diagram), actual code development is started. Current commercial UML tools support this phase by providing code generators for class diagrams and statecharts. In practice, it can be observed that the transition from requirements to design to code is a highly iterative process. In this talk, a set of algorithms is presented which perform reasonable synthesis and transformations between different UML notations (sequence diagrams, Object Constraint Language (OCL) constraints, statecharts). More specifically, we will discuss the following transformations: Statechart synthesis, introduction of hierarchy, consistency of modifications, and "design-debugging".

  17. Deep Hashing for Scalable Image Search.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2017-05-01

    In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary codes learning methods, which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the non-linear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label SDH by including a discriminative term into the objective function of DH, which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes with the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search data sets show that our proposed methods achieve very competitive results with the state of the art.
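
    A plausible formalization of the three top-layer constraints (a hedged reconstruction from the description above, not necessarily the paper's exact equation; H: real-valued top-layer outputs for N samples, B = sgn(H): binary codes, W: top-layer projection, lambda_1, lambda_2: trade-off weights):

      \min \; \tfrac{1}{2}\|B - H\|_F^2
        \;-\; \tfrac{\lambda_1}{2N}\,\operatorname{tr}\!\big(H H^{\top}\big)
        \;+\; \tfrac{\lambda_2}{2}\,\big\|W W^{\top} - I\big\|_F^2,
      \qquad B = \operatorname{sgn}(H)

    The first term is the quantization loss between real-valued and binary codes, the second pushes each bit toward a balanced, high-variance distribution, and the third decorrelates the bits; SDH then adds a discriminative term on top of this objective.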

  18. Faunus: An object oriented framework for molecular simulation

    PubMed Central

    Lund, Mikael; Trulsson, Martin; Persson, Björn

    2008-01-01

    Background We present a C++ class library for Monte Carlo simulation of molecular systems, including proteins in solution. The design is generic and highly modular, enabling multiple developers to easily implement additional features. The statistical mechanical methods are documented by extensive use of code comments that are subsequently collected to automatically build a web-based manual. Results We show how an object oriented design can be used to create an intuitively appealing coding framework for molecular simulation. This is exemplified in a minimalistic C++ program that can calculate protein protonation states. We further discuss performance issues related to high level coding abstraction. Conclusion C++ and the Standard Template Library (STL) provide a high-performance platform for generic molecular modeling. Automatic generation of code documentation from inline comments has proven particularly useful in that no separate manual needs to be maintained. PMID:18241331
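
    The object-oriented pattern the paper advocates (exchangeable Hamiltonians behind an abstract interface, driven by a generic Metropolis loop) can be sketched as follows; this is a generic illustration, not the Faunus API:

      #include <cmath>
      #include <cstdio>
      #include <random>

      struct Particle { double x = 0.0; };

      struct EnergyModel {                       // exchangeable Hamiltonian
          virtual ~EnergyModel() = default;
          virtual double energy(const Particle& p) const = 0;
      };

      struct Harmonic : EnergyModel {            // U(x) = x^2 / 2
          double energy(const Particle& p) const override {
              return 0.5 * p.x * p.x;
          }
      };

      int main() {
          std::mt19937 rng(1);
          std::uniform_real_distribution<double> u(0.0, 1.0);
          Particle p;
          Harmonic H;
          const double kT = 1.0;
          double sum_x2 = 0.0;
          const int steps = 100000;
          for (int i = 0; i < steps; ++i) {
              Particle trial = p;
              trial.x += u(rng) - 0.5;                       // random displacement
              double dU = H.energy(trial) - H.energy(p);
              if (dU <= 0.0 || u(rng) < std::exp(-dU / kT))  // Metropolis rule
                  p = trial;
              sum_x2 += p.x * p.x;
          }
          std::printf("<x^2> = %.3f (expect ~ kT = 1)\n", sum_x2 / steps);
      }

    Swapping in a different EnergyModel subclass changes the physics without touching the sampling loop, which is the modularity argument made above.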

  19. An Object-Oriented Network-Centric Software Architecture for Physical Computing

    NASA Astrophysics Data System (ADS)

    Palmer, Richard

    1997-08-01

    Recent developments in object-oriented computer languages and infrastructure such as the Internet, Web browsers, and the like provide an opportunity to define a more productive computational environment for scientific programming that is based more closely on the underlying mathematics describing physics than traditional programming languages such as FORTRAN or C++. In this talk I describe an object-oriented software architecture for representing physical problems that includes classes for such common mathematical objects as geometry, boundary conditions, partial differential and integral equations, discretization and numerical solution methods, etc. In practice, a scientific program written using this architecture looks remarkably like the mathematics used to understand the problem, is typically an order of magnitude smaller than traditional FORTRAN or C++ codes, and hence easier to understand, debug, describe, etc. All objects in this architecture are "network-enabled," which means that components of a software solution to a physical problem can be transparently loaded from anywhere on the Internet or other global network. The architecture is expressed as an "API," or application programmer's interface specification, with reference embeddings in Java, Python, and C++. A C++ class library for an early version of this API has been implemented for machines ranging from PCs to the IBM SP2, meaning that identical codes run on all architectures.

  20. The ENSDF Java Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonzogni, A.A.

    2005-05-24

    A package of computer codes has been developed to process and display nuclear structure and decay data stored in the ENSDF (Evaluated Nuclear Structure Data File) library. The codes were written in an object-oriented fashion using the Java language. This allows for an easy implementation across multiple platforms as well as deployment on web pages. The structure of the different Java classes that make up the package is discussed, as well as several different implementations.

  1. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.

  2. Microsoft C#.NET program and electromagnetic depth sounding for large loop source

    NASA Astrophysics Data System (ADS)

    Prabhakar Rao, K.; Ashok Babu, G.

    2009-07-01

    A program, in the C# (C Sharp) language with the Microsoft .NET Framework, is developed to compute the normalized vertical magnetic field of a horizontal rectangular loop source placed on the surface of an n-layered earth. The field can be calculated either inside or outside the loop. Five C# classes, with member functions in each class, are designed to compute the kernel, the Hankel transform integral, coefficients for cubic spline interpolation between computed values, and the normalized vertical magnetic field. The program computes the vertical magnetic field in the frequency domain using the integral expressions evaluated by a combination of straightforward numerical integration and the digital filter technique. The code utilizes different object-oriented programming (OOP) features. It finally computes the amplitude and phase of the normalized vertical magnetic field. The computed results are presented for geometric and parametric soundings. The code was developed in Microsoft Visual Studio .NET 2003 and uses various system class libraries.

  3. PMD compensation in fiber-optic communication systems with direct detection using LDPC-coded OFDM.

    PubMed

    Djordjevic, Ivan B

    2007-04-02

    The possibility of polarization-mode dispersion (PMD) compensation in fiber-optic communication systems with direct detection using a simple channel estimation technique and low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is demonstrated. It is shown that even for differential group delay (DGD) of 4/BW (BW is the OFDM signal bandwidth), the degradation due to the first-order PMD can be completely compensated for. Two classes of LDPC codes designed based on two different combinatorial objects (difference systems and product of combinatorial designs) suitable for use in PMD compensation are introduced.

  4. Vertical Object Layout and Compression for Fixed Heaps

    NASA Astrophysics Data System (ADS)

    Titzer, Ben L.; Palsberg, Jens

    Research into embedded sensor networks has placed increased focus on the problem of developing reliable and flexible software for microcontroller-class devices. Languages such as nesC [10] and Virgil [20] have brought higher-level programming idioms to this lowest layer of software, thereby adding expressiveness. Both languages are marked by the absence of dynamic memory allocation, which removes the need for a runtime system to manage memory. While nesC offers code modules with statically allocated fields, arrays and structs, Virgil allows the application to allocate and initialize arbitrary objects during compilation, producing a fixed object heap for runtime. This paper explores techniques for compressing fixed object heaps with the goal of reducing the RAM footprint of a program. We explore table-based compression and introduce a novel form of object layout called vertical object layout. We provide experimental results that measure the impact on RAM size, code size, and execution time for a set of Virgil programs. Our results show that compressed vertical layout has better execution time and code size than table-based compression while achieving more than 20% heap reduction on 6 of 12 benchmark programs and 2-17% heap reduction on the remaining 6. We also present a formalization of vertical object layout and prove tight relationships between three styles of object layout.
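
    The idea of vertical layout can be sketched concretely: rather than storing each object's fields together, each field becomes a column indexed by object id, so every column can choose its own compact encoding. This is an illustrative reduction, not Virgil's actual implementation:

      #include <cstdint>
      #include <cstdio>

      // Horizontal (conventional) layout: one padded record per object.
      struct SensorH { uint32_t id; uint8_t state; int16_t reading; };

      // Vertical layout for a fixed heap of N objects: one array per field.
      constexpr int N = 64;
      struct SensorTable {
          uint8_t state[N];     // 1 byte per object, no struct padding
          int16_t reading[N];
          // The 'id' column vanishes: the array index *is* the object id.
      };

      int main() {
          SensorTable t{};
          int obj = 7;                   // an object reference is just an index
          t.state[obj] = 1;
          t.reading[obj] = -42;
          std::printf("horizontal: %zu bytes/object\n", sizeof(SensorH));
          std::printf("vertical:   %zu bytes/object\n",
                      sizeof t.state[0] + sizeof t.reading[0]);
      }

    On a microcontroller-class device with a fixed heap, this per-column representation is what enables the table-based and narrow-integer compression the paper measures.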

  5. Teaching Quality Object-Oriented Programming

    ERIC Educational Resources Information Center

    Feldman, Yishai A.

    2005-01-01

    Computer science students need to learn how to write high-quality software. An important methodology for achieving quality is design-by-contract, in which code is developed together with its specification, which is given as class invariants and method pre- and postconditions. This paper describes practical experience in teaching design-by-contract…

  6. Communication Civility Codes: Positive Communication through the Students' Eyes

    ERIC Educational Resources Information Center

    Pawlowski, Donna R.

    2017-01-01

    Courses: Presentational courses such as Public Speaking, Interviewing, Business and Professional, Persuasion, Interpersonal; any course where civility may be promoted in the classroom. Objectives: At the end of this single-class activity, students will have an understanding of civility in order to: (1) identify civility and consequences of…

  7. Use of Systematic Methods to Improve Disease Identification in Administrative Data: The Case of Severe Sepsis.

    PubMed

    Shahraz, Saeid; Lagu, Tara; Ritter, Grant A; Liu, Xiadong; Tompkins, Christopher

    2017-03-01

    Selection of International Classification of Diseases (ICD)-based coded information for complex conditions such as severe sepsis is a subjective process and the results are sensitive to the codes selected. We use an innovative data exploration method to guide ICD-based case selection for severe sepsis. Using the Nationwide Inpatient Sample, we applied Latent Class Analysis (LCA) to determine if medical coders follow any uniform and sensible coding for observations with severe sepsis. We examined whether ICD-9 codes specific to sepsis (038.xx for septicemia, a subset of 995.9 codes representing Systemic Inflammatory Response syndrome, and 785.52 for septic shock) could all be members of the same latent class. Hospitalizations coded with sepsis-specific codes could be assigned to a latent class of their own. This class constituted 22.8% of all potential sepsis observations. The probability of an observation with any sepsis-specific codes being assigned to the residual class was near 0. The chance of an observation in the residual class having a sepsis-specific code as the principal diagnosis was close to 0. Validity of sepsis class assignment is supported by empirical results, which indicated that in-hospital deaths in the sepsis-specific class were around 4 times as likely as that in the residual class. The conventional methods of defining severe sepsis cases in observational data substantially misclassify sepsis cases. We suggest a methodology that helps reliable selection of ICD codes for conditions that require complex coding.
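
    For reference, the latent class model treats the observed coding indicators on each hospitalization (for example, presence of an 038.xx code, of the 995.9x SIRS subset, or of 785.52) as conditionally independent given an unobserved class:

      P(Y_1 = y_1, \ldots, Y_J = y_J) \;=\; \sum_{c=1}^{C} \pi_c \prod_{j=1}^{J} P\!\left(Y_j = y_j \mid c\right)

    The class weights pi_c and the conditional response probabilities are estimated by maximum likelihood, and each hospitalization is assigned to its highest-posterior class; the sepsis-specific class and the residual class discussed above are obtained in this way.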

  8. A Technique for Removing an Important Class of Trojan Horses from High-Order Languages

    DTIC Science & Technology

    1988-01-01

    Ken Thompson described a sophisticated Trojan horse attack on a compiler, one that is undetectable by any search of the compiler source code. The object of the compiler Trojan horse is to modify the semantics of the high order language in a way that breaks the security of a trusted system generated…

  9. Shape Similarity, Better than Semantic Membership, Accounts for the Structure of Visual Object Representations in a Population of Monkey Inferotemporal Neurons

    PubMed Central

    DiCarlo, James J.; Zecchina, Riccardo; Zoccolan, Davide

    2013-01-01

    The anterior inferotemporal cortex (IT) is the highest stage along the hierarchy of visual areas that, in primates, processes visual objects. Although several lines of evidence suggest that IT primarily represents visual shape information, some recent studies have argued that neuronal ensembles in IT code the semantic membership of visual objects (i.e., represent conceptual classes such as animate and inanimate objects). In this study, we investigated to what extent semantic, rather than purely visual information, is represented in IT by performing a multivariate analysis of IT responses to a set of visual objects. By relying on a variety of machine-learning approaches (including a cutting-edge clustering algorithm that has been recently developed in the domain of statistical physics), we found that, in most instances, IT representation of visual objects is accounted for by their similarity at the level of shape or, more surprisingly, low-level visual properties. Only in a few cases did we observe IT representations of semantic classes that were not explainable by the visual similarity of their members. Overall, these findings reassert the primary function of IT as a conveyor of explicit visual shape information, and reveal that low-level visual properties are represented in IT to a greater extent than previously appreciated. In addition, our work demonstrates how combining a variety of state-of-the-art multivariate approaches, and carefully estimating the contribution of shape similarity to the representation of object categories, can substantially advance our understanding of neuronal coding of visual objects in cortex. PMID:23950700

  10. Thyra Abstract Interface Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe A.

    2005-09-01

    Thyra primarily defines a set of abstract C++ class interfaces needed for the development of abstract numerical algorithms (ANAs) such as iterative linear solvers and transient solvers, all the way up to optimization. At the foundation of these interfaces are abstract C++ classes for vectors, vector spaces, linear operators, and multi-vectors. Also included in the Thyra package is C++ code for creating concrete vector, vector space, linear operator, and multi-vector subclasses, as well as other utilities to aid in the development of ANAs. Currently, very general and efficient concrete subclass implementations exist for serial and SPMD in-core vectors and multi-vectors. Code also currently exists for testing objects and providing composite objects such as product vectors.
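
    The flavor of such abstract numerical algorithm (ANA) interfaces can be sketched briefly; the class and method names below are simplified stand-ins, not the actual Thyra classes:

      #include <cmath>
      #include <cstdio>
      #include <vector>

      struct Vec {                                 // abstract vector interface
          virtual ~Vec() = default;
          virtual double dot(const Vec& x) const = 0;
          virtual void axpy(double a, const Vec& x) = 0;   // this += a*x
      };

      struct SerialVec : Vec {                     // concrete in-core subclass
          std::vector<double> v;
          explicit SerialVec(std::vector<double> d) : v(std::move(d)) {}
          double dot(const Vec& x) const override {
              const auto& s = static_cast<const SerialVec&>(x);
              double r = 0.0;
              for (std::size_t i = 0; i < v.size(); ++i) r += v[i] * s.v[i];
              return r;
          }
          void axpy(double a, const Vec& x) override {
              const auto& s = static_cast<const SerialVec&>(x);
              for (std::size_t i = 0; i < v.size(); ++i) v[i] += a * s.v[i];
          }
      };

      int main() {
          SerialVec x({1, 2, 3}), y({2, 4, 6});
          y.axpy(-2.0, x);                         // y -= 2x, so y == 0
          std::printf("residual norm = %g\n", std::sqrt(y.dot(y)));
      }

    An iterative solver written only against Vec (and an analogous abstract linear-operator interface) runs unchanged whether the concrete subclass is serial, as here, or SPMD-distributed.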

  11. Protograph LDPC Codes Over Burst Erasure Channels

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes with an iterative decoding threshold that approaches the capacity of binary erasure channels. The other class is designed for short block sizes based on maximizing the minimum stopping set size. For high code rates and short blocks the second class outperforms the first class.

  12. Gamma-Ray Imaging Probes.

    NASA Astrophysics Data System (ADS)

    Wild, Walter James

    1988-12-01

    External nuclear medicine diagnostic imaging of early primary and metastatic lung cancer tumors is difficult due to the poor sensitivity and resolution of existing gamma cameras. Nonimaging counting detectors used for internal tumor detection give ambiguous results because distant background variations are difficult to discriminate from neighboring tumor sites. This suggests that an internal imaging nuclear medicine probe, particularly an esophageal probe, may be advantageously used to detect small tumors because of the ability to discriminate against background variations and the capability to get close to sites neighboring the esophagus. The design, theory of operation, preliminary bench tests, characterization of noise behavior and optimization of such an imaging probe is the central theme of this work. The central concept lies in the representation of the aperture shell by a sequence of binary digits. This, coupled with the mode of operation which is data encoding within an axial slice of space, leads to the fundamental imaging equation in which the coding operation is conveniently described by a circulant matrix operator. The coding/decoding process is a classic coded-aperture problem, and various estimators to achieve decoding are discussed. Some estimators require a priori information about the object (or object class) being imaged; the only unbiased estimator that does not impose this requirement is the simple inverse-matrix operator. The effects of noise on the estimate (or reconstruction) is discussed for general noise models and various codes/decoding operators. The choice of an optimal aperture for detector count times of clinical relevance is examined using a statistical class-separability formalism.
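
    The fundamental imaging equation referred to above can be written compactly as (g: recorded counts, f: source activity within the axial slice, H: the circulant coding matrix generated by the binary aperture sequence, n: noise):

      \mathbf{g} = \mathbf{H}\,\mathbf{f} + \mathbf{n}, \qquad \hat{\mathbf{f}} = \mathbf{H}^{-1}\,\mathbf{g}

    Each row of H is a cyclic shift of the aperture's binary code, and the inverse-matrix estimator is, as noted, the only unbiased decoding that requires no prior information about the object class.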

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spires, S.

    This code provides an application programming interface to the Macintosh OSX Carbon Databrowser from Macintosh Common Lisp. The Databrowser API is made available to Lisp via high-level native CLOS classes and methods, obviating the need to write low-level Carbon code. This code is primarily ‘glue’ in that its job is to provide an interface between two extant software tools: Macintosh Common Lisp and the OSX Databrowser, both of which are COTS products from private vendors. The Databrowser is an extremely useful user interface widget that is provided with Apple’s OSX (and to some extent, OS9) operating systems. One Apple-sanctioned method for using the Databrowser is via an API called Carbon, which is designed for C and C++ programmers. We have translated the low-level Carbon programming interface to the Databrowser into high-level object-oriented Common Lisp calls, functions, methods, and classes to enable MCL programmers to more readily take advantage of the Databrowser from Lisp programs.

  14. Benchmark Problems of the Geothermal Technologies Office Code Comparison Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark D.; Podgorney, Robert; Kelkar, Sharad M.

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas some others were designed for different applications but can simulate processes similar to those in EGS. Solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research, stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems were designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners.

  15. Applying data fusion techniques for benthic habitat mapping and monitoring in a coral reef ecosystem

    NASA Astrophysics Data System (ADS)

    Zhang, Caiyun

    2015-06-01

    Accurate mapping and effective monitoring of benthic habitat in the Florida Keys are critical in developing management strategies for this valuable coral reef ecosystem. For this study, a framework was designed for automated benthic habitat mapping by combining multiple data sources (hyperspectral, aerial photography, and bathymetry data) and four contemporary imagery processing techniques (data fusion, Object-based Image Analysis (OBIA), machine learning, and ensemble analysis). In the framework, 1-m digital aerial photograph was first merged with 17-m hyperspectral imagery and 10-m bathymetry data using a pixel/feature-level fusion strategy. The fused dataset was then preclassified by three machine learning algorithms (Random Forest, Support Vector Machines, and k-Nearest Neighbor). Final object-based habitat maps were produced through ensemble analysis of outcomes from three classifiers. The framework was tested for classifying a group-level (3-class) and code-level (9-class) habitats in a portion of the Florida Keys. Informative and accurate habitat maps were achieved with an overall accuracy of 88.5% and 83.5% for the group-level and code-level classifications, respectively.
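
    The ensemble step can be illustrated with a simple majority vote over the three classifiers' per-object predictions (purely a sketch of the described framework; the study's actual ensemble rule may differ in detail):

      #include <array>
      #include <cstdio>
      #include <map>

      // Majority vote over per-object habitat labels from RF, SVM, and kNN.
      int majority_vote(const std::array<int, 3>& preds) {
          std::map<int, int> counts;
          for (int p : preds) ++counts[p];
          int best = -1, bestc = 0;
          for (const auto& kv : counts)    // ties resolve to the smallest label
              if (kv.second > bestc) { best = kv.first; bestc = kv.second; }
          return best;
      }

      int main() {
          std::array<int, 3> preds = {2, 2, 5};  // votes for one image object
          std::printf("habitat class = %d\n", majority_vote(preds));
      }

    Running this per image object yields the final object-based habitat map whose accuracies are reported above.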

  16. Advanced CLIPS capabilities

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1991-01-01

    The C Language Integrated Production System (CLIPS) is a forward chaining rule based language developed by NASA. CLIPS was designed specifically to provide high portability, low cost, and easy integration with external systems. The current release of CLIPS, version 4.3, is being used by over 2500 users throughout the public and private community. The primary addition to the next release of CLIPS, version 5.0, will be the CLIPS Object Oriented Language (COOL). The major capabilities of COOL are: class definition with multiple inheritance and no restrictions on the number, types, or cardinality of slots; message passing which allows procedural code bundled with an object to be executed; and query functions which allow groups of instances to be examined and manipulated. In addition to COOL, numerous other enhancements were added to CLIPS including: generic functions (which allow different pieces of procedural code to be executed depending upon the types or classes of the arguments); integer and double precision data type support; multiple conflict resolution strategies; global variables; logical dependencies; type checking on facts; full ANSI compiler support; and incremental reset for rules.

  17. HackaMol: An Object-Oriented Modern Perl Library for Molecular Hacking on Multiple Scales

    DOE PAGES

    Riccardi, Demian M.; Parks, Jerry M.; Johs, Alexander; ...

    2015-03-20

    HackaMol is an open source, object-oriented toolkit written in Modern Perl that organizes atoms within molecules and provides chemically intuitive attributes and methods. The library consists of two components: HackaMol, the core that contains classes for storing and manipulating molecular information, and HackaMol::X, the extensions that use the core. We tested the core; it is well-documented and easy to install across computational platforms. Our goal for the extensions is to provide a more flexible space for researchers to develop and share new methods. In this application note, we provide a description of the core classes and two extensions: HackaMol::X::Calculator, an abstract calculator that uses code references to generalize interfaces with external programs, and HackaMol::X::Vina, a structured class that provides an interface with the AutoDock Vina docking program.

  18. HackaMol: An Object-Oriented Modern Perl Library for Molecular Hacking on Multiple Scales.

    PubMed

    Riccardi, Demian; Parks, Jerry M; Johs, Alexander; Smith, Jeremy C

    2015-04-27

    HackaMol is an open source, object-oriented toolkit written in Modern Perl that organizes atoms within molecules and provides chemically intuitive attributes and methods. The library consists of two components: HackaMol, the core that contains classes for storing and manipulating molecular information, and HackaMol::X, the extensions that use the core. The core is well-tested, well-documented, and easy to install across computational platforms. The goal of the extensions is to provide a more flexible space for researchers to develop and share new methods. In this application note, we provide a description of the core classes and two extensions: HackaMol::X::Calculator, an abstract calculator that uses code references to generalize interfaces with external programs, and HackaMol::X::Vina, a structured class that provides an interface with the AutoDock Vina docking program.

  19. HackaMol: An Object-Oriented Modern Perl Library for Molecular Hacking on Multiple Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riccardi, Demian M.; Parks, Jerry M.; Johs, Alexander

    HackaMol is an open source, object-oriented toolkit written in Modern Perl that organizes atoms within molecules and provides chemically intuitive attributes and methods. The library consists of two components: HackaMol, the core that contains classes for storing and manipulating molecular information, and HackaMol::X, the extensions that use the core. We tested the core; it is well-documented and easy to install across computational platforms. Our goal for the extensions is to provide a more flexible space for researchers to develop and share new methods. In this application note, we provide a description of the core classes and two extensions: HackaMol::X::Calculator, an abstract calculator that uses code references to generalize interfaces with external programs, and HackaMol::X::Vina, a structured class that provides an interface with the AutoDock Vina docking program.

  20. GenASiS Basics: Object-oriented utilitarian functionality for large-scale physics simulations

    DOE PAGES

    Cardall, Christian Y.; Budiardja, Reuben D.

    2015-06-11

    Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual 'unit test' programs and larger example problems demonstrating their use. Lastly, these classes compose the Basics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.

  1. Innovative cardiopulmonary resuscitation and automated external defibrillator programs in schools: Results from the Student Program for Olympic Resuscitation Training in Schools (SPORTS) study.

    PubMed

    Vetter, Victoria L; Haley, Danielle M; Dugan, Noreen P; Iyer, V Ramesh; Shults, Justine

    2016-07-01

    Bystander cardiopulmonary resuscitation (CPR) rates are low. Our study objective was to encourage Philadelphia high school students to develop CPR/AED (automated external defibrillator) training programs and to assess their efficacy. The focus was on developing innovative ways to learn the skills of CPR/AED use, increasing willingness to respond in an emergency, and retention of effective psychomotor resuscitation skills. Health education classes in 15 Philadelphia School District high schools were selected, with one Control and one Study Class per school. Both completed CPR/AED pre- and post-tests to assess cognitive knowledge and psychomotor skills. After pre-tests, both were taught CPR skills and AED use by their health teacher. Study Classes developed innovative programs to learn, teach, and retain CPR/AED skills. The study culminated with Study Classes competing in multiple CPR/AED skills events at the CPR/AED Olympic event. Outcomes included post-tests, Mock Code, and presentation scores. All students' cognitive and psychomotor skills improved with standard classroom education (p<0.001). Competition with other schools at the CPR/AED Olympics and the development of their own student-directed education programs resulted in remarkable retention of psychomotor skill scores in the Study Class (88%) vs the Control Class (79%) (p<0.001). Olympic participants averaged 93.1% on the Mock Code with 10 of 12 schools ≥94%. Students who developed creative and novel methods of teaching and learning resuscitation skills showed outstanding application of these skills in a Mock Code with remarkable psychomotor skill retention, potentially empowering a new generation of effectively trained CPR bystanders. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Teuchos C++ memory management classes, idioms, and related topics, the complete reference : a comprehensive strategy for safe and efficient memory management in C++ for high performance computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe Ainsworth

    2010-05-01

    The ubiquitous use of raw pointers in higher-level code is the primary cause of all memory usage problems and memory leaks in C++ programs. This paper describes what might be considered a radical approach to the problem which is to encapsulate the use of all raw pointers and all raw calls to new and delete in higher-level C++ code. Instead, a set of cooperating template classes developed in the Trilinos package Teuchos are used to encapsulate every use of raw C++ pointers in every use case where it appears in high-level code. Included in the set of memory management classesmore » is the typical reference-counted smart pointer class similar to boost::shared ptr (and therefore C++0x std::shared ptr). However, what is missing in boost and the new standard library are non-reference counted classes for remaining use cases where raw C++ pointers would need to be used. These classes have a debug build mode where nearly all programmer errors are caught and gracefully reported at runtime. The default optimized build mode strips all runtime checks and allows the code to perform as efficiently as raw C++ pointers with reasonable usage. Also included is a novel approach for dealing with the circular references problem that imparts little extra overhead and is almost completely invisible to most of the code (unlike the boost and therefore C++0x approach). Rather than being a radical approach, encapsulating all raw C++ pointers is simply the logical progression of a trend in the C++ development and standards community that started with std::auto ptr and is continued (but not finished) with std::shared ptr in C++0x. Using the Teuchos reference-counted memory management classes allows one to remove unnecessary constraints in the use of objects by removing arbitrary lifetime ordering constraints which are a type of unnecessary coupling [23]. The code one writes with these classes will be more likely to be correct on first writing, will be less likely to contain silent (but deadly) memory usage errors, and will be much more robust to later refactoring and maintenance. The level of debug-mode runtime checking provided by the Teuchos memory management classes is stronger in many respects than what is provided by memory checking tools like Valgrind and Purify while being much less expensive. However, tools like Valgrind and Purify perform a number of types of checks (like usage of uninitialized memory) that makes these tools very valuable and therefore complement the Teuchos memory management debug-mode runtime checking. The Teuchos memory management classes and idioms largely address the technical issues in resolving the fragile built-in C++ memory management model (with the exception of circular references which has no easy solution but can be managed as discussed). All that remains is to teach these classes and idioms and expand their usage in C++ codes. The long-term viability of C++ as a usable and productive language depends on it. Otherwise, if C++ is no safer than C, then is the greater complexity of C++ worth what one gets as extra features? Given that C is smaller and easier to learn than C++ and since most programmers don't know object-orientation (or templates or X, Y, and Z features of C++) all that well anyway, then what really are most programmers getting extra out of C++ that would outweigh the extra complexity of C++ over C? 
C++ zealots will argue this point, but the reality is that the popularity of C++ has peaked and is declining, while the popularity of C has remained fairly stable over the last decade. Idioms like those advocated in this paper can help to avert this trend, but it will require wide community buy-in and a change in the way C++ is taught in order to have the greatest impact. To make these programs more secure, compiler vendors or static analysis tools (e.g., Klocwork) could implement a preprocessor-like language similar to OpenMP that would allow the programmer to declare (in comments) that certain blocks of code should be "pointer-free" or allow smaller blocks to be "pointers allowed". This would significantly improve the robustness of code that uses the memory management classes described here.
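
    As a minimal, illustrative sketch of the encapsulation idiom described above (using the standard-library analogues std::shared_ptr and std::weak_ptr that the abstract compares against, not the Teuchos classes themselves), the circular-reference problem and its non-owning back-reference solution look like this:

        #include <iostream>
        #include <memory>

        // A parent/child pair that would leak under naive shared-pointer use:
        // the cycle parent -> child -> parent keeps both reference counts
        // above zero forever. A weak (non-owning) back-reference breaks the
        // cycle, the standard-library counterpart of the circular-reference
        // handling the paper describes.
        struct Child;

        struct Parent {
          std::shared_ptr<Child> child;
          ~Parent() { std::cout << "Parent destroyed\n"; }
        };

        struct Child {
          std::weak_ptr<Parent> parent;  // does not extend the parent's lifetime
          ~Child() { std::cout << "Child destroyed\n"; }
        };

        int main() {
          auto p = std::make_shared<Parent>();
          p->child = std::make_shared<Child>();
          p->child->parent = p;  // back-reference without ownership

          if (auto locked = p->child->parent.lock()) {
            std::cout << "parent still alive\n";  // safe, checked access
          }
          return 0;  // both objects destroyed; no raw new/delete anywhere
        }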

  3. Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.

    PubMed

    Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao

    2017-06-21

    In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation-constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space in which a structured kernel dictionary is established, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class. In the discriminative term, by analyzing the classification mechanism, the correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes and restricting the coefficient vectors to be transformed into a feature space in which the features are highly correlated within a class and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature does not need to be known; the inner product of the discriminative features, with the kernel matrix embedded, is available and is suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene, and synthetic aperture radar (SAR) vehicle target recognition.
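
    The abstract does not state the objective function explicitly; a generic form of the reconstructive-plus-discriminative dictionary-learning objective it describes (the notation below is assumed, not taken from the paper) is

        \min_{\mathcal{D},\,X} \ \|\Phi(Y) - \mathcal{D}X\|_F^2 + \lambda\, g(X)
        \qquad \text{s.t.}\ \|x_i\|_0 \le T_0 \ \forall i,

    where \Phi(Y) denotes the implicitly mapped signals, \mathcal{D} the structured kernel dictionary, X the sparse codes with sparsity bound T_0, and g(\cdot) the correlation constraint penalizing correlations between codes of different classes.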

  4. Moles: Tool-Assisted Environment Isolation with Closures

    NASA Astrophysics Data System (ADS)

    de Halleux, Jonathan; Tillmann, Nikolai

    Isolating test cases from environment dependencies is often desirable, as it increases test reliability and reduces test execution time. However, code that calls non-virtual methods or consumes sealed classes is often impossible to test in isolation. Moles is a new lightweight framework which addresses this problem. For any .NET method, Moles allows test-code to provide alternative implementations, given as .NET delegates, for which C# provides very concise syntax while capturing local variables in a closure object. Using code instrumentation, the Moles framework will redirect calls to provided delegates instead of the original methods. The Moles framework is designed to work together with the dynamic symbolic execution tool Pex to enable automated test generation. In a case study, testing code programmed against the Microsoft SharePoint Foundation API, we achieved full code coverage while running tests in isolation without an actual SharePoint server. The Moles framework integrates with .NET and Visual Studio.

  5. A knowledge discovery object model API for Java

    PubMed Central

    Zuyderduyn, Scott D; Jones, Steven JM

    2003-01-01

    Background Biological data resources have become heterogeneous and derive from multiple sources. This introduces challenges in the management and utilization of this data in software development. Although efforts are underway to create a standard format for the transmission and storage of biological data, this objective has yet to be fully realized. Results This work describes an application programming interface (API) that provides a framework for developing an effective biological knowledge ontology for Java-based software projects. The API provides a robust framework for the data acquisition and management needs of an ontology implementation. In addition, the API contains classes to assist in creating GUIs to represent this data visually. Conclusions The Knowledge Discovery Object Model (KDOM) API is particularly useful for medium to large applications, or for a number of smaller software projects with common characteristics or objectives. KDOM can be coupled effectively with other biologically relevant APIs and classes. Source code, libraries, documentation and examples are available at . PMID:14583100

  6. Representing metabolic pathway information: an object-oriented approach.

    PubMed

    Ellis, L B; Speedie, S M; McLeish, R

    1998-01-01

    The University of Minnesota Biocatalysis/Biodegradation Database (UM-BBD) is a website providing information and dynamic links for microbial metabolic pathways, enzyme reactions, and their substrates and products. The Compound, Organism, Reaction and Enzyme (CORE) object-oriented database management system was developed to contain and serve this information. CORE was developed using Java, an object-oriented programming language, and PSE persistent object classes from Object Design, Inc. CORE dynamically generates descriptive web pages for reactions, compounds and enzymes, and reconstructs ad hoc pathway maps starting from any UM-BBD reaction. CORE code is available from the authors upon request. CORE is accessible through the UM-BBD at: http://www.labmed.umn.edu/umbbd/index.html.

  7. System and method for integrating and accessing multiple data sources within a data warehouse architecture

    DOEpatents

    Musick, Charles R [Castro Valley, CA; Critchlow, Terence [Livermore, CA; Ganesh, Madhaven [San Jose, CA; Slezak, Tom [Livermore, CA; Fidelis, Krzysztof [Brentwood, CA

    2006-12-19

    A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.
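
    A small sketch of the "get"/"set" idiom the patent describes, an accessor that derives a missing attribute by calling a transformation method (all class and attribute names here are hypothetical):

        #include <optional>
        #include <string>

        // Hypothetical warehouse record: the "get" method returns the stored
        // value, or calls a transformation method to derive it when missing.
        class SequenceRecord {
         public:
          void set_length(double bp) { length_bp_ = bp; }

          double get_length() {
            if (!length_bp_) {
              length_bp_ = derive_length_from_sequence();  // transformation
            }
            return *length_bp_;
          }

         private:
          double derive_length_from_sequence() const {
            return static_cast<double>(sequence_.size());
          }

          std::optional<double> length_bp_;    // attribute may be missing
          std::string sequence_ = "ACGTACGT";  // source attribute (placeholder)
        };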

  8. A description of the new 3D electron gun and collector modeling tool: MICHELLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petillo, J.; Mondelli, A.; Krueger, W.

    1999-07-01

    A new 3D finite element gun and collector modeling code is under development at SAIC in collaboration with industrial partners and national laboratories. This development program has been designed specifically to address the shortcomings of current simulation and modeling tools. In particular, although 3D gun codes exist today, their ability to resolve fine-scale features in 3D is somewhat limited by the disparate length scales of certain classes of devices. Additionally, features like advanced emission rules, including thermionic Child's law and comprehensive secondary emission models, also need attention. The program specifically targets problem classes including gridded guns, sheet-beam guns, multi-beam devices, and anisotropic collectors. The presentation will provide an overview of the program objectives, the approach to be taken by the development team, and the status of the project.

  9. Pacific Northwest (PNW) Hydrologic Landscape (HL) polygons and HL code

    EPA Pesticide Factsheets

    A five-letter hydrologic landscape code representing five indices of hydrologic form that are related to hydrologic function: climate, seasonality, aquifer permeability, terrain, and soil permeability. Each hydrologic assessment unit is classified by one of the 81 different five-letter codes representing these indices. Polygon features in this dataset were created by aggregating (dissolving boundaries between) adjacent, similarly-coded hydrologic assessment units. Climate Classes: V-Very wet, W-Wet, M-Moist, D-Dry, S-Semiarid, A-Arid. Seasonality Sub-Classes: w-Fall or winter, s-Spring. Aquifer Permeability Classes: H-High, L-Low. Terrain Classes: M-Mountain, T-Transitional, F-Flat. Soil Permeability Classes: H-High, L-Low.
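
    Since the dataset enumerates every class letter, a five-letter code can be decoded mechanically. A hypothetical decoder follows, assuming the letter order given above (climate, seasonality, aquifer permeability, terrain, soil permeability); the example code "MwLTH" is invented:

        #include <iostream>
        #include <map>
        #include <string>

        int main() {
          const std::map<char, std::string> climate{
              {'V', "Very wet"}, {'W', "Wet"}, {'M', "Moist"},
              {'D', "Dry"}, {'S', "Semiarid"}, {'A', "Arid"}};
          const std::map<char, std::string> season{{'w', "Fall or winter"},
                                                   {'s', "Spring"}};
          const std::map<char, std::string> perm{{'H', "High"}, {'L', "Low"}};
          const std::map<char, std::string> terrain{
              {'M', "Mountain"}, {'T', "Transitional"}, {'F', "Flat"}};

          const std::string code = "MwLTH";  // example assessment-unit code
          std::cout << "Climate:       " << climate.at(code[0]) << '\n'
                    << "Seasonality:   " << season.at(code[1]) << '\n'
                    << "Aquifer perm.: " << perm.at(code[2]) << '\n'
                    << "Terrain:       " << terrain.at(code[3]) << '\n'
                    << "Soil perm.:    " << perm.at(code[4]) << '\n';
          return 0;
        }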

  10. Organizing and Typing Persistent Objects Within an Object-Oriented Framework

    NASA Technical Reports Server (NTRS)

    Madany, Peter W.; Campbell, Roy H.

    1991-01-01

    Conventional operating systems provide little or no direct support for the services required for an efficient persistent object system implementation. We have built a persistent object scheme using a customization and extension of an object-oriented operating system called Choices. Choices includes a framework for the storage of persistent data that is suited to the construction of both conventional file systems and persistent object systems. In this paper we describe three areas in which persistent object support differs from file system support: storage organization, storage management, and typing. Persistent object systems must support various sizes of objects efficiently. Customizable containers, which are themselves persistent objects and can be nested, support a wide range of object sizes in Choices. Collections of persistent objects that are accessed as an aggregate and collections of light-weight persistent objects can be clustered in containers that are nested within containers for larger objects. Automated garbage collection schemes are added to storage management and have a major impact on persistent object applications. The Choices persistent object store provides extensible sets of persistent object types. The store contains not only the data for persistent objects but also the names of the classes to which they belong and the code for the operation of the classes. Besides presenting persistent object storage organization, storage management, and typing, this paper discusses how persistent objects are named and used within the Choices persistent data/file system framework.

  11. Evaluating Students' Programming Skill Behaviour and Personalizing Their Computer Learning Environment Using "The Hour of Code" Paradigm

    ERIC Educational Resources Information Center

    Mallios, Nikolaos; Vassilakopoulos, Michael Gr.

    2015-01-01

    One of the most intriguing objectives when teaching computer science to mid-adolescent high school students is attracting, and above all maintaining, their concentration within the limits of the class. A number of theories have been proposed and numerous methodologies have been applied, aiming to assist in the implementation of a personalized learning…

  12. LEGO: A modular accelerator design code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Y.; Donald, M.; Irwin, J.

    1997-08-01

    An object-oriented accelerator design code has been designed and implemented in a simple and modular fashion. It contains all major features of its predecessors: TRACY and DESPOT. All physics of single-particle dynamics is implemented based on the Hamiltonian in the local frame of the component. Components can be moved arbitrarily in the three dimensional space. Several symplectic integrators are used to approximate the integration of the Hamiltonian. A differential algebra class is introduced to extract a Taylor map up to arbitrary order. Analysis of optics is done in the same way both for the linear and nonlinear case. Currently, the code is used to design and simulate the lattices of the PEP-II. It will also be used for the commissioning.
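
    As a generic illustration of the symplectic integration the abstract mentions (not the LEGO implementation itself), a second-order drift-kick-drift step for a separable Hamiltonian H(q, p) = p^2/2 + V(q) can be written as:

        #include <cstdio>

        // Second-order symplectic "drift-kick-drift" (leapfrog) integrator,
        // here with the harmonic-oscillator potential V(q) = q^2/2.
        static double dVdq(double q) { return q; }

        void leapfrog_step(double& q, double& p, double h) {
          q += 0.5 * h * p;   // half drift
          p -= h * dVdq(q);   // kick
          q += 0.5 * h * p;   // half drift
        }

        int main() {
          double q = 1.0, p = 0.0, h = 0.1;
          for (int i = 0; i < 1000; ++i) leapfrog_step(q, p, h);
          // Energy is conserved to O(h^2) over long times, the property that
          // makes symplectic integrators attractive for accelerator tracking.
          std::printf("q = %f, p = %f, H = %f\n", q, p, 0.5 * (p * p + q * q));
          return 0;
        }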

  13. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third-party library to provide hydrologic flow, energy transport, and biogeochemical capability to the community land model, CLM, part of the open-source community earth system model (CESM) for climate. In this presentation, the advantages and disadvantages of open source software development in support of geoscience research at government laboratories, universities, and the private sector are discussed. Since the code is open-source (i.e. it's transparent and readily available to competitors), the PFLOTRAN team's development strategy within a competitive research environment is presented. Finally, the developers discuss their approach to object-oriented programming and the leveraging of modern Fortran in support of collaborative geoscience research as the Fortran standard evolves among compiler vendors.

  14. A Strategy for Reusing the Data of Electronic Medical Record Systems for Clinical Research.

    PubMed

    Matsumura, Yasushi; Hattori, Atsushi; Manabe, Shiro; Tsuda, Tsutomu; Takeda, Toshihiro; Okada, Katsuki; Murata, Taizo; Mihara, Naoki

    2016-01-01

    There is a great need to reuse data stored in electronic medical records (EMR) databases for clinical research. We previously reported the development of a system in which progress notes and case report forms (CRFs) were simultaneously recorded using a template in the EMR in order to exclude redundant data entry. To make the data collection process more efficient, we are developing a system in which the data originally stored in the EMR database can be populated within a frame in a template. We developed interface plugin modules that retrieve data from the databases of other EMR applications. A universal keyword written in a template master is converted to a local code using a data conversion table, and then the objective data is retrieved from the corresponding database. The template element data, which are entered by a template, are stored in the template element database. To retrieve the data entered by other templates, the objective data is designated by the template element code with the template code, or by the concept code if it is written for the element. When the application systems in the EMR generate documents, they also generate a PDF file and a corresponding document profile XML, which includes important data, and send them to the document archive server and the data sharing server, respectively. In the data sharing server, the data are represented by an item with an item code with a document class code and its value. By linking a concept code to an item identifier, the objective data can be retrieved by designating a concept code. We employed a flexible strategy in which a unique identifier for a hospital is initially attached to all of the data that the hospital generates. The identifier is secondarily linked with concept codes. The data that are not linked with a concept code can also be retrieved using the unique identifier of the hospital. This strategy makes it possible to reuse any of a hospital's data.
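
    A minimal sketch of the keyword-conversion step described above; every keyword and local code here is invented for illustration:

        #include <iostream>
        #include <map>
        #include <string>

        // A universal keyword from a template master is translated to a
        // hospital-local code before the objective data is retrieved.
        int main() {
          const std::map<std::string, std::string> conversion_table{
              {"BODY_WEIGHT", "LOCAL-0231"},
              {"SERUM_CREATININE", "LOCAL-1147"}};

          const std::string universal_keyword = "SERUM_CREATININE";
          auto it = conversion_table.find(universal_keyword);
          if (it != conversion_table.end()) {
            // The local code would then be used to query the EMR database.
            std::cout << universal_keyword << " -> " << it->second << '\n';
          } else {
            std::cout << "No mapping; fall back to the hospital identifier\n";
          }
          return 0;
        }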

  15. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    PubMed

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes such as convolutional and spatially-coupled codes can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.

  16. PAL: an object-oriented programming library for molecular evolution and phylogenetics.

    PubMed

    Drummond, A; Strimmer, K

    2001-07-01

    Phylogenetic Analysis Library (PAL) is a collection of Java classes for use in molecular evolution and phylogenetics. PAL provides a modular environment for the rapid construction of both special-purpose and general analysis programs. PAL version 1.1 consists of 145 public classes or interfaces in 13 packages, including classes for models of character evolution, maximum-likelihood estimation, and the coalescent, with a total of more than 27000 lines of code. The PAL project is set up as a collaborative project to facilitate contributions from other researchers. AVAILABILITY: The program is free and is available at http://www.pal-project.org. It requires Java 1.1 or later. PAL is licensed under the GNU General Public License.

  17. An object-oriented approach for parallel self adaptive mesh refinement on block structured grids

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Witsch, Kristian; Quinlan, Daniel

    1993-01-01

    Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions that separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library that permits efficient development of architecture-independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmer's work is largely reduced to specifying the serial single-grid application; the parallel, self-adaptive mesh refinement code is then obtained with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers, which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.
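
    A toy facade in the spirit of the P++ description above: the application writes serial-looking array code while the class hides how data is distributed across processors. The distribution logic is stubbed out and all names are invented for illustration:

        #include <cstddef>
        #include <vector>

        class DistributedArray {
         public:
          explicit DistributedArray(std::size_t n) : local_(n, 0.0) {}

          double& operator()(std::size_t i) { return local_[i]; }  // local view

          void update_ghost_cells() {
            // In a real library this would exchange boundary data with
            // neighboring processors (e.g., via MPI); here it is a no-op.
          }

         private:
          std::vector<double> local_;
        };

        int main() {
          DistributedArray u(100);
          for (std::size_t i = 1; i < 99; ++i) u(i) = 1.0;  // serial-style loop
          u.update_ghost_cells();
          return 0;
        }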

  18. Integrated Task and Data Parallel Programming

    NASA Technical Reports Server (NTRS)

    Grimshaw, A. S.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments In February I presented a paper at Frontiers 1995 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. Additional 1995 Activities During the fall I collaborated with Andrew Grimshaw and Adam Ferrari to write a book chapter which will be included in Parallel Processing in C++ edited by Gregory Wilson. I also finished two courses, Compilers and Advanced Compilers, in 1995. These courses complete my class requirements at the University of Virginia. I have only my dissertation research and defense to complete.

  19. Integrated Task And Data Parallel Programming: Language Design

    NASA Technical Reports Server (NTRS)

    Grimshaw, Andrew S.; West, Emily A.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments In February I presented a paper at Frontiers '95 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. Additional 1995 Activities During the fall I collaborated with Andrew Grimshaw and Adam Ferrari to write a book chapter which will be included in Parallel Processing in C++ edited by Gregory Wilson. I also finished two courses, Compilers and Advanced Compilers, in 1995. These courses complete my class requirements at the University of Virginia. I have only my dissertation research and defense to complete.

  20. Development of an ADP Training Program to Serve the EPA Data Processing Community.

    DTIC Science & Technology

    1976-07-29

    divide, compute, perform and alter statements; data representation and conversion; table processing; and indexed sequential and random access file...processing. The course workshop will include the testing of coded exercises and problems on a computer system. CLASS SIZE: Individualized. METHODS/CONDUCT...familiarization with computer concepts will be helpful. OBJECTIVES OF CURRICULUM: After completing this course, the student should have developed a working

  1. Visual object agnosia is associated with a breakdown of object-selective responses in the lateral occipital cortex.

    PubMed

    Ptak, Radek; Lazeyras, François; Di Pietro, Marie; Schnider, Armin; Simon, Stéphane R

    2014-07-01

    Patients with visual object agnosia fail to recognize the identity of visually presented objects despite preserved semantic knowledge. Object agnosia may result from damage to visual cortex lying close to or overlapping with the lateral occipital complex (LOC), a brain region that exhibits selectivity to the shape of visually presented objects. Despite this anatomical overlap the relationship between shape processing in the LOC and shape representations in object agnosia is unknown. We studied a patient with object agnosia following isolated damage to the left occipito-temporal cortex overlapping with the LOC. The patient showed intact processing of object structure, yet often made identification errors that were mainly based on the global visual similarity between objects. Using functional Magnetic Resonance Imaging (fMRI) we found that the damaged as well as the contralateral, structurally intact right LOC failed to show any object-selective fMRI activity, though the latter retained selectivity for faces. Thus, unilateral damage to the left LOC led to a bilateral breakdown of neural responses to a specific stimulus class (objects and artefacts) while preserving the response to a different stimulus class (faces). These findings indicate that representations of structure necessary for the identification of objects crucially rely on bilateral, distributed coding of shape features. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. The research of .NET framework based on delegate of the LCE

    NASA Astrophysics Data System (ADS)

    Chen, Yi-peng

    2011-10-01

    Programmers can use the LCE enterprise services provided by the .NET Framework when developing applications in the object-oriented C# programming language with component technology. In traditional program design, large amounts of boilerplate code had to be written; now the same result can be achieved simply by adding the corresponding declarative attributes to a class, interface, method, or assembly. This paper mainly expounds the mechanism for realizing LCE event services with the delegate model in C#. It also introduces the procedure of applying the event class, event publisher, subscriber, and client in LCE technology, and it analyzes the technical points of delegate-based LCE in plain language with practical cases.
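
    The entry concerns C# delegates; a minimal publisher/subscriber sketch using C++'s std::function as a stand-in for the delegate conveys the same mechanism (illustrative only, all names invented):

        #include <functional>
        #include <iostream>
        #include <string>
        #include <vector>

        // Subscribers register callables; the publisher invokes them
        // when the event fires.
        class EventPublisher {
         public:
          using Handler = std::function<void(const std::string&)>;

          void subscribe(Handler h) { handlers_.push_back(std::move(h)); }

          void publish(const std::string& message) {
            for (auto& h : handlers_) h(message);  // fire the event
          }

         private:
          std::vector<Handler> handlers_;
        };

        int main() {
          EventPublisher publisher;
          publisher.subscribe([](const std::string& m) {
            std::cout << "subscriber 1 got: " << m << '\n';
          });
          publisher.publish("order created");
          return 0;
        }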

  3. LAMOST OBSERVATIONS IN THE KEPLER FIELD: SPECTRAL CLASSIFICATION WITH THE MKCLASS CODE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, R. O.; Corbally, C. J.; De Cat, P.

    2016-01-15

    The LAMOST-Kepler project was designed to obtain high-quality, low-resolution spectra of many of the stars in the Kepler field with the Large Sky Area Multi Object Fiber Spectroscopic Telescope (LAMOST). To date 101,086 spectra of 80,447 objects over the entire Kepler field have been acquired. Physical parameters, radial velocities, and rotational velocities of these stars will be reported in other papers. In this paper we present MK spectral classifications for these spectra determined with the automatic classification code MKCLASS. We discuss the quality and reliability of the spectral types and present histograms showing the frequency of the spectral types in the main table, organized according to luminosity class. Finally, as examples of the use of this spectral database, we compute the proportion of A-type stars that are Am stars, and identify 32 new barium dwarf candidates.

  4. Parallel and Portable Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.

    1997-08-01

    We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
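
    A generic power-iteration skeleton for a k-eigenvalue Monte Carlo calculation, with toy physics constants; this illustrates the algorithm class, not the MC++ implementation:

        #include <cstdio>
        #include <random>

        // Each batch transports a generation of neutrons; the eigenvalue
        // estimate is the ratio of fission neutrons produced to neutrons
        // started. Interaction physics is reduced to a single fission
        // probability for brevity.
        int main() {
          std::mt19937 rng(42);
          std::uniform_real_distribution<double> u(0.0, 1.0);

          const int batches = 50, neutrons_per_batch = 10000;
          const double p_fission = 0.30, nu = 2.5;  // toy constants
          double k = 1.0;

          for (int b = 0; b < batches; ++b) {
            double produced = 0.0;
            for (int n = 0; n < neutrons_per_batch; ++n) {
              if (u(rng) < p_fission) produced += nu;  // fission event
            }
            k = produced / neutrons_per_batch;  // generation k estimate
          }
          std::printf("k-effective estimate: %f\n", k);
          return 0;
        }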

  5. EX6AFS: A data acquisition system for high-speed dispersive EXAFS measurements implemented using object-oriented programming techniques

    NASA Astrophysics Data System (ADS)

    Jennings, Guy; Lee, Peter L.

    1995-02-01

    In this paper we describe the design and implementation of a computerized data-acquisition system for high-speed energy-dispersive EXAFS experiments on the X6A beamline at the National Synchrotron Light Source. The acquisition system drives the stepper motors used to move the components of the experimental setup and controls the readout of the EXAFS spectra. The system runs on a Macintosh IIfx computer and is written entirely in the object-oriented language C++. Large segments of the system are implemented by means of commercial class libraries, specifically the MacApp application framework from Apple, the Rogue Wave class library, and the Hierarchical Data Format datafile format library from the National Center for Supercomputing Applications. This reduces the amount of code that must be written and enhances reliability. The system makes use of several advanced features of C++: Multiple inheritance allows the code to be decomposed into independent software components and the use of exception handling allows the system to be much more reliable in the event of unexpected errors. Object-oriented techniques allow the program to be extended easily as new requirements develop. All sections of the program related to a particular concept are located in a small set of source files. The program will also be used as a prototype for future software development plans for the Basic Energy Science Synchrotron Radiation Center Collaborative Access Team beamlines being designed and built at the Advanced Photon Source.

  6. Somatosensory pleasure circuit: from skin to brain and back.

    PubMed

    Lloyd, Donna M; McGlone, Francis P; Yosipovitch, Gil

    2015-05-01

    The skin senses serve a discriminative function, allowing us to manipulate objects and detect touch and temperature, and an affective/emotional function, manifested as itch or pain when the skin is damaged. Two different classes of nerve fibre mediate these dissociable aspects of cutaneous somatosensation: (i) myelinated A-beta and A-delta afferents that provide rapid information about the location and physical characteristics of skin contact; and (ii) unmyelinated, slow-conducting C-fibre afferents that are typically associated with coding the emotional properties of pain and itch. However, recent research has identified a third class of C-fibre afferents that code for the pleasurable properties of touch - c-tactile afferents or CTs. Clinical application of treatments that target pleasant, CT-mediated touch (such as massage therapy) could, in the future, provide a complementary, non-pharmacological means of treating both the physical and psychological aspects of chronic skin conditions such as itch and eczema. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. Risk-Based Object Oriented Testing

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda H.; Stapko, Ruth; Gallo, Albert

    2000-01-01

    Software testing is a well-defined phase of the software development life cycle. Functional ("black box") testing and structural ("white box") testing are two methods of test case design commonly used by software developers. A lesser known testing method is risk-based testing, which takes into account the probability of failure of a portion of code as determined by its complexity. For object oriented programs, a methodology is proposed for identification of risk-prone classes. Risk-based testing is a highly effective testing technique that can be used to find and fix the most important problems as quickly as possible.
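
    A small sketch of risk-based test prioritization as described above: classes with higher complexity, and hence higher assumed failure probability, are tested first. The scoring rule (complexity times coupling) and all metrics are invented for illustration:

        #include <algorithm>
        #include <iostream>
        #include <string>
        #include <vector>

        struct ClassMetrics {
          std::string name;
          int cyclomatic_complexity;
          int coupling;  // e.g., a coupling-between-objects metric
        };

        int main() {
          std::vector<ClassMetrics> classes{
              {"Scheduler", 42, 11}, {"Logger", 5, 2}, {"Parser", 27, 8}};

          auto risk = [](const ClassMetrics& c) {
            return c.cyclomatic_complexity * c.coupling;  // toy risk score
          };
          std::sort(classes.begin(), classes.end(),
                    [&](const ClassMetrics& a, const ClassMetrics& b) {
                      return risk(a) > risk(b);  // riskiest classes first
                    });

          for (const auto& c : classes)
            std::cout << c.name << " risk=" << risk(c) << '\n';
          return 0;
        }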

  8. GENASIS Mathematics : Object-oriented manifolds, operations, and solvers for large-scale physics simulations

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2018-01-01

    The large-scale computer simulation of a system of physical fields governed by partial differential equations requires some means of approximating the mathematical limit of continuity. For example, conservation laws are often treated with a 'finite-volume' approach in which space is partitioned into a large number of small 'cells,' with fluxes through cell faces providing an intuitive discretization modeled on the mathematical definition of the divergence operator. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of simple meshes and the evolution of generic conserved currents thereon, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes inaugurate the Mathematics division of our developing astrophysics simulation code GENASIS (General Astrophysical Simulation System), which will be expanded over time to include additional meshing options, mathematical operations, solver types, and solver variations appropriate for many multiphysics applications.
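
    In generic form (not specific to GENASIS), the finite-volume treatment the abstract sketches follows from the divergence theorem:

        \frac{d}{dt}\int_{V_i} U\, dV = -\oint_{\partial V_i} \mathbf{F}(U)\cdot d\mathbf{A}
        \quad\Longrightarrow\quad
        \frac{dU_i}{dt} \approx -\frac{1}{V_i}\sum_{f \in \mathrm{faces}(i)} \mathbf{F}_f \cdot \mathbf{A}_f,

    where U_i is the cell average of the conserved quantity, V_i the cell volume, and \mathbf{F}_f \cdot \mathbf{A}_f the numerical flux through face f.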

  9. A Calculus for Boxes and Traits in a Java-Like Setting

    NASA Astrophysics Data System (ADS)

    Bettini, Lorenzo; Damiani, Ferruccio; de Luca, Marco; Geilmann, Kathrin; Schäfer, Jan

    The box model is a component model for the object-oriented paradigm that defines components (the boxes) with clear encapsulation boundaries. Having well-defined boundaries is crucial in component-based software development, because it makes it possible to reason about the interference and interaction between a component and its context. In general, boxes contain several objects and inner boxes, of which some are local to the box and cannot be accessed from other boxes and some can be accessed by other boxes. A trait is a set of methods divorced from any class hierarchy. Traits can be composed together to form classes or other traits. We present a calculus for boxes and traits. Traits are units of fine-grained reuse, whereas boxes can be seen as units of coarse-grained reuse. The calculus is equipped with an ownership type system and allows us to combine coarse- and fine-grained reuse of code while maintaining encapsulation of components.

  10. A Validation of Object-Oriented Design Metrics as Quality Indicators

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio

    1997-01-01

    This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development processes.

  11. Implementation of a frame-based representation in CLIPS

    NASA Technical Reports Server (NTRS)

    Assal, Hisham; Myers, Leonard

    1990-01-01

    Knowledge representation is one of the major concerns in expert systems. The representation of domain-specific knowledge should agree with the nature of the domain entities and their use in the real world. For example, architectural applications deal with objects and entities such as spaces, walls, and windows. A natural way of representing these architectural entities is provided by frames. This research explores the potential of using the expert system shell CLIPS, developed by NASA, to implement a frame-based representation that can accommodate architectural knowledge. These frames are similar but quite different from the 'template' construct in version 4.3 of CLIPS. Templates support only the grouping of related information and the assignment of default values to template fields. In addition to these features frames provide other capabilities including definition of classes, inheritance between classes and subclasses, relation of objects of different classes with 'has-a', association of methods (demons) of different types (standard and user-defined) to fields (slots), and creation of new fields at run-time. This frame-based representation is implemented completely in CLIPS. No change to the source code is necessary.

  12. A Validation of Object-Oriented Design Metrics

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Briand, Lionel; Melo, Walcelio L.

    1995-01-01

    This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber and Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li and Henry, 1993], where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed and suggestions for improvement are provided. Several of Chidamber and Kemerer's OO metrics appear to be adequate to predict class fault-proneness during the early phases of the life-cycle. We also showed that they are, on our data set, better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development processes.

  13. Some partial-unit-memory convolutional codes

    NASA Technical Reports Server (NTRS)

    Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.

    1991-01-01

    The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes are compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.

  14. LEGO - A Class Library for Accelerator Design and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Yunhai

    1998-11-19

    An object-oriented class library for accelerator design and simulation has been designed and implemented in a simple and modular fashion. All physics of single-particle dynamics is implemented based on the Hamiltonian in the local frame of the component. Symplectic integrators are used to approximate the integration of the Hamiltonian. A differential algebra class is introduced to extract a Taylor map up to arbitrary order. Analysis of optics is done in the same way both for the linear and non-linear cases. Recently, Monte Carlo simulation of synchrotron radiation has been added to the library. The code is used to design and simulate the lattices of PEP-II and SPEAR3. It is also used for the commissioning of PEP-II. Some examples of how to use the library will be given.

  15. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  16. Emergency department discharge prescription errors in an academic medical center

    PubMed Central

    Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.

    2017-01-01

    This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061

  17. Integrated Support for Manipulation and Display of 3D Objects for the Command and Control Workstation of the Future

    DTIC Science & Technology

    1989-06-01

  18. Potential Effects of Leak-Before-Break on Light Water Reactor Design.

    DTIC Science & Technology

    1985-08-26

    Boiler and Pressure Vessel Code. In fact, section 3 of that code was created for nuclear applications. This... Boiler and Pressure Vessel Code. The only major change which leak-before-break would require in these analyses would be that all piping to be considered...XI of the ASME Boiler and Pressure Vessel Code, and is already required for all Class I piping systems in the plant. Class I systems are those

  19. English-Thai Code-Switching of Teachers in ESP Classes

    ERIC Educational Resources Information Center

    Promnath, Korawan; Tayjasanant, Chamaipak

    2016-01-01

    The term code-switching (CS) that occurs in everyday situations, or naturalistic code-switching, has been a controversial strategy regarding whether it benefits or impedes language learning. The aim of this study was to investigate CS in conversations between teachers and students of ESP classes in order to explore the types and functions of CS…

  20. Neural representation of objects in space: a dual coding account.

    PubMed Central

    Humphreys, G W

    1998-01-01

    I present evidence on the nature of object coding in the brain and discuss the implications of this coding for models of visual selective attention. Neuropsychological studies of task-based constraints on: (i) visual neglect; and (ii) reading and counting, reveal the existence of parallel forms of spatial representation for objects: within-object representations, where elements are coded as parts of objects, and between-object representations, where elements are coded as independent objects. Aside from these spatial codes for objects, however, the coding of visual space is limited. We are extremely poor at remembering small spatial displacements across eye movements, indicating (at best) impoverished coding of spatial position per se. Also, effects of element separation on spatial extinction can be eliminated by filling the space with an occluding object, indicating that spatial effects on visual selection are moderated by object coding. Overall, there are separate limits on visual processing reflecting: (i) the competition to code parts within objects; (ii) the small number of independent objects that can be coded in parallel; and (iii) task-based selection of whether within- or between-object codes determine behaviour. Between-object coding may be linked to the dorsal visual system while parallel coding of parts within objects takes place in the ventral system, although there may additionally be some dorsal involvement either when attention must be shifted within objects or when explicit spatial coding of parts is necessary for object identification. PMID:9770227

  1. About the necessity to manage events coded with MedDRA prior to statistical analysis: proposal of a strategy with application to a randomized clinical trial, ANRS 099 ALIZE.

    PubMed

    Journot, Valérie; Tabuteau, Sophie; Collin, Fidéline; Molina, Jean-Michel; Chene, Geneviève; Rancinan, Corinne

    2008-03-01

    Since 2003, the Medical Dictionary for Regulatory Activities (MedDRA) has been the regulatory standard for safety reporting in clinical trials in the European Community. Yet, we found no published account of practical experience with a scientifically oriented statistical analysis of events coded with MedDRA. We took advantage of a randomized trial in HIV-infected patients with MedDRA-coded events to explain the difficulties encountered during the events analysis and the strategy developed to report events consistently with trial-specific objectives. MedDRA has a rich hierarchical structure, which allows the grouping of coded terms into 5 levels, the highest being "System Organ Class" (SOC). Each coded term may be related to several SOCs, among which one primary SOC is defined. We developed a new general 5-step strategy to select a SOC as trial primary SOC, consistently with the trial-specific objectives for this analysis. We applied it to the ANRS 099 ALIZE trial, where all events were coded with MedDRA version 3.0. We compared the MedDRA and the ALIZE primary SOCs. In the ANRS 099 ALIZE trial, 355 patients were recruited, and 3,722 events were reported and documented, among which 35% had multiple SOCs (2 to 4). We applied the proposed 5-step strategy. Altogether, 23% of MedDRA primary SOCs were modified, mainly from the MedDRA primary SOCs "Investigations" (69%) and "Ear and labyrinth disorders" (6%) to the ALIZE primary SOCs "Hepatobiliary disorders" (35%), "Musculoskeletal and connective tissue disorders" (21%), and "Gastrointestinal disorders" (15%). MedDRA has grown considerably in size and complexity with versioning and the development of Standardized MedDRA Queries. Yet, statisticians should not systematically rely on the primary SOCs proposed by MedDRA to report events. A simple general 5-step strategy to re-classify events consistently with the trial-specific objectives might be useful in HIV trials as well as in other fields.

  2. Context-aware and locality-constrained coding for image categorization.

    PubMed

    Xiao, Wenhua; Wang, Bin; Liu, Yu; Bao, Weidong; Zhang, Maojun

    2014-01-01

    Improving the coding strategy for BOF (Bag-of-Features) based feature design has drawn increasing attention in recent image categorization work. However, ambiguity in the coding procedure still impedes its further development. In this paper, we introduce a context-aware and locality-constrained coding (CALC) approach that uses context information to describe objects in a discriminative way. This is generally achieved by learning a word-to-word co-occurrence prior and imposing context information on locality-constrained coding. First, the local context of each category is evaluated by learning a word-to-word co-occurrence matrix representing the spatial distribution of local features in a neighboring region. Then, the learned co-occurrence matrix is used to measure the context distance between local features and code words. Finally, we propose a coding strategy that simultaneously considers locality in feature space and in context space while weighting each feature. This novel coding strategy not only preserves semantic information in coding, but also has the ability to alleviate the noise distortion of each class. Extensive experiments on several available datasets (Scene-15, Caltech101, and Caltech256) are conducted to validate the superiority of our algorithm by comparing it with baselines and recently published methods. Experimental results show that our method significantly improves on the performance of the baselines and achieves comparable, and even better, performance than the state of the art.
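
    The abstract does not give the formulation; the standard locality-constrained coding objective it builds on, with the locality adaptor \mathbf{d}_i assumed here to absorb the learned context (co-occurrence) distance, reads

        \min_{\mathbf{c}_i} \ \|\mathbf{x}_i - B\,\mathbf{c}_i\|^2 + \lambda\,\|\mathbf{d}_i \odot \mathbf{c}_i\|^2
        \qquad \text{s.t.}\ \mathbf{1}^{\top}\mathbf{c}_i = 1,

    where B is the codebook, \mathbf{c}_i the code for local feature \mathbf{x}_i, and \odot element-wise multiplication.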

  3. Process Management and Exception Handling in Multiprocessor Operating Systems Using Object-Oriented Design Techniques. Revised Sep. 1988

    NASA Technical Reports Server (NTRS)

    Russo, Vincent; Johnston, Gary; Campbell, Roy

    1988-01-01

    The programming of the interrupt handling mechanisms, process switching primitives, scheduling mechanism, and synchronization primitives of an operating system for a multiprocessor requires both efficient code, in order to support the needs of high-performance or real-time applications, and careful organization, to facilitate maintenance. Although many advantages have been claimed for object-oriented class hierarchical languages and their corresponding design methodologies, the application of these techniques to the design of the primitives within an operating system has not been widely demonstrated. To investigate the role of class hierarchical design in systems programming, the authors have constructed the Choices multiprocessor operating system architecture using the C++ programming language. During the implementation, it was found that many operating system design concerns can be represented advantageously using a class hierarchical approach, including: the separation of mechanism and policy; the organization of an operating system into layers, each of which represents an abstract machine; and the notions of process and exception management. In this paper, we discuss an implementation of the low-level primitives of this system and outline the strategy by which we developed our solution.

  4. A novel class sensitive hashing technique for large-scale content-based remote sensing image retrieval

    NASA Astrophysics Data System (ADS)

    Reato, Thomas; Demir, Begüm; Bruzzone, Lorenzo

    2017-10-01

    This paper presents a novel class sensitive hashing technique in the framework of large-scale content-based remote sensing (RS) image retrieval. The proposed technique aims at representing each image with multi-hash codes, each of which corresponds to a primitive (i.e., land cover class) present in the image. To this end, the proposed method consists of a three-step algorithm. The first step is devoted to characterizing each image by primitive class descriptors. These descriptors are obtained through a supervised approach, which initially extracts the image regions and their descriptors that are then associated with primitives present in the images. This step requires a set of annotated training regions to define primitive classes. A correspondence between the regions of an image and the primitive classes is built based on the probability of each primitive class being present at each region. All the regions belonging to a specific primitive class with a probability higher than a given threshold are highly representative of that class. Thus, the average value of the descriptors of these regions is used to characterize that primitive. In the second step, the descriptors of primitive classes are transformed into multi-hash codes to represent each image. This is achieved by adapting the kernel-based supervised locality sensitive hashing method to multi-code hashing problems. The first two steps of the proposed technique, unlike the standard hashing methods, allow one to represent each image by a set of primitive class sensitive descriptors and their hash codes. Then, in the last step, the images in the archive that are very similar to a query image are retrieved based on a multi-hash-code-matching scheme. Experimental results obtained on an archive of aerial images confirm the effectiveness of the proposed technique in terms of retrieval accuracy when compared to the standard hashing methods.
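
    A small C++ sketch of a plausible multi-hash-code-matching scheme (the paper's exact aggregation rule is an assumption here): for each primitive code of the query, take the best Hamming match among the archive image's codes and sum the results.

      #include <algorithm>
      #include <bitset>
      #include <cstdint>
      #include <vector>

      using HashCode = std::uint64_t;   // one hash code per primitive class

      // Smaller distance means more similar; assumed aggregation, for illustration.
      int multiHashDistance(const std::vector<HashCode>& query,
                            const std::vector<HashCode>& archive) {
          int total = 0;
          for (HashCode q : query) {
              int best = 64;
              for (HashCode a : archive)
                  best = std::min<int>(best, std::bitset<64>(q ^ a).count());
              total += best;   // best match per primitive code
          }
          return total;
      }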

  5. Flexible Method for Inter-object Communication in C++

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.; Gould, Jack J.

    1994-01-01

    A method has been developed for organizing and sharing large amounts of information between objects in C++ code. This method uses a set of object classes to define variables and group them into tables. The variable tables presented here provide a convenient way of defining and cataloging data, as well as a user-friendly input/output system, a standardized set of access functions, mechanisms for ensuring data integrity, methods for interprocessor data transfer, and an interpretive language for programming relationships between parameters. The object-oriented nature of these variable tables enables the use of multiple data types, each with unique attributes and behavior. Because each variable provides its own access methods, redundant table lookup functions can be bypassed, thus decreasing access times while maintaining data integrity. In addition, a method for automatic reference counting was developed to manage memory safely.
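
    A C++ sketch of the variable-table idea under stated assumptions (all class and member names invented): each variable carries its own access methods, and std::shared_ptr stands in here for the automatic reference counting the article describes.

      #include <map>
      #include <memory>
      #include <string>

      class Variable {                           // common interface for all types
      public:
          virtual ~Variable() = default;
          virtual std::string asText() const = 0;   // user-friendly output
      };

      class RealVariable : public Variable {     // one concrete data type
      public:
          explicit RealVariable(double v) : value_(v) {}
          std::string asText() const override { return std::to_string(value_); }
          double get() const { return value_; }
          void set(double v) { value_ = v; }     // integrity checks would go here
      private:
          double value_;
      };

      class VariableTable {                      // catalogs variables by name
      public:
          void define(const std::string& name, std::shared_ptr<Variable> v) {
              vars_[name] = std::move(v);
          }
          std::shared_ptr<Variable> lookup(const std::string& name) const {
              auto it = vars_.find(name);
              return it == vars_.end() ? nullptr : it->second;
          }
      private:
          std::map<std::string, std::shared_ptr<Variable>> vars_;
      };

    Because each Variable subclass provides its own accessors, code holding a RealVariable pointer can bypass repeated table lookups, in the spirit of the access-time argument made above.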

  6. Learning Discriminative Binary Codes for Large-scale Cross-modal Retrieval.

    PubMed

    Xu, Xing; Shen, Fumin; Yang, Yang; Shen, Heng Tao; Li, Xuelong

    2017-05-01

    Hashing based methods have attracted considerable attention for efficient cross-modal retrieval on large-scale multimedia data. The core problem of cross-modal hashing is how to learn compact binary codes that capture the underlying correlations between heterogeneous features from different modalities. A majority of recent approaches aim at learning hash functions that preserve the pairwise similarities defined by given class labels. However, these methods fail to explicitly explore the discriminative property of class labels during hash function learning. In addition, they usually discard the discrete constraints imposed on the to-be-learned binary codes and instead solve a relaxed problem, obtaining an approximate binary solution through quantization. Therefore, the binary codes generated by these methods are suboptimal and less discriminative to different classes. To overcome these drawbacks, we propose a novel cross-modal hashing method, termed discrete cross-modal hashing (DCH), which directly learns discriminative binary codes while retaining the discrete constraints. Specifically, DCH learns modality-specific hash functions for generating unified binary codes, and these binary codes are viewed as representative features for discriminative classification with class labels. An effective discrete optimization algorithm is developed for DCH to jointly learn the modality-specific hash functions and the unified binary codes. Extensive experiments on three benchmark data sets highlight the superiority of DCH under various cross-modal scenarios and show its state-of-the-art performance.

  7. Trace-shortened Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Solomon, G.

    1994-01-01

    Reed-Solomon (RS) codes have been part of standard NASA telecommunications systems for many years. RS codes are character-oriented error-correcting codes, and their principal use in space applications has been as outer codes in concatenated coding systems. However, for a given character size, say m bits, RS codes are limited to a length of, at most, 2^m. It is known in theory that longer character-oriented codes would be superior to RS codes in concatenation applications, but until recently no practical class of 'long' character-oriented codes had been discovered. In 1992, however, Solomon discovered an extensive class of such codes, which are now called trace-shortened Reed-Solomon (TSRS) codes. In this article, we will continue the study of TSRS codes. Our main result is a formula for the dimension of any TSRS code, as a function of its error-correcting power. Using this formula, we will give several examples of TSRS codes, some of which look very promising as candidate outer codes in high-performance coded telecommunications systems.

  8. Coordinated design of coding and modulation systems

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Work on partial unit memory codes continued; it was shown that for a given virtual state complexity, the maximum free distance over the class of all convolutional codes is achieved within the class of unit memory codes. The effect of phase-lock loop (PLL) tracking error on coding system performance was studied by using the channel cut-off rate as the measure of quality of a modulation system. Optimum modulation signal sets for a non-white Gaussian channel were considered using a heuristic selection rule based on a water-filling argument. The use of error correcting codes to perform data compression by the technique of syndrome source coding was researched, and a weight-and-error-locations scheme was developed that is closely related to LDSC coding.

  9. Revisiting the operational RNA code for amino acids: Ensemble attributes and their implications.

    PubMed

    Shaul, Shaul; Berel, Dror; Benjamini, Yoav; Graur, Dan

    2010-01-01

    It has been suggested that tRNA acceptor stems specify an operational RNA code for amino acids. In the last 20 years several attributes of the putative code have been elucidated for a small number of model organisms. To gain insight about the ensemble attributes of the code, we analyzed 4925 tRNA sequences from 102 bacterial and 21 archaeal species. Here, we used a classification and regression tree (CART) methodology, and we found that the degrees of degeneracy or specificity of the RNA codes in both Archaea and Bacteria differ from those of the genetic code. We found instances of taxon-specific alternative codes, i.e., identical acceptor stem determinants encrypting different amino acids in different species, as well as instances of ambiguity, i.e., identical acceptor stem determinants encrypting two or more amino acids in the same species. When partitioning the data by class of synthetase, the degree of code ambiguity was significantly reduced. In cryptographic terms, a plausible interpretation of this result is that the class distinction in synthetases is an essential part of the decryption rules for resolving the subset of RNA code ambiguities enciphered by identical acceptor stem determinants of tRNAs acylated by enzymes belonging to the two classes. In evolutionary terms, our findings lend support to the notion that in the pre-DNA world, interactions between tRNA acceptor stems and synthetases formed the basis for the distinction between the two classes; hence, ambiguities in the ancient RNA code were pivotal for the fixation of these enzymes in the genomes of ancestral prokaryotes.

  10. Revisiting the operational RNA code for amino acids: Ensemble attributes and their implications

    PubMed Central

    Shaul, Shaul; Berel, Dror; Benjamini, Yoav; Graur, Dan

    2010-01-01

    It has been suggested that tRNA acceptor stems specify an operational RNA code for amino acids. In the last 20 years several attributes of the putative code have been elucidated for a small number of model organisms. To gain insight about the ensemble attributes of the code, we analyzed 4925 tRNA sequences from 102 bacterial and 21 archaeal species. Here, we used a classification and regression tree (CART) methodology, and we found that the degrees of degeneracy or specificity of the RNA codes in both Archaea and Bacteria differ from those of the genetic code. We found instances of taxon-specific alternative codes, i.e., identical acceptor stem determinants encrypting different amino acids in different species, as well as instances of ambiguity, i.e., identical acceptor stem determinants encrypting two or more amino acids in the same species. When partitioning the data by class of synthetase, the degree of code ambiguity was significantly reduced. In cryptographic terms, a plausible interpretation of this result is that the class distinction in synthetases is an essential part of the decryption rules for resolving the subset of RNA code ambiguities enciphered by identical acceptor stem determinants of tRNAs acylated by enzymes belonging to the two classes. In evolutionary terms, our findings lend support to the notion that in the pre-DNA world, interactions between tRNA acceptor stems and synthetases formed the basis for the distinction between the two classes; hence, ambiguities in the ancient RNA code were pivotal for the fixation of these enzymes in the genomes of ancestral prokaryotes. PMID:19952117

  11. acme: The Amendable Coal-Fire Modeling Exercise. A C++ Class Library for the Numerical Simulation of Coal-Fires

    NASA Astrophysics Data System (ADS)

    Wuttke, Manfred W.

    2017-04-01

    At LIAG, we use numerical models to develop and enhance understanding of coupled transport processes and to predict the dynamics of the system under consideration. Topics include geothermal heat utilization, subrosion processes, and spontaneous underground coal fires. Although the details make it inconvenient if not impossible to apply a single code implementation to all systems, their investigations go along similar paths: They all depend on the solution of coupled transport equations. We thus saw a need for a modular code system with open access for the various communities to maximize the shared synergistic effects. To this purpose we develop the oops! ( open object-oriented parallel solutions) - toolkit, a C++ class library for the numerical solution of mathematical models of coupled thermal, hydraulic and chemical processes. This is used to develop problem-specific libraries like acme( amendable coal-fire modeling exercise), a class library for the numerical simulation of coal-fires and applications like kobra (Kohlebrand, german for coal-fire), a numerical simulation code for standard coal-fire models. Basic principle of the oops!-code system is the provision of data types for the description of space and time dependent data fields, description of terms of partial differential equations (pde), their discretisation and solving methods. Coupling of different processes, described by their particular pde is modeled by an automatic timescale-ordered operator-splitting technique. acme is a derived coal-fire specific application library, depending on oops!. If specific functionalities of general interest are implemented and have been tested they will be assimilated into the main oops!-library. Interfaces to external pre- and post-processing tools are easily implemented. Thus a construction kit which can be arbitrarily amended is formed. With the kobra-application constructed with acme we study the processes and propagation of shallow coal seam fires in particular in Xinjiang, China, as well as analyze and interpret results from lab experiments.
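
    A conceptual C++ sketch of the decomposition described above (types invented for illustration; oops! itself is a C++ library, but this is not its API): each physical process contributes a PDE term with its own characteristic time step, and coupling is handled by advancing the terms in timescale order within an operator-splitting loop.

      #include <algorithm>
      #include <memory>
      #include <vector>

      struct Field { /* space- and time-dependent data */ };

      class PdeTerm {                     // one process of the coupled system
      public:
          virtual ~PdeTerm() = default;
          virtual double stableTimestep(const Field& f) const = 0;
          virtual void advance(Field& f, double dt) = 0;   // one split sub-step
      };

      void operatorSplitStep(Field& f,
                             std::vector<std::unique_ptr<PdeTerm>>& terms,
                             double dt) {
          // Order the processes by characteristic timescale, fastest first,
          // then apply them sequentially (first-order operator splitting).
          std::sort(terms.begin(), terms.end(),
                    [&f](const auto& a, const auto& b) {
                        return a->stableTimestep(f) < b->stableTimestep(f);
                    });
          for (auto& t : terms) t->advance(f, dt);
      }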

  12. A Hybrid Parachute Simulation Environment for the Orion Parachute Development Project

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    A parachute simulation environment (PSE) has been developed that aims to take advantage of legacy parachute simulation codes and modern object-oriented programming techniques. This hybrid simulation environment provides the parachute analyst with a natural and intuitive way to construct simulation tasks while preserving the pedigree and authority of established parachute simulations. NASA currently employs four simulation tools for developing and analyzing air-drop tests performed by the CEV Parachute Assembly System (CPAS) Project. These tools were developed at different times, in different languages, and with different capabilities in mind. As a result, each tool has a distinct interface and set of inputs and outputs. However, regardless of the simulation code that is most appropriate for the type of test, engineers typically perform similar tasks for each drop test, such as prediction of loads, assessment of altitude, and sequencing of disreefs or cut-aways. An object-oriented approach to simulation configuration allows the analyst to choose models of real physical test articles (parachutes, vehicles, etc.) and sequence them to achieve the desired test conditions. Once configured, these objects are translated into traditional input lists and processed by the legacy simulation codes. This approach minimizes the number of simulation inputs that the engineer must track while configuring an input file. An object-oriented approach to simulation output allows a common set of post-processing functions to perform routine tasks such as plotting and timeline generation with minimal sensitivity to the simulation that generated the data. Flight test data may also be translated into the common output class to simplify test reconstruction and analysis.
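
    A minimal C++ sketch of the hybrid idea (keywords and names hypothetical): a modern object describing a test article serializes itself into the flat keyword/value input list a legacy simulation expects, so the analyst configures objects rather than raw input files.

      #include <sstream>
      #include <string>

      class Parachute {
      public:
          Parachute(double dragArea, double reefedFraction)
              : dragArea_(dragArea), reefedFraction_(reefedFraction) {}

          // Emit legacy "KEYWORD value" input lines for this object
          // (keyword names are invented for illustration).
          std::string toLegacyInput() const {
              std::ostringstream out;
              out << "CDS0 " << dragArea_ << "\n"
                  << "REEF " << reefedFraction_ << "\n";
              return out.str();
          }
      private:
          double dragArea_;        // full-open drag area
          double reefedFraction_;  // first-stage reefing ratio
      };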

  13. Coding of Class I and II aminoacyl-tRNA synthetases

    PubMed Central

    Carter, Charles W.

    2018-01-01

    The aminoacyl-tRNA synthetases and their cognate transfer RNAs translate the universal genetic code. The twenty canonical amino acids are sufficiently diverse to create a selective advantage for dividing amino acid activation between two distinct, apparently unrelated superfamilies of synthetases, Class I amino acids being generally larger and less polar, Class II amino acids smaller and more polar. Biochemical, bioinformatic, and protein engineering experiments support the hypothesis that the two Classes descended from opposite strands of the same ancestral gene. Parallel experimental deconstructions of Class I and II synthetases reveal parallel losses in catalytic proficiency at two novel modular levels (protozymes and Urzymes) associated with the evolution of catalytic activity. Bi-directional coding supports an important unification of the proteome; affords a genetic relatedness metric (middle base-pairing frequencies in sense/antisense alignments) that probes more deeply into the evolutionary history of translation than do single multiple sequence alignments; and has facilitated the analysis of hitherto unknown coding relationships in tRNA sequences. Reconstruction of native synthetases by modular thermodynamic cycles facilitated by domain engineering emphasizes the subtlety associated with achieving high specificity, shedding new light on allosteric relationships in contemporary synthetases. Synthetase Urzyme structural biology suggests that they are catalytically active molten globules, broadening the potential manifold of polypeptide catalysts accessible to primitive genetic coding and motivating revisions of the origins of catalysis. Finally, bi-directional genetic coding of some of the oldest genes in the proteome places major limitations on the likelihood that any RNA World preceded the origins of coded proteins. PMID:28828732

  14. Two Classes of New Optimal Asymmetric Quantum Codes

    NASA Astrophysics Data System (ADS)

    Chen, Xiaojing; Zhu, Shixin; Kai, Xiaoshan

    2018-03-01

    Let q be an even prime power and ω be a primitive element of F_{q^2}. By analyzing the structure of cyclotomic cosets, we determine a sufficient condition for ω^{q-1}-constacyclic codes over F_{q^2} to be Hermitian dual-containing codes. By the CSS construction, two classes of new optimal asymmetric quantum error-correcting codes (AQECCs) are obtained according to the Singleton bound for AQECCs.
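
    For reference, the optimality criterion named above is the Singleton bound for asymmetric quantum codes; in LaTeX (a standard result, not reproduced from the abstract):

      % An asymmetric quantum code [[n, k, d_z/d_x]]_q satisfies
      k \le n - d_x - d_z + 2,
      % and an AQECC is optimal in this sense when equality holds.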

  15. The aminoacyl-tRNA synthetases had only a marginal role in the origin of the organization of the genetic code: Evidence in favor of the coevolution theory.

    PubMed

    Di Giulio, Massimo

    2017-11-07

    The coevolution theory of the origin of the genetic code suggests that the organization of the genetic code coevolved with the biosynthetic relationships between amino acids. The mechanism that allowed this coevolution was based on tRNA-like molecules, on which this theory postulates the biosynthetic transformations between amino acids to have occurred. This mechanism makes a prediction about the role the aminoacyl-tRNA synthetases (ARSs) should have played in the origin of the genetic code. Indeed, if the biosynthetic transformations between amino acids occurred on tRNA-like molecules, then there was no need to link amino acids to these molecules, because amino acids were already charged on tRNA-like molecules, as the coevolution theory suggests. Even though ARSs are responsible for the first interaction between a component of nucleic acids and one of proteins, for the coevolution theory the role of ARSs should have been entirely marginal in the origin of the genetic code. Therefore, I have conducted a further analysis of the distribution of the two classes of ARSs and of their subclasses in the genetic code table, in order to perform a falsification test of the coevolution theory. Indeed, if the distribution of ARSs within the genetic code were highly significant, then the coevolution theory would be falsified, since the mechanism on which it is based does not predict a fundamental role of ARSs in the origin of the genetic code. I found that the statistical significance of the distribution of the two classes of ARSs in the table of the genetic code is low or marginal, whereas that of the subclasses of ARSs is statistically significant. However, this is in perfect agreement with the postulates of the coevolution theory. Indeed, the only case of statistical significance regarding the classes of ARSs is appreciable for the CAG code, whereas for its complement, the UNN/NUN code, only a marginal significance is measurable. These two codes code roughly for the two ARS classes: in particular, the CAG code for class II and the UNN/NUN code for class I. Furthermore, the subclasses of ARSs show a statistically significant distribution in the genetic code table. Nevertheless, the most sensible explanation for these observations is the following. The observation linking the two classes of ARSs to the CAG and UNN/NUN codes, and the statistical significance of the distribution of the subclasses of ARSs in the genetic code table, would be only a secondary effect of the highly significant distribution of the polarity of amino acids and their biosynthetic relationships in the genetic code. That is to say, the polarity of amino acids and their biosynthetic relationships would have conditioned the evolution of ARSs so that their presence in the genetic code is detectable, even though the ARSs did not, on their own, directly influence the evolutionary organization of the genetic code. In other words, the role that ARSs had in the origin of the genetic code would have been entirely marginal. This conclusion is in perfect accord with the predictions of the coevolution theory. Conversely, it is in contrast, at least partially, with the physicochemical theories of the origin of the genetic code, because they foresee a much more active role of ARSs in the origin of the organization of the genetic code. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More

    NASA Technical Reports Server (NTRS)

    Kou, Yu; Lin, Shu; Fossorier, Marc

    1999-01-01

    Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
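
    Gallager's hard-decision bit-flipping algorithm mentioned above is simple enough to sketch in C++ (a dense 0/1 parity-check matrix is used here for clarity; a real decoder would exploit sparsity):

      #include <cstddef>
      #include <vector>

      // Flip, in each iteration, the bit involved in the most unsatisfied
      // parity checks; stop when all checks are satisfied.
      bool bitFlipDecode(const std::vector<std::vector<int>>& H,
                         std::vector<int>& bits, int maxIters) {
          const std::size_t m = H.size(), n = bits.size();
          for (int it = 0; it < maxIters; ++it) {
              std::vector<int> unsat(m, 0);
              bool allOk = true;
              for (std::size_t i = 0; i < m; ++i) {
                  int s = 0;
                  for (std::size_t j = 0; j < n; ++j) s ^= H[i][j] & bits[j];
                  unsat[i] = s;
                  allOk = allOk && (s == 0);
              }
              if (allOk) return true;            // valid codeword found
              std::size_t worst = 0;
              int worstCount = -1;
              for (std::size_t j = 0; j < n; ++j) {
                  int c = 0;
                  for (std::size_t i = 0; i < m; ++i) c += unsat[i] & H[i][j];
                  if (c > worstCount) { worstCount = c; worst = j; }
              }
              bits[worst] ^= 1;                  // flip the most suspect bit
          }
          return false;                          // failed to converge
      }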

  17. Development of an object-oriented finite element program: application to metal-forming and impact simulations

    NASA Astrophysics Data System (ADS)

    Pantale, O.; Caperaa, S.; Rakotomalala, R.

    2004-07-01

    During the last 50 years, the development of better numerical methods and more powerful computers has been a major enterprise for the scientific community. At the same time, the finite element method has become a widely used tool for researchers and engineers. Recent advances in computational software have made it possible to solve more physical and complex problems such as coupled problems, nonlinearities, and high strain and high-strain-rate problems. In this field, an accurate analysis of the large inelastic deformations occurring in metal-forming or impact simulations is extremely important as a consequence of the high amount of plastic flow. This paper presents the object-oriented implementation, in the C++ language, of an explicit finite element code called DynELA. Object-oriented programming (OOP) leads to better-structured codes for the finite element method and facilitates their development, maintainability and expandability. The most significant advantage of OOP is in the modeling of complex physical systems such as deformation processing, where the overall complex problem is partitioned into individual sub-problems based on physical, mathematical or geometric reasoning. We first focus on the advantages of OOP for the development of scientific programs. Specific aspects of OOP, such as the inheritance mechanism, the operator overload procedure and the use of template classes, are detailed. Then we present the approach used for the development of our finite element code through the presentation of the kinematics, conservation and constitutive laws and their respective implementation in C++. Finally, the efficiency and accuracy of our finite element program are investigated using a number of benchmark tests relative to metal forming and impact simulations.
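
    The OOP mechanisms the paper highlights (inheritance, operator overloading, and template classes) can be illustrated with a short C++ fragment in a finite-element setting; this is illustrative only, not actual DynELA code.

      // Template class: one vector type for 2D and 3D kinematics.
      template <int Dim>
      struct Vec {
          double v[Dim] = {};
          Vec operator+(const Vec& o) const {      // operator overloading
              Vec r;
              for (int i = 0; i < Dim; ++i) r.v[i] = v[i] + o.v[i];
              return r;
          }
      };

      class Element {                              // abstract base class
      public:
          virtual ~Element() = default;
          virtual double internalEnergy() const = 0;
      };

      class HexaElement : public Element {         // inheritance: concrete element
      public:
          double internalEnergy() const override { return energy_; }
      private:
          double energy_ = 0.0;
      };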

  18. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, A.; Divsalar, D.; Yao, K.

    2004-01-01

    In this paper we propose an innovative channel coding scheme called Accumulate Repeat Accumulate codes. This class of codes can be viewed as turbo-like codes, namely a double serial concatenation of a rate-1 accumulator as an outer code, a regular or irregular repetition as a middle code, and a punctured accumulator as an inner code.
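
    The encoding chain is simple enough to sketch in C++ (interleavers between the stages and the puncturing pattern are omitted; an accumulator is the rate-1 recursion y_i = y_{i-1} + x_i over GF(2)):

      #include <vector>

      std::vector<int> accumulate(const std::vector<int>& in) {
          std::vector<int> out(in.size());
          int acc = 0;
          for (std::size_t i = 0; i < in.size(); ++i) {
              acc ^= in[i];        // y_i = y_{i-1} + x_i (mod 2)
              out[i] = acc;
          }
          return out;
      }

      std::vector<int> repeatBits(const std::vector<int>& in, int q) {
          std::vector<int> out;
          for (int b : in)
              for (int r = 0; r < q; ++r) out.push_back(b);   // repetition stage
          return out;
      }

      // Outer accumulator -> repetition -> inner accumulator (to be punctured).
      std::vector<int> araEncode(const std::vector<int>& info, int q) {
          return accumulate(repeatBits(accumulate(info), q));
      }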

  19. Network coding based joint signaling and dynamic bandwidth allocation scheme for inter optical network unit communication in passive optical networks

    NASA Astrophysics Data System (ADS)

    Wei, Pei; Gu, Rentao; Ji, Yuefeng

    2014-06-01

    As an innovative and promising technology, network coding has been introduced to passive optical networks (PON) in recent years to support inter optical network unit (ONU) communication, yet the signaling process and dynamic bandwidth allocation (DBA) in PON with network coding (NC-PON) still need further study. Thus, we propose a joint signaling and DBA scheme for efficiently supporting differentiated services of inter ONU communication in NC-PON. In the proposed joint scheme, the signaling process lays the foundation for fulfilling network coding in PON; it not only avoids the potential threat to downstream security in previous schemes but is also suitable for the proposed hybrid dynamic bandwidth allocation (HDBA) scheme. In HDBA, a DBA cycle is divided into two sub-cycles for applying different coding, scheduling and bandwidth allocation strategies to differentiated classes of services. Besides, as network traffic load varies, the entire upstream transmission window for all REPORT messages slides accordingly, leaving the transmission time of one or two sub-cycles to overlap with the bandwidth allocation calculation time at the optical line terminal (OLT), so that the upstream idle time can be efficiently eliminated. Performance evaluation results validate that, compared with the two existing DBA algorithms deployed in NC-PON, HDBA demonstrates the best quality of service (QoS) support in terms of delay for all classes of services, and especially guarantees the end-to-end delay bound of high class services. Specifically, HDBA can eliminate the queuing delay and scheduling delay of high class services, reduce those of lower class services by at least 20%, and reduce the average end-to-end delay of all services by over 50%. Moreover, HDBA also achieves the maximum delay fairness between coded and uncoded lower class services, and medium delay fairness for high class services.
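
    A generic illustration of why network coding helps inter-ONU traffic (the textbook XOR case, not the paper's signaling protocol): instead of sending packet A (ONU1 to ONU2) and packet B (ONU2 to ONU1) separately downstream, the OLT broadcasts A XOR B once, and each ONU recovers the other's packet using its own copy.

      #include <cstddef>
      #include <cstdint>
      #include <vector>

      // Assumes equal-length packets; in practice the shorter one is padded.
      std::vector<std::uint8_t> xorPackets(const std::vector<std::uint8_t>& a,
                                           const std::vector<std::uint8_t>& b) {
          std::vector<std::uint8_t> coded(a.size());
          for (std::size_t i = 0; i < a.size(); ++i)
              coded[i] = a[i] ^ b[i];   // ONU1: coded ^ A = B; ONU2: coded ^ B = A
          return coded;
      }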

  20. LUXSim: A component-centric approach to low-background simulations

    DOE PAGES

    Akerib, D. S.; Bai, X.; Bedikian, S.; ...

    2012-02-13

    Geant4 has been used throughout the nuclear and high-energy physics community to simulate energy depositions in various detectors and materials. These simulations have mostly been run with a source beam outside the detector. In the case of low-background physics, however, a primary concern is the effect on the detector from radioactivity inherent in the detector parts themselves. From this standpoint, there is no single source or beam, but rather a collection of sources with potentially complicated spatial extent. LUXSim is a simulation framework used by the LUX collaboration that takes a component-centric approach to event generation and recording. A new set of classes allows for multiple radioactive sources to be set within any number of components at run time, with the entire collection of sources handled within a single simulation run. Various levels of information can also be recorded from the individual components, with these record levels also being set at runtime. This flexibility in both source generation and information recording is possible without the need to recompile, reducing the complexity of code management and the proliferation of versions. Within the code itself, casting geometry objects within this new set of classes rather than as the default Geant4 classes automatically extends this flexibility to every individual component. No additional work is required on the part of the developer, reducing development time and increasing confidence in the results. Here, we describe the guiding principles behind LUXSim, detail some of its unique classes and methods, and give examples of usage.
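
    A hypothetical C++ sketch of the component-centric idea (not the real LUXSim API; names invented): sources and record levels attach to named components at run time, so changing the source configuration requires no recompilation.

      #include <map>
      #include <string>

      class Component {
      public:
          // Several sources may coexist within one component.
          void addSource(const std::string& isotope, double activityBq) {
              sources_[isotope] += activityBq;
          }
          void setRecordLevel(int level) { recordLevel_ = level; }
          int recordLevel() const { return recordLevel_; }
      private:
          std::map<std::string, double> sources_;  // isotope -> activity (Bq)
          int recordLevel_ = 0;                    // how much output to keep
      };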

  1. The search for person-related information in general practice: a qualitative study.

    PubMed

    Schrans, Diego; Avonts, Dirk; Christiaens, Thierry; Willems, Sara; de Smet, Kaat; van Boven, Kees; Boeckxstaens, Pauline; Kühlein, Thomas

    2016-02-01

    General practice is person-focused. Contextual information influences the clinical decision-making process in primary care. Currently, person-related information (PeRI) is neither recorded in a systematic way nor coded in the electronic medical record (EMR), and is therefore not usable for scientific purposes. The objective was to search for classes of PeRI influencing the process of care. GPs from nine countries worldwide were asked to write down narrative case histories in which personal factors played a role in decision-making. In an inductive process, the case histories were consecutively coded according to classes of PeRI. The classes found were deductively applied to the following cases and refined, until saturation was reached. Then, the classes were grouped into code-families and further clustered into domains. The inductive analysis of 32 case histories resulted in 33 defined PeRI codes, classifying all person-related information in the cases. The 33 codes were grouped into the following seven mutually exclusive code-families: 'aspects between patient and formal care provider', 'social environment and family', 'functioning/behaviour', 'life history/non-medical experiences', 'personal medical information', 'socio-demographics' and 'work-/employment-related information'. The code-families were clustered into four domains: 'social environment and extended family', 'medicine', 'individual' and 'work and employment'. As PeRI is used in the process of decision-making, it should be part of the EMR. The PeRI classes we identified might form the basis of a new contextual classification, mainly for research purposes. This might help to create evidence of the person-centredness of general practice. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. Alternative approach for fire suppression of class A, B and C fires in gloveboxes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberger, Mark S; Tsiagkouris, James A

    2011-02-10

    Department of Energy (DOE) Orders and National Fire Protection Association (NFPA) Codes and Standards require fire suppression in gloveboxes. Several potential solutions have been and are currently being considered at Los Alamos National Laboratory (LANL). The objective is to provide reliable, minimally invasive, and seismically robust fire suppression capable of extinguishing Class A, B, and C fires; achieve compliance with DOE and NFPA requirements; and provide value-added improvements to fire safety in gloveboxes. This report provides a brief summary of current approaches and also documents the successful fire tests conducted to prove that one approach, specifically Fire Foe™ tubes, is capable of achieving the requirement to provide reliable fire protection in gloveboxes in a cost-effective manner.

  3. Generalized Bezout's Theorem and its applications in coding theory

    NASA Technical Reports Server (NTRS)

    Berg, Gene A.; Feng, Gui-Liang; Rao, T. R. N.

    1996-01-01

    This paper presents a generalized Bezout theorem which can be used to determine a tighter lower bound on the number of distinct points of intersection of two or more curves for a large class of plane curves. A new approach to determine a lower bound on the minimum distance (and also the generalized Hamming weights) for algebraic-geometric codes defined from a class of plane curves is introduced, based on the generalized Bezout theorem. Examples of more efficient linear codes are constructed using the generalized Bezout theorem and the new approach. For d = 4, the linear codes constructed by the new approach are better than or equal to the known linear codes. For d greater than 5, these new codes are better than the known codes. The Klein code over GF(2^3) is also constructed.
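
    As background for the bound being generalized, the classical Bezout theorem for plane curves can be stated in LaTeX as follows (the generalized version itself is not reproduced in the abstract):

      % Projective plane curves F and G of degrees m and n with no common
      % component intersect in at most mn distinct points (exactly mn when
      % counted with multiplicity over an algebraically closed field):
      \#\,(F \cap G) \;\le\; \deg F \cdot \deg G \;=\; mn.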

  4. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
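
    A sketch of the decomposition underlying the extended Smith algorithm may help fix notation (this is the standard form; the article's algorithms themselves are not reproduced here). For a k-by-n polynomial generator matrix G(D), the algorithm produces unimodular polynomial matrices A(D) and B(D) and invariant factors gamma_1(D) | gamma_2(D) | ... | gamma_k(D) such that, in LaTeX:

      G(D) \;=\; A(D)\,\Gamma(D)\,B(D),
      \qquad
      \Gamma(D) \;=\; \bigl[\operatorname{diag}\bigl(\gamma_1(D),\dots,\gamma_k(D)\bigr) \;\; 0\bigr],

    from which quantities such as the degree and a minimal generator matrix can be derived along the lines of the classical Forney theory.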

  5. Analysis of protein-coding genetic variation in 60,706 humans.

    PubMed

    Lek, Monkol; Karczewski, Konrad J; Minikel, Eric V; Samocha, Kaitlin E; Banks, Eric; Fennell, Timothy; O'Donnell-Luria, Anne H; Ware, James S; Hill, Andrew J; Cummings, Beryl B; Tukiainen, Taru; Birnbaum, Daniel P; Kosmicki, Jack A; Duncan, Laramie E; Estrada, Karol; Zhao, Fengmei; Zou, James; Pierce-Hoffman, Emma; Berghout, Joanne; Cooper, David N; Deflaux, Nicole; DePristo, Mark; Do, Ron; Flannick, Jason; Fromer, Menachem; Gauthier, Laura; Goldstein, Jackie; Gupta, Namrata; Howrigan, Daniel; Kiezun, Adam; Kurki, Mitja I; Moonshine, Ami Levy; Natarajan, Pradeep; Orozco, Lorena; Peloso, Gina M; Poplin, Ryan; Rivas, Manuel A; Ruano-Rubio, Valentin; Rose, Samuel A; Ruderfer, Douglas M; Shakir, Khalid; Stenson, Peter D; Stevens, Christine; Thomas, Brett P; Tiao, Grace; Tusie-Luna, Maria T; Weisburd, Ben; Won, Hong-Hee; Yu, Dongmei; Altshuler, David M; Ardissino, Diego; Boehnke, Michael; Danesh, John; Donnelly, Stacey; Elosua, Roberto; Florez, Jose C; Gabriel, Stacey B; Getz, Gad; Glatt, Stephen J; Hultman, Christina M; Kathiresan, Sekar; Laakso, Markku; McCarroll, Steven; McCarthy, Mark I; McGovern, Dermot; McPherson, Ruth; Neale, Benjamin M; Palotie, Aarno; Purcell, Shaun M; Saleheen, Danish; Scharf, Jeremiah M; Sklar, Pamela; Sullivan, Patrick F; Tuomilehto, Jaakko; Tsuang, Ming T; Watkins, Hugh C; Wilson, James G; Daly, Mark J; MacArthur, Daniel G

    2016-08-18

    Large-scale reference data sets of human genetic variation are critical for the medical and functional interpretation of DNA sequence changes. Here we describe the aggregation and analysis of high-quality exome (protein-coding region) DNA sequence data for 60,706 individuals of diverse ancestries generated as part of the Exome Aggregation Consortium (ExAC). This catalogue of human genetic diversity contains an average of one variant every eight bases of the exome, and provides direct evidence for the presence of widespread mutational recurrence. We have used this catalogue to calculate objective metrics of pathogenicity for sequence variants, and to identify genes subject to strong selection against various classes of mutation; identifying 3,230 genes with near-complete depletion of predicted protein-truncating variants, with 72% of these genes having no currently established human disease phenotype. Finally, we demonstrate that these data can be used for the efficient filtering of candidate disease-causing variants, and for the discovery of human 'knockout' variants in protein-coding genes.

  6. Status and future plans for open source QuickPIC

    NASA Astrophysics Data System (ADS)

    An, Weiming; Decyk, Viktor; Mori, Warren

    2017-10-01

    QuickPIC is a three-dimensional (3D) quasi-static particle-in-cell (PIC) code developed on the UPIC framework. It can be used for efficiently modeling plasma-based accelerator (PBA) problems. With the quasi-static approximation, QuickPIC can use different time scales for calculating the beam (or laser) evolution and the plasma response, and a 3D plasma wakefield can be simulated using a two-dimensional (2D) PIC code where the time variable is ξ = ct - z and z is the beam propagation direction. QuickPIC can be a thousand times faster than a conventional PIC code when simulating a PBA. It uses an MPI/OpenMP hybrid parallel algorithm, which can run on anything from a laptop to the largest supercomputer. The open source QuickPIC is an object-oriented program with high level classes written in Fortran 2003. It can be found at https://github.com/UCLA-Plasma-Simulation-Group/QuickPIC-OpenSource.git

  7. MARC Coding of DDC for Subject Retrieval.

    ERIC Educational Resources Information Center

    Wajenberg, Arnold S.

    1983-01-01

    Recommends an expansion of MARC codes for decimal class numbers to enhance automated subject retrieval. Five values for a second indicator and two new subfields are suggested for encoding hierarchical relationships among decimal class numbers. Additional subfields are suggested to enhance retrieval through analysis of synthesized numbers in…

  8. An object-oriented, coprocessor-accelerated model for ice sheet simulations

    NASA Astrophysics Data System (ADS)

    Seddik, H.; Greve, R.

    2013-12-01

    Recently, numerous models capable of simulating the thermodynamics of ice sheets have been developed within the ice sheet modeling community. Their capabilities span a wide range of features, with different numerical methods (finite difference or finite element), different implementations of the ice flow mechanics (shallow-ice, higher-order, full Stokes) and different treatments of the basal and coastal areas (basal hydrology, basal sliding, ice shelves). Shallow-ice models (SICOPOLIS, IcIES, PISM, etc.) have been widely used for modeling whole ice sheets (Greenland and Antarctica) due to the relatively low computational cost of the shallow-ice approximation, but higher-order (ISSM, AIF) and full Stokes (Elmer/Ice) models have recently been used to model the Greenland ice sheet. The advance in processor speed and the decrease in the cost of accessing large amounts of memory and storage have undoubtedly been the driving force in the commoditization of models with higher capabilities, and the popularity of Elmer/Ice (http://elmerice.elmerfem.com), with an active user base, is a notable representation of this trend. Elmer/Ice is a full Stokes model built on top of the multi-physics package Elmer (http://www.csc.fi/english/pages/elmer), which provides the full machinery for the complex finite element procedure and is fully parallel (mesh partitioning with OpenMPI communication). Elmer is mainly written in Fortran 90 and essentially targets traditional processors, as the code base was not initially written to run on modern coprocessors (yet adding support for the recently introduced x86-based coprocessors is possible). Furthermore, a truly modular and object-oriented implementation is required for quick adaptation to fast-evolving capabilities in hardware (Fortran 2003 provides an object-oriented programming model, but adopting it would require a tricky refactoring of the Elmer code). In this work, the object-oriented, coprocessor-accelerated finite element code Sainou is introduced. Sainou is an Elmer fork reimplemented in Objective-C and used for experimenting with ice sheet models running on coprocessors, essentially GPU devices. GPUs are highly parallel processors that provide opportunities for fine-grained parallelization of the full Stokes problem using the standard OpenCL language (http://www.khronos.org/opencl/) to access the device. Sainou is built upon a collection of Objective-C base classes that service a modular kernel (itself a base class), which provides the core methods to solve the finite element problem. An early implementation of Sainou is presented, with emphasis on the object architecture and the parallelization strategies. The computation of a simple heat conduction problem is used to test the implementation, which also provides experimental support for running the global matrix assembly on the GPU.

  9. hi-class: Horndeski in the Cosmic Linear Anisotropy Solving System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zumalacárregui, Miguel; Bellini, Emilio; Sawicki, Ignacy

    We present the public version of hi-class (www.hiclass-code.net), an extension of the Boltzmann code CLASS to a broad ensemble of modifications to general relativity. In particular, hi-class can calculate predictions for models based on Horndeski's theory, which is the most general scalar-tensor theory described by second-order equations of motion and encompasses any perfect-fluid dark energy, quintessence, Brans-Dicke, f(R) and covariant Galileon models. hi-class has been thoroughly tested and can be readily used to understand the impact of alternative theories of gravity on linear structure formation as well as for cosmological parameter extraction.

  10. Object Classification With Joint Projection and Low-Rank Dictionary Learning.

    PubMed

    Foroughi, Homa; Ray, Nilanjan; Hong Zhang

    2018-02-01

    For an object classification system, the most critical obstacles toward real-world applications are often caused by large intra-class variability, arising from different lightings, occlusion, and corruption, in limited sample sets. Most methods in the literature would fail when the training samples are heavily occluded, corrupted or have significant illumination or viewpoint variations. Besides, most existing methods, and especially deep learning-based methods, need large training sets to achieve a satisfactory recognition performance. Although using a network pre-trained on a generic large-scale data set and fine-tuning it to the small-sized target data set is a widely used technique, this does not help when the content of the base and target data sets is very different. To address these issues simultaneously, we propose a joint projection and low-rank dictionary learning method using dual graph constraints. Specifically, a structured class-specific dictionary is learned in the low-dimensional space, and the discrimination is further improved by imposing a graph constraint on the coding coefficients that maximizes the intra-class compactness and inter-class separability. We enforce structural incoherence and low-rank constraints on sub-dictionaries to reduce the redundancy among them, and also make them robust to variations and outliers. To preserve the intrinsic structure of data, we introduce a supervised neighborhood graph into the framework to make the proposed method robust to small-sized and high-dimensional data sets. Experimental results on several benchmark data sets verify the superior performance of our method for object classification on small-sized data sets, which include a considerable amount of different kinds of variation and may have high-dimensional feature vectors.

  11. A discriminative test among the different theories proposed to explain the origin of the genetic code: the coevolution theory finds additional support.

    PubMed

    Giulio, Massimo Di

    2018-05-19

    A discriminative statistical test among the different theories proposed to explain the origin of the genetic code is presented. The amino acids are gathered into polarity classes, which are the prime expression of the physicochemical theory of the origin of the genetic code, and into biosynthetic classes, which are the prime expression of the coevolution theory, and these classes are used in Fisher's exact test to establish their significance within the genetic code table. By attaching to the rows and columns of the genetic code the probabilities that express the statistical significance of these classes, I was finally able to calculate a χ value for both the physicochemical theory and the coevolution theory, expressing the level of corroboration of each. The comparison between these two χ values showed that, in this strictly empirical analysis, the coevolution theory is able to explain the origin of the genetic code better than the physicochemical theory. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. VizieR Online Data Catalog: C/2012 F6 (Lemmon) and C/2013 R1 (Lovejoy) spectra (Biver+, 2014)

    NASA Astrophysics Data System (ADS)

    Biver, N.; Bockelee-Morvan, D.; Debout, V.; Crovisier, J.; Boissier, J.; Lis, D. C.; Dello Russo, N.; Moreno, R.; Colom, P.; Paubert, G.; Vervack, R.; Weaver, H. A.

    2014-06-01

    Sum spectra of the lines of formamide and ethylene glycol whose intensities are listed in Tables 4 and 5. One FITS file per spectrum; FITS output from CLASS (http://www.iram.fr/IRAMFR/GILDAS/).

    object.dat:

      Code       Name     Elem (d)   q (AU)     e          i (deg)    H1 (mag)
      ---------  -------  ---------  ---------  ---------  ---------  --------
      C/2012 F6  Lemmon   2456375.5  0.7312461  0.9985125  82.607966  7.96
      C/2013 R1  Lovejoy  2456651.5  0.8118182  0.9983297  64.040457  11.66

    (2 data files).

  13. Leadership Class Configuration Interaction Code - Status and Opportunities

    NASA Astrophysics Data System (ADS)

    Vary, James

    2011-10-01

    With support from SciDAC-UNEDF (www.unedf.org), nuclear theorists have developed and are continuously improving a Leadership Class Configuration Interaction Code (LCCI) for forefront nuclear structure calculations. The aim of this project is to make state-of-the-art nuclear structure tools available to the entire community of researchers, including graduate students. The project includes codes such as NuShellX, MFDn and BIGSTICK that run on a range of computers from laptops to leadership-class supercomputers. Codes, scripts, test cases and documentation have been assembled, are under continuous development and are scheduled for release to the entire research community in November 2011. A covering script that accesses the appropriate code and supporting files is under development. In addition, a Data Base Management System (DBMS) that records key information from large production runs and archived results of those runs has been developed (http://nuclear.physics.iastate.edu/info/) and will be released. Following an outline of the project, the code structure, capabilities, the DBMS and current efforts, I will suggest a path forward that would benefit greatly from a significant partnership between researchers who use the codes, code developers and the National Nuclear Data efforts. This research is supported in part by DOE under grant DE-FG02-87ER40371 and grant DE-FC02-09ER41582 (SciDAC-UNEDF).

  14. New Site Coefficients and Site Classification System Used in Recent Building Seismic Code Provisions

    USGS Publications Warehouse

    Dobry, R.; Borcherdt, R.D.; Crouse, C.B.; Idriss, I.M.; Joyner, W.B.; Martin, G.R.; Power, M.S.; Rinne, E.E.; Seed, R.B.

    2000-01-01

    Recent code provisions for buildings and other structures (1994 and 1997 NEHRP Provisions, 1997 UBC) have adopted new site amplification factors and a new procedure for site classification. Two amplitude-dependent site amplification factors are specified: Fa for short periods and Fv for longer periods. Previous codes included only a long period factor S and did not provide for a short period amplification factor. The new site classification system is based on definitions of five site classes in terms of a representative average shear wave velocity to a depth of 30 m (V̄s). This definition permits sites to be classified unambiguously. When the shear wave velocity is not available, other soil properties such as standard penetration resistance or undrained shear strength can be used. The new site classes, denoted by letters A - E, replace the site classes in previous codes denoted by S1 - S4. Site classes A and B correspond to hard rock and rock, Site Class C corresponds to soft rock and very stiff / very dense soil, and Site Classes D and E correspond to stiff soil and soft soil. A sixth site class, F, is defined for soils requiring site-specific evaluations. Both Fa and Fv are functions of the site class, and also of the level of seismic hazard on rock, defined by parameters such as Aa and Av (1994 NEHRP Provisions), Ss and S1 (1997 NEHRP Provisions) or Z (1997 UBC). The values of Fa and Fv decrease as the seismic hazard on rock increases due to soil nonlinearity. The greatest impact of the new factors Fa and Fv as compared with the old S factors occurs in areas of low-to-medium seismic hazard. This paper summarizes the new site provisions, explains the basis for them, and discusses ongoing studies of site amplification in recent earthquakes that may influence future code developments.

  15. The weight hierarchies and chain condition of a class of codes from varieties over finite fields

    NASA Technical Reports Server (NTRS)

    Wu, Xinen; Feng, Gui-Liang; Rao, T. R. N.

    1996-01-01

    The generalized Hamming weights of linear codes were first introduced by Wei. These are fundamental parameters related to the minimal overlap structures of the subcodes and very useful in several fields. It was found that the chain condition of a linear code is convenient in studying the generalized Hamming weights of the product codes. In this paper we consider a class of codes defined over some varieties in projective spaces over finite fields, whose generalized Hamming weights can be determined by studying the orbits of subspaces of the projective spaces under the actions of classical groups over finite fields, i.e., the symplectic groups, the unitary groups and orthogonal groups. We give the weight hierarchies and generalized weight spectra of the codes from Hermitian varieties and prove that the codes satisfy the chain condition.
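
    For reference, Wei's definition of the generalized Hamming weights mentioned above can be written in LaTeX as (standard definition, not specific to this paper):

      % The r-th generalized Hamming weight of an [n, k] linear code C:
      d_r(C) \;=\; \min\bigl\{\,\lvert\operatorname{supp}(D)\rvert \;:\;
                   D \text{ an } r\text{-dimensional subcode of } C\,\bigr\},
      \qquad 1 \le r \le k,
      % and C satisfies the chain condition when subcodes attaining
      % d_1(C), d_2(C), ..., d_k(C) can be chosen nested in a chain.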

  16. Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.

    PubMed

    Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic

    2017-03-01

    Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained level coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.

  17. Content Coding of Psychotherapy Transcripts Using Labeled Topic Models

    PubMed Central

    Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic

    2016-01-01

    Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, non-standardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the Labeled Latent Dirichlet Allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic (ROC) curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained level coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes. PMID:26625437

  18. Connecting Architecture and Implementation

    NASA Astrophysics Data System (ADS)

    Buchgeher, Georg; Weinreich, Rainer

    Software architectures are still typically defined and described independently from implementation. To avoid architectural erosion and drift, architectural representation needs to be continuously updated and synchronized with system implementation. Existing approaches for architecture representation like informal architecture documentation, UML diagrams, and Architecture Description Languages (ADLs) provide only limited support for connecting architecture descriptions and implementations. Architecture management tools like Lattix, SonarJ, and Sotoarc, as well as UML tools, tackle this problem by extracting architecture information directly from code. This approach works for low-level architectural abstractions like classes and interfaces in object-oriented systems but fails to support architectural abstractions not found in programming languages. In this paper we present an approach for linking and continuously synchronizing a formalized architecture representation to an implementation. The approach is a synthesis of functionality provided by code-centric architecture management and UML tools and higher-level architecture analysis approaches like ADLs.

  19. Rosetta3: An Object-Oriented Software Suite for the Simulation and Design of Macromolecules

    PubMed Central

    Leaver-Fay, Andrew; Tyka, Michael; Lewis, Steven M.; Lange, Oliver F.; Thompson, James; Jacak, Ron; Kaufman, Kristian; Renfrew, P. Douglas; Smith, Colin A.; Sheffler, Will; Davis, Ian W.; Cooper, Seth; Treuille, Adrien; Mandell, Daniel J.; Richter, Florian; Ban, Yih-En Andrew; Fleishman, Sarel J.; Corn, Jacob E.; Kim, David E.; Lyskov, Sergey; Berrondo, Monica; Mentzer, Stuart; Popović, Zoran; Havranek, James J.; Karanicolas, John; Das, Rhiju; Meiler, Jens; Kortemme, Tanja; Gray, Jeffrey J.; Kuhlman, Brian; Baker, David; Bradley, Philip

    2013-01-01

    We have recently completed a full re-architecturing of the Rosetta molecular modeling program, generalizing and expanding its existing functionality. The new architecture enables the rapid prototyping of novel protocols by providing easy-to-use interfaces to powerful tools for molecular modeling. The source code of this re-architecturing has been released as Rosetta3 and is freely available for academic use. At the time of its release, it contained 470,000 lines of code. Counting currently unpublished protocols at the time of this writing, the source includes 1,285,000 lines. Its rapid growth is a testament to its ease of use. This document describes the requirements for our new architecture, justifies the design decisions, sketches out central classes, and highlights a few of the common tasks that the new software can perform. PMID:21187238

  20. 49 CFR Appendix E to Part 512 - Consumer Assistance to Recycle and Save (CARS) Class Determinations

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 6 2012-10-01 2012-10-01 false Consumer Assistance to Recycle and Save (CARS... Save (CARS) Class Determinations (a) The Chief Counsel has determined that the following information... (3) CARS Dealer Code and Authorization Code. (b) The Chief Counsel has determined that the disclosure...

  1. 49 CFR Appendix E to Part 512 - Consumer Assistance to Recycle and Save (CARS) Class Determinations

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 6 2014-10-01 2014-10-01 false Consumer Assistance to Recycle and Save (CARS... Save (CARS) Class Determinations (a) The Chief Counsel has determined that the following information... (3) CARS Dealer Code and Authorization Code. (b) The Chief Counsel has determined that the disclosure...

  2. 49 CFR Appendix E to Part 512 - Consumer Assistance to Recycle and Save (CARS) Class Determinations

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 6 2010-10-01 2010-10-01 false Consumer Assistance to Recycle and Save (CARS... Save (CARS) Class Determinations (a) The Chief Counsel has determined that the following information... (3) CARS Dealer Code and Authorization Code. (b) The Chief Counsel has determined that the disclosure...

  3. 49 CFR Appendix E to Part 512 - Consumer Assistance to Recycle and Save (CARS) Class Determinations

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 6 2011-10-01 2011-10-01 false Consumer Assistance to Recycle and Save (CARS... Save (CARS) Class Determinations (a) The Chief Counsel has determined that the following information... (3) CARS Dealer Code and Authorization Code. (b) The Chief Counsel has determined that the disclosure...

  4. A novel Morse code-inspired method for multiclass motor imagery brain-computer interface (BCI) design.

    PubMed

    Jiang, Jun; Zhou, Zongtan; Yin, Erwei; Yu, Yang; Liu, Yadong; Hu, Dewen

    2015-11-01

    Motor imagery (MI)-based brain-computer interfaces (BCIs) allow disabled individuals to control external devices voluntarily, helping to restore lost motor functions. However, the number of control commands available in MI-based BCIs remains small, limiting the usability of BCI systems in control applications involving multiple degrees of freedom (DOF), such as control of a robot arm. To address this problem, we developed a novel Morse code-inspired method for MI-based BCI design to increase the number of output commands. Using this method, brain activities are modulated by sequences of MI (sMI) tasks, which are constructed by alternately imagining movements of the left or right hand or no motion. The codes of the sMI tasks were detected from EEG signals and mapped to special commands. According to permutation theory, an sMI task of length N allows 2 × (2^N − 1) possible commands with the left and right MI tasks under self-paced conditions. To verify its feasibility, the new method was used to construct a six-class BCI system to control the arm of a humanoid robot. Four subjects participated in our experiment and the averaged accuracy of the six-class sMI tasks was 89.4%. The Cohen's kappa coefficient and the throughput of our BCI paradigm are 0.88 ± 0.060 and 23.5 bits per minute (bpm), respectively. Furthermore, all of the subjects could operate an actual three-joint robot arm to grasp an object in around 49.1 s using our approach. These promising results suggest that the Morse code-inspired method could be used in the design of BCIs for multi-DOF control. Copyright © 2015 Elsevier Ltd. All rights reserved.
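
    The command count claimed above follows from summing over all sequence lengths: there are 2^k left/right sequences of length k, so sequences of length 1 through N give 2 + 4 + ... + 2^N = 2 × (2^N − 1) commands. A small sketch (illustrative, not the study's code) verifies this:

      // Sketch: size of the sMI command space. Each sequence element is
      // left- or right-hand motor imagery, and sequences may be 1..N long,
      // giving sum_{k=1..N} 2^k = 2 * (2^N - 1) distinct commands.
      #include <iostream>

      int main() {
          for (int N = 1; N <= 4; ++N) {
              long long count = 0;
              for (int k = 1; k <= N; ++k) count += 1LL << k;   // 2^k sequences of length k
              std::cout << "N = " << N << ": " << count << " commands"
                        << " (closed form: " << 2 * ((1LL << N) - 1) << ")\n";
          }
      }

    For N = 2 this yields the six command classes used in the reported robot-arm experiment.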

  5. A common class of transcripts with 5'-intron depletion, distinct early coding sequence features, and N1-methyladenosine modification.

    PubMed

    Cenik, Can; Chua, Hon Nian; Singh, Guramrit; Akef, Abdalla; Snyder, Michael P; Palazzo, Alexander F; Moore, Melissa J; Roth, Frederick P

    2017-03-01

    Introns are found in 5' untranslated regions (5'UTRs) for 35% of all human transcripts. These 5'UTR introns are not randomly distributed: Genes that encode secreted, membrane-bound and mitochondrial proteins are less likely to have them. Curiously, transcripts lacking 5'UTR introns tend to harbor specific RNA sequence elements in their early coding regions. To model and understand the connection between coding-region sequence and 5'UTR intron status, we developed a classifier that can predict 5'UTR intron status with >80% accuracy using only sequence features in the early coding region. Thus, the classifier identifies transcripts with 5' proximal-intron-minus-like coding regions ("5IM" transcripts). Unexpectedly, we found that the early coding sequence features defining 5IM transcripts are widespread, appearing in 21% of all human RefSeq transcripts. The 5IM class of transcripts is enriched for non-AUG start codons, more extensive secondary structure both preceding the start codon and near the 5' cap, greater dependence on eIF4E for translation, and association with ER-proximal ribosomes. 5IM transcripts are bound by the exon junction complex (EJC) at noncanonical 5' proximal positions. Finally, N1-methyladenosines are specifically enriched in the early coding regions of 5IM transcripts. Taken together, our analyses point to the existence of a distinct 5IM class comprising ∼20% of human transcripts. This class is defined by depletion of 5' proximal introns, presence of specific RNA sequence features associated with low translation efficiency, N1-methyladenosines in the early coding region, and enrichment for noncanonical binding by the EJC. © 2017 Cenik et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  6. MetaJC++: A flexible and automatic program transformation technique using meta framework

    NASA Astrophysics Data System (ADS)

    Beevi, Nadera S.; Reghu, M.; Chitraprasad, D.; Vinodchandra, S. S.

    2014-09-01

    A compiler is a tool that translates abstract code containing natural-language terms into machine code. Meta compilers are available to compile more than one language. We have developed a meta framework that combines two dissimilar programming languages, C++ and Java, to provide a flexible object-oriented programming platform for the user. Suitable constructs from both languages have been combined, thereby forming a new and stronger meta-language. The framework is developed using the compiler-writing tools Flex and Yacc to design the front end of the compiler. The lexer and parser have been developed to accommodate the complete keyword set and syntax set of both languages. Two intermediate representations are used in the translation of the source program to machine code. An Abstract Syntax Tree is used as a high-level intermediate representation that preserves the hierarchical properties of the source program. A new machine-independent stack-based byte-code has also been devised to act as a low-level intermediate representation. The byte-code is organised into an output class file that can be used to produce an interpreted output. The results, especially in providing C++ concepts in Java, give insight into the potentially strong features of the resultant meta-language.
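
    As an illustration of the first intermediate representation described above, the sketch below shows a tiny AST class hierarchy that preserves the hierarchical structure of an expression; the class names are invented for illustration and are not the MetaJC++ implementation (which would emit the stack-based byte-code rather than evaluate directly):

      // Illustrative sketch of an AST used as a high-level intermediate
      // representation that preserves the source program's hierarchy. The
      // class names are invented and are not the MetaJC++ implementation.
      #include <iostream>
      #include <memory>
      #include <utility>

      struct AstNode {                        // base class for all tree nodes
          virtual ~AstNode() = default;
          virtual int evaluate() const = 0;   // a compiler would emit byte-code instead
      };

      struct Literal : AstNode {
          int value;
          explicit Literal(int v) : value(v) {}
          int evaluate() const override { return value; }
      };

      struct BinaryOp : AstNode {             // interior node: keeps the hierarchy
          char op;
          std::unique_ptr<AstNode> lhs, rhs;
          BinaryOp(char o, std::unique_ptr<AstNode> l, std::unique_ptr<AstNode> r)
              : op(o), lhs(std::move(l)), rhs(std::move(r)) {}
          int evaluate() const override {
              int a = lhs->evaluate(), b = rhs->evaluate();
              return op == '+' ? a + b : a * b;
          }
      };

      int main() {
          // AST for the expression (2 + 3) * 4
          BinaryOp root('*',
              std::make_unique<BinaryOp>('+', std::make_unique<Literal>(2),
                                              std::make_unique<Literal>(3)),
              std::make_unique<Literal>(4));
          std::cout << root.evaluate() << "\n";   // prints 20
      }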

  7. PD5: a general purpose library for primer design software.

    PubMed

    Riley, Michael C; Aubrey, Wayne; Young, Michael; Clare, Amanda

    2013-01-01

    Complex PCR applications for large genome-scale projects require fast, reliable and often highly sophisticated primer design software applications. Presently, such applications use pipelining methods to utilise many third-party applications, and this involves file parsing, interfacing and data conversion, which is slow and prone to error. A fully integrated suite of software tools for primer design would considerably improve the development time, the processing speed, and the reliability of bespoke primer design software applications. The PD5 software library is an open-source collection of classes and utilities, providing a complete collection of software building blocks for primer design and analysis. It is written in object-oriented C++ with an emphasis on classes suitable for efficient and rapid development of bespoke primer design programs. The modular design of the software library simplifies the development of specific applications and also integration with existing third-party software where necessary. We demonstrate several applications created using this software library that have already proved to be effective, but we view the project as a dynamic environment for building primer design software and it is open for future development by the bioinformatics community. Therefore, the PD5 software library is published under the terms of the GNU General Public License, which guarantees access to source code and allows redistribution and modification. The PD5 software library is downloadable from Google Code and the accompanying Wiki includes instructions and examples: http://code.google.com/p/primer-design.
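
    PD5's own class interfaces are documented on the project wiki linked above; as a generic illustration of the kind of building block such a library provides, the sketch below estimates a primer melting temperature with the simple Wallace rule (Tm = 2(A+T) + 4(G+C) °C for short oligos). This is not the PD5 API:

      // Generic illustration of a primer-analysis building block: melting
      // temperature by the Wallace rule, Tm = 2*(A+T) + 4*(G+C) degrees C,
      // a rough estimate for short (< 14 nt) oligos. Not the PD5 API.
      #include <iostream>
      #include <string>

      double wallace_tm(const std::string& primer) {
          int at = 0, gc = 0;
          for (char c : primer) {
              if (c == 'A' || c == 'T') ++at;
              else if (c == 'G' || c == 'C') ++gc;
          }
          return 2.0 * at + 4.0 * gc;
      }

      int main() {
          std::cout << wallace_tm("ACGTACGTACGT") << " C\n";  // 6 AT + 6 GC -> 36 C
      }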

  8. SimITK: rapid ITK prototyping using the Simulink visual programming environment

    NASA Astrophysics Data System (ADS)

    Dickinson, A. W. L.; Mousavi, P.; Gobbi, D. G.; Abolmaesumi, P.

    2011-03-01

    The Insight Segmentation and Registration Toolkit (ITK) is a long-established software package used for image analysis, visualization, and image-guided surgery applications. This package is a collection of C++ libraries that can pose usability problems for users without C++ programming experience. To bridge the gap between the programming complexities and the required learning curve of ITK, we present a higher-level visual programming environment that represents ITK methods and classes by wrapping them into "blocks" within MATLAB's visual programming environment, Simulink. These blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. Due to the heavily templated nature of ITK, direct interaction between Simulink and ITK requires an intermediary to convert their respective datatypes and allow intercommunication. We have developed a "Virtual Block" that serves as an intermediate wrapper around the ITK class and is responsible for resolving the templated datatypes used by ITK to native types used by Simulink. Presently, the wrapping procedure for SimITK is semi-automatic in that it requires XML descriptions of the ITK classes as a starting point, as this data is used to create all other necessary integration files. The generation of all source code and object code from the XML is done automatically by a CMake build script that yields Simulink blocks as the final result. An example 3D segmentation workflow using cranial CT data as well as a 3D MR-to-CT registration workflow are presented as a proof of concept.

  9. A Simple Test of Class-Level Genetic Association Can Reveal Novel Cardiometabolic Trait Loci.

    PubMed

    Qian, Jing; Nunez, Sara; Reed, Eric; Reilly, Muredach P; Foulkes, Andrea S

    2016-01-01

    Characterizing the genetic determinants of complex diseases can be further augmented by incorporating knowledge of underlying structure or classifications of the genome, such as newly developed mappings of protein-coding genes, epigenetic marks, enhancer elements and non-coding RNAs. We apply a simple class-level testing framework, termed Genetic Class Association Testing (GenCAT), to identify protein-coding gene association with 14 cardiometabolic (CMD) related traits across 6 publicly available genome wide association (GWA) meta-analysis data resources. GenCAT uses SNP-level meta-analysis test statistics across all SNPs within a class of elements, as well as the size of the class and its unique correlation structure, to determine if the class is statistically meaningful. The novelty of findings is evaluated through investigation of regional signals. A subset of findings is validated using recently updated, larger meta-analysis resources. A simulation study is presented to characterize overall performance with respect to power, control of family-wise error and computational efficiency. All analysis is performed using the GenCAT package, R version 3.2.1. We demonstrate that class-level testing complements the common first-stage minP approach that involves individual SNP-level testing followed by post-hoc ascribing of statistically significant SNPs to genes and loci. GenCAT suggests 54 protein-coding genes at 41 distinct loci for the 13 CMD traits investigated in the discovery analysis, that are beyond the discoveries of minP alone. An additional application to biological pathways demonstrates flexibility in defining genetic classes. We conclude that it would be prudent to include class-level testing as standard practice in GWA analysis. GenCAT, for example, can be used as a simple, complementary and efficient strategy for class-level testing that leverages existing data resources, requires only summary level data in the form of test statistics, and adds significant value with respect to its potential for identifying multiple novel and clinically relevant trait associations.

  10. GSE, data management system programmers/User' manual

    NASA Technical Reports Server (NTRS)

    Schlagheck, R. A.; Dolerhie, B. D., Jr.; Ghiglieri, F. J.

    1974-01-01

    The GSE data management system is a computerized program which provides for a central storage source for key data associated with the mechanical ground support equipment (MGSE). Eight major sort modes can be requested by the user. Attributes that are printed automatically with each sort include the GSE end item number, description, class code, functional code, fluid media, use location, design responsibility, weight, cost, quantity, dimensions, and applicable documents. Multiple subsorts are available for the class code, functional code, fluid media, use location, design responsibility, and applicable document categories. These sorts and how to use them are described. The program and GSE data bank may be easily updated and expanded.

  11. Using Abbreviated Injury Scale (AIS) codes to classify Computed Tomography (CT) features in the Marshall System.

    PubMed

    Lesko, Mehdi M; Woodford, Maralyn; White, Laura; O'Brien, Sarah J; Childs, Charmaine; Lecky, Fiona E

    2010-08-06

    The purpose of the Abbreviated Injury Scale (AIS) is to code various types of Traumatic Brain Injuries (TBI) based on their anatomical location and severity. The Marshall CT Classification is used to identify those subgroups of brain-injured patients at higher risk of deterioration or mortality. The purpose of this study is to determine whether and how AIS coding can be translated to the Marshall Classification. Initially, a Marshall Class was allocated to each AIS code through cross-tabulation. This was agreed upon through several discussion meetings with experts from both fields (clinicians and AIS coders). Furthermore, in order to make this translation possible, some necessary assumptions with regard to the coding and classification of mass lesions and brain swelling were essential; these were all approved and made explicit. The proposed method involves two stages: first, to determine all possible Marshall Classes which a given patient can attract, based on allocated AIS codes, via cross-tabulation; and second, to assign one Marshall Class to each patient through an algorithm. This method can be easily programmed in computer software and would enable future important TBI research programs using trauma registry data.
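
    The two-stage procedure lends itself to a straightforward implementation. The sketch below illustrates the idea with placeholder table entries; the actual AIS-to-Marshall cross-tabulation and assignment algorithm are those published in the paper, not reproduced here:

      // Sketch of the two-stage approach: (1) collect every Marshall Class a
      // patient's AIS codes could attract (the cross-tabulation), then
      // (2) assign a single class by a simple rule. All table entries are
      // hypothetical placeholders, not the published cross-tabulation.
      #include <iostream>
      #include <map>
      #include <set>
      #include <string>
      #include <vector>

      int main() {
          std::map<std::string, std::set<int>> crosstab = {   // stage 1 lookup
              {"140650", {2}},                                // placeholder entries
              {"140684", {3, 4}},
              {"140629", {2, 5}},
          };
          std::vector<std::string> patientCodes = {"140650", "140684"};

          std::set<int> candidates;
          for (const auto& code : patientCodes) {
              auto it = crosstab.find(code);
              if (it != crosstab.end())
                  candidates.insert(it->second.begin(), it->second.end());
          }
          // Stage 2: one possible assignment rule -- take the most severe class.
          if (!candidates.empty())
              std::cout << "Assigned Marshall Class: " << *candidates.rbegin() << "\n";
      }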

  12. Using Abbreviated Injury Scale (AIS) codes to classify Computed Tomography (CT) features in the Marshall System

    PubMed Central

    2010-01-01

    Background The purpose of the Abbreviated Injury Scale (AIS) is to code various types of Traumatic Brain Injuries (TBI) based on their anatomical location and severity. The Marshall CT Classification is used to identify those subgroups of brain-injured patients at higher risk of deterioration or mortality. The purpose of this study is to determine whether and how AIS coding can be translated to the Marshall Classification. Methods Initially, a Marshall Class was allocated to each AIS code through cross-tabulation. This was agreed upon through several discussion meetings with experts from both fields (clinicians and AIS coders). Furthermore, in order to make this translation possible, some necessary assumptions with regard to the coding and classification of mass lesions and brain swelling were essential; these were all approved and made explicit. Results The proposed method involves two stages: first, to determine all possible Marshall Classes which a given patient can attract, based on allocated AIS codes, via cross-tabulation; and second, to assign one Marshall Class to each patient through an algorithm. Conclusion This method can be easily programmed in computer software and would enable future important TBI research programs using trauma registry data. PMID:20691038

  13. Implementation of Rule 8 of the International Code of Nomenclature of Prokaryotes for the renaming of classes. Request for an Opinion.

    PubMed

    Oren, Aharon; Parte, Aidan; Garrity, George M

    2016-10-01

    The new version of Rule 8 of the International Code of Nomenclature of Prokaryotes as approved in Istanbul in 2008 has the clear advantage of establishing a uniform way to name classes of prokaryotes, similar to the way other higher taxa are named. However, retroactive implementation of the modified Rule is problematic as it destabilizes the nomenclature and requires the replacement of a large number of names of classes that have been validly published, which would be in violation of Principle 1 of the Code. Therefore, we call upon the International Committee on Systematics of Prokaryotes and its Judicial Commission to reconsider the retroactivity of Rule 8.

  14. Future perspectives - proposal for Oxford Physiome Project.

    PubMed

    Oku, Yoshitaka

    2010-01-01

    The Physiome Project is an effort to understand living creatures using an "analysis by synthesis" strategy, i.e., by reproducing their behaviors. In order to achieve its goal, sharing developed models between different computer languages and application programs, so that they can be incorporated into integrated models, is critical. To date, several XML-based markup languages have been developed for this purpose. However, source code written in XML-based languages is very difficult to read and edit using text editors. An alternative is to use an object-oriented meta-language, which can be translated into different computer languages and transplanted to different application programs. Object-oriented languages are suitable for describing structural organization through hierarchical classes and for taking advantage of statistical properties to reduce the number of parameters while keeping the complexity of behaviors. Using object-oriented languages to describe each element and posting it to a public domain should be the next step in building up integrated models of the respiratory control system.

  15. New class of photonic quantum error correction codes

    NASA Astrophysics Data System (ADS)

    Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.

    We present a new class of quantum error correction codes for applications in quantum memories, communication and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in time step for the continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic "cat codes" but have several advantages including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.
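
    As a concrete instance of this family (a well-known example added here for illustration, not taken verbatim from the abstract), the smallest such code protecting against a single photon loss has the code words

      \[
        |W_\uparrow\rangle = \frac{|0\rangle + |4\rangle}{\sqrt{2}}, \qquad
        |W_\downarrow\rangle = |2\rangle .
      \]

    Both code words have mean photon number 2, and a single photon loss maps them to the mutually orthogonal states |3⟩ and |1⟩, so the error can be detected and corrected; note that the code words are exactly orthogonal, in contrast to the approximate orthogonality of coherent-state cat codes.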

  16. The association between measurements of antimicrobial use and resistance in the faeces microbiota of finisher batches.

    PubMed

    Andersen, V D; DE Knegt, L V; Munk, P; Jensen, M S; Agersø, Y; Aarestrup, F M; Vigre, H

    2017-10-01

    The objectives were to present three approaches for calculating antimicrobial (AM) use in pigs that take into account the rearing period and rearing site, and to study the association between these measurements and phenotypical resistance and abundance of resistance genes in faeces samples from 10 finisher batches. The AM use was calculated relative to the rearing period of the batches as (i) 'Finisher Unit Exposure' at unit level, (ii) 'Lifetime Exposure' at batch level and (iii) 'Herd Exposure' at herd level. A significant effect on the occurrence of tetracycline resistance measured by cultivation was identified for Lifetime Exposure for the AM class: tetracycline. Furthermore, for Lifetime Exposure for the AM classes: macrolide, broad-spectrum penicillin, sulfonamide and tetracycline use, as well as Herd Exposure for the AM classes: aminoglycoside, lincosamide and tetracycline use, a significant effect was observed on the occurrence of genes coding for the AM resistance classes: aminoglycoside, lincosamide, macrolide, β-lactam, sulfonamide and tetracycline. No effect was observed for Finisher Unit Exposure. Overall, the study shows that Lifetime Exposure is an efficient measurement of AM use in finisher batches, and has a significant effect on the occurrence of resistance, measured either by cultivation or metagenomics.

  17. Compression of digital images over local area networks. Appendix 1: Item 3. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Gorjala, Bhargavi

    1991-01-01

    Differential Pulse Code Modulation (DPCM) has been used with speech for many years. It has not been as successful for images because of poor edge performance. The only corruption in DPCM is quantizer error, but this corruption becomes quite large in the region of an edge because of the abrupt changes in the statistics of the signal. We introduce two improved DPCM schemes: Edge Correcting DPCM and Edge Preserving Differential Coding. These two coding schemes detect edges and take action to correct them. In the Edge Correcting scheme, the quantizer error for an edge is encoded using a recursive quantizer with entropy coding and sent to the receiver as side information. In the Edge Preserving scheme, when the quantizer input falls in the overload region, the quantizer error is encoded and sent to the receiver repeatedly until the quantizer input falls in the inner levels. These coding schemes therefore increase the bit rate in the region of an edge and require variable rate channels. We implement these two variable rate coding schemes on a token ring network. The timed token protocol supports two classes of messages: asynchronous and synchronous. The synchronous class provides a pre-allocated bandwidth and guaranteed response time. The remaining bandwidth is dynamically allocated to the asynchronous class. The Edge Correcting DPCM is simulated by considering the edge information under the asynchronous class. For the simulation of the Edge Preserving scheme, the amount of information sent each time is fixed, but the length of the packet, or the bit rate for that packet, is chosen depending on the available capacity. The performance of the network and of the image coding algorithms is studied.
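
    A minimal sketch of the basic DPCM loop makes the edge problem concrete: the prediction error is quantized, and at an edge it falls into the quantizer's overload region, which is exactly the condition the two schemes above detect. Step sizes and data below are invented for illustration:

      // Minimal DPCM sketch: predict each sample from the previous
      // reconstruction, quantize the prediction error, and note that at an
      // edge the error falls in the quantizer overload region -- the
      // condition the edge-correcting/edge-preserving schemes act on.
      #include <cstdlib>
      #include <iostream>
      #include <vector>

      int main() {
          std::vector<int> signal = {10, 11, 12, 120, 121, 122};  // "edge" at index 3
          const int step = 4;                  // uniform quantizer step size
          const int maxLevel = 3;              // levels beyond this are overload
          int recon = 0;                       // previous reconstructed sample
          for (int x : signal) {
              int err = x - recon;             // prediction error
              int level = err / step;          // quantize
              bool overload = std::abs(level) > maxLevel;
              if (overload) level = (level > 0 ? maxLevel : -maxLevel);
              recon += level * step;           // decoder sees this reconstruction
              std::cout << x << " -> " << recon
                        << (overload ? "  (edge: overload)" : "") << "\n";
          }
      }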

  18. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate' (ARA) codes. This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, so belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder structure for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when they represent LDPC codes. Based on density evolution for LDPC codes, through some examples of ARA codes we show that for a maximum variable node degree of 5, a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, its threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate code close to rate 1 can be obtained, with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.
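
    The building blocks named above are simple to sketch. The code below shows a toy repeat-accumulate encoder (repetition, interleaving, then a running-XOR accumulator); an ARA encoder, as described, simply adds another accumulator in front as a precoder. The interleaver here is an arbitrary fixed permutation for illustration:

      // Toy repeat-accumulate encoder: repeat each information bit q times,
      // permute (interleave), then accumulate (running XOR). An ARA code
      // places another accumulator in front as a precoder.
      #include <algorithm>
      #include <iostream>
      #include <numeric>
      #include <random>
      #include <vector>

      std::vector<int> accumulate_bits(const std::vector<int>& in) {
          std::vector<int> out(in.size());
          int acc = 0;
          for (size_t i = 0; i < in.size(); ++i) { acc ^= in[i]; out[i] = acc; }
          return out;
      }

      int main() {
          std::vector<int> info = {1, 0, 1, 1};
          const int q = 3;                                   // repetition factor
          std::vector<int> repeated;
          for (int b : info) repeated.insert(repeated.end(), q, b);

          std::vector<size_t> perm(repeated.size());         // toy interleaver
          std::iota(perm.begin(), perm.end(), 0);
          std::shuffle(perm.begin(), perm.end(), std::mt19937{42});
          std::vector<int> interleaved(repeated.size());
          for (size_t i = 0; i < perm.size(); ++i) interleaved[i] = repeated[perm[i]];

          for (int b : accumulate_bits(interleaved)) std::cout << b;  // RA code word
          std::cout << "\n";
      }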

  19. Field Encapsulation Library The FEL 2.2 User Guide

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Henze, Chris; Ellsworth, David

    1999-01-01

    This document describes version 2.2 of the Field Encapsulation Library (FEL), a library of mesh and field classes. FEL is a library for programmers - it is a "building block" enabling the rapid development of applications by a user. Since FEL is a library intended for code development, it is essential that enough technical detail be provided so that one can make full use of the code. Providing such detail requires some assumptions with respect to the reader's familiarity with the library implementation language, C++, particularly C++ with templates. We have done our best to make the explanations accessible to those who may not be completely C++ literate. Nevertheless, familiarity with the language will certainly help one's understanding of how and why things work the way they do. One consolation is that the level of understanding essential for using the library is significantly less than the level that one should have in order to modify or extend the library. One more remark on C++ templates: Templates have been a source of both joy and frustration for us. The frustration stems from the lack of mature or complete implementations that one has to work with. Template problems rear their ugly head particularly when porting. When porting C code, successfully compiling to a set of object files typically means that one is almost done. With templated C++ and the current state of the compilers and linkers, generating the object files is often only the beginning of the fun. On the other hand, templates are quite powerful. Used judiciously, templates enable more succinct designs and more efficient code. Templates also help with code maintenance. Designers can avoid creating objects that are the same in many respects, but not exactly the same. For example, FEL fields are templated by node type, thus the code for scalar fields and vector fields is shared. Furthermore, node type templating allows the library user to instantiate fields with data types not provided by the FEL authors. This type of flexibility would be difficult to offer without the support of the language. For users who may be having template-related problems, we offer the consolation that support for C++ templates is destined to improve with time. Efforts such as the Standard Template Library (STL) will inevitably drive vendors to provide more thorough, optimized tools for template code development. Furthermore, the benefits will become harder to resist for those who currently subscribe to the least-common-denominator "code it all in C" strategy. May FEL bring you both increased productivity and aesthetic satisfaction.
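
    As a sketch of the node-type templating described above (illustrative names only, not the actual FEL classes), a single field template can serve scalar and vector fields alike, and users can instantiate it with their own node types:

      // Sketch of node-type templating: one Field template serves scalar and
      // vector fields alike, and users may instantiate it with their own node
      // types. Names are illustrative, not the actual FEL 2.2 classes.
      #include <array>
      #include <cstddef>
      #include <iostream>
      #include <vector>

      template <typename NodeType>              // NodeType: float, a vector, a tensor...
      class Field {
      public:
          explicit Field(std::size_t n) : data_(n) {}
          NodeType& at(std::size_t i) { return data_[i]; }
          std::size_t size() const { return data_.size(); }
      private:
          std::vector<NodeType> data_;          // one code path for all node types
      };

      int main() {
          Field<float> scalar(100);                  // scalar field
          Field<std::array<float, 3>> vector(100);   // vector field, same template
          scalar.at(0) = 1.5f;
          vector.at(0) = {1.0f, 0.0f, 0.0f};
          std::cout << scalar.size() << " " << vector.size() << "\n";
      }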

  20. Assessment and Mitigation of Radiation, EMP, Debris & Shrapnel Impacts at Megajoule-Class Laser Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eder, D C; Anderson, R W; Bailey, D S

    2009-10-05

    The generation of neutron/gamma radiation, electromagnetic pulses (EMP), debris and shrapnel at megajoule-class laser facilities (NIF and LMJ) impacts experiments conducted at these facilities. The complex 3D numerical codes used to assess these impacts range from an established code that required minor modifications (MCNP - calculates neutron and gamma radiation levels in complex geometries), through a code that required significant modifications to treat new phenomena (EMSolve - calculates EMP from electrons escaping from laser targets), to a new code, ALE-AMR, that is being developed through a joint collaboration between LLNL, CEA, and UC (UCSD, UCLA, and LBL) for debris and shrapnel modelling.

  1. Explicit area-based accuracy assessment for mangrove tree crown delineation using Geographic Object-Based Image Analysis (GEOBIA)

    NASA Astrophysics Data System (ADS)

    Kamal, Muhammad; Johansen, Kasper

    2017-10-01

    Effective mangrove management requires spatially explicit information, such as a mangrove tree crown map, as a basis for ecosystem diversity studies and health assessment. Accuracy assessment is an integral part of any mapping activity, measuring the effectiveness of the classification approach. In geographic object-based image analysis (GEOBIA), assessment of the geometric accuracy (shape, symmetry and location) of the image objects created by image segmentation is required. In this study we used an explicit area-based accuracy assessment to measure the degree of similarity between the classification results and reference data from different aspects, including overall quality (OQ), user's accuracy (UA), producer's accuracy (PA) and overall accuracy (OA). We developed a rule set to delineate the mangrove tree crowns using a WorldView-2 pan-sharpened image. The reference map was obtained by visual delineation of the mangrove tree crown boundaries from a very high spatial resolution aerial photograph (7.5 cm pixel size). Ten random points, each with a 10 m radius circular buffer, were created to calculate the area-based accuracy assessment. The resulting circular polygons were used to clip both the classified image objects and the reference map for area comparisons. In this case, the area-based accuracy assessment resulted in 64% and 68% for OQ and OA, respectively. The overall quality shows the class-related area accuracy: the area correctly classified as tree crowns was 64% of the total area of tree crowns. The overall accuracy of 68%, on the other hand, was calculated as the percentage of all correctly classified classes (tree crowns and canopy gaps) relative to the total class area (the entire image). Overall, the area-based accuracy assessment was simple to implement and easy to interpret. It also shows explicitly the omission and commission error variations of object boundary delineation with colour-coded polygons.
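
    The two summary measures reported above reduce to simple area ratios once the agreement areas have been tallied. A minimal sketch with hypothetical area tallies (chosen to reproduce the reported 64%/68%, not taken from the study's data):

      // Sketch of the area-based measures: overall quality (OQ) is correctly
      // classified crown area over total reference crown area; overall
      // accuracy (OA) counts all correctly classified area (crowns and gaps)
      // over the total assessed area. Tallies below are hypothetical.
      #include <iostream>

      int main() {
          double crownCorrect = 160.0;   // crown area classified as crown (m^2)
          double crownTotal   = 250.0;   // total reference crown area
          double gapCorrect   = 54.0;    // gap area classified as gap
          double totalArea    = 314.0;   // entire assessed area

          std::cout << "OQ = " << crownCorrect / crownTotal << "\n";                // ~0.64
          std::cout << "OA = " << (crownCorrect + gapCorrect) / totalArea << "\n";  // ~0.68
      }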

  2. Millisecond timing on PCs and Macs.

    PubMed

    MacInnes, W J; Taylor, T L

    2001-05-01

    A real-time, object-oriented solution for displaying stimuli on Windows 95/98, MacOS and Linux platforms is presented. The program, written in C++, utilizes a special-purpose window class (GLWindow), OpenGL, and 32-bit graphics acceleration; it avoids display timing uncertainty by substituting the new window class for the default window code for each system. We report the outcome of tests for real-time capability across PC and Mac platforms running a variety of operating systems. The test program, which can be used as a shell for programming real-time experiments and testing specific processors, is available at http://www.cs.dal.ca/~macinnwj. We aim to give researchers a sense of the usefulness of our program, to highlight the ability of many multitasking environments to achieve real time, and to caution users about systems that may not achieve real time, even under optimal conditions.

  3. National Underground Mines Inventory

    DTIC Science & Technology

    1983-10-01

    system is well designed to minimize water accumulation on the drift levels. In many areas, sufficient water has accumulated to make the use of boots a... four characters designate field office. 17-18 State Code Pic 99 FIPS code for state in which mine is located. 19-21 County Code Pic 999 FIPS code for... Designates a general product class based on SIC code. 28-29 Mine Type Pic 99 Metal/Nonmetal mine type code. Based on subunit operations code and canvass code

  4. Proposal to include the rank of phylum in the International Code of Nomenclature of Prokaryotes.

    PubMed

    Oren, Aharon; da Costa, Milton S; Garrity, George M; Rainey, Fred A; Rosselló-Móra, Ramon; Schink, Bernhard; Sutcliffe, Iain; Trujillo, Martha E; Whitman, William B

    2015-11-01

    The International Code of Nomenclature of Prokaryotes covers the nomenclature of prokaryotes up to the rank of class. We propose here modifying the Code to include the rank of phylum so that names of phyla that fulfil the rules of the Code will obtain standing in the nomenclature.

  5. The forgotten grammatical category: Adjective use in agrammatic aphasia

    PubMed Central

    Meltzer-Asscher, Aya; Thompson, Cynthia K.

    2014-01-01

    Background In contrast to nouns and verbs, the use of adjectives in agrammatic aphasia has not been systematically studied. However, because of the linguistic and psycholinguistic attributes of adjectives, some of which overlap with nouns and some with verbs, analysis of adjective production is important for testing theories of word class production deficits in agrammatism. Aims The objective of the current study was to compare adjective use in agrammatic and healthy individuals, focusing on three factors: overall adjective production rate, production of predicative and attributive adjectives, and production of adjectives with complex argument structure. Method & Procedures Narratives elicited from 14 agrammatic and 14 control participants were coded for open class grammatical category production (i.e., nouns, verbs, adjectives), with each adjective also coded for its syntactic environment (attributive/predicative) and argument structure. Outcomes & Results Overall, agrammatic speakers used adjectives in proportions similar to that of cognitively healthy speakers. However, they exhibited a greater proportion of predicative adjectives and a lesser proportion of attributive adjectives, compared to controls. Additionally, agrammatic participants produced adjectives with less complex argument structure than controls. Conclusions The overall normal-like frequency of adjectives produced by agrammatic speakers suggests that agrammatism does not involve an inherent difficulty with adjectives as a word class or with predication, or that it entails a deficit in processing low imageability words. However, agrammatic individuals’ reduced production of attributive adjectives and adjectives with complements extends previous findings of an adjunction deficit and of impairment in complex argument structure processing, respectively, to the adjectival domain. The results suggest that these deficits are not tied to a specific grammatical category. PMID:24882945

  6. The forgotten grammatical category: Adjective use in agrammatic aphasia.

    PubMed

    Meltzer-Asscher, Aya; Thompson, Cynthia K

    2014-07-01

    In contrast to nouns and verbs, the use of adjectives in agrammatic aphasia has not been systematically studied. However, because of the linguistic and psycholinguistic attributes of adjectives, some of which overlap with nouns and some with verbs, analysis of adjective production is important for testing theories of word class production deficits in agrammatism. The objective of the current study was to compare adjective use in agrammatic and healthy individuals, focusing on three factors: overall adjective production rate, production of predicative and attributive adjectives, and production of adjectives with complex argument structure. Narratives elicited from 14 agrammatic and 14 control participants were coded for open class grammatical category production (i.e., nouns, verbs, adjectives), with each adjective also coded for its syntactic environment (attributive/predicative) and argument structure. Overall, agrammatic speakers used adjectives in proportions similar to that of cognitively healthy speakers. However, they exhibited a greater proportion of predicative adjectives and a lesser proportion of attributive adjectives, compared to controls. Additionally, agrammatic participants produced adjectives with less complex argument structure than controls. The overall normal-like frequency of adjectives produced by agrammatic speakers suggests that agrammatism does not involve an inherent difficulty with adjectives as a word class or with predication, or that it entails a deficit in processing low imageability words. However, agrammatic individuals' reduced production of attributive adjectives and adjectives with complements extends previous findings of an adjunction deficit and of impairment in complex argument structure processing, respectively, to the adjectival domain. The results suggest that these deficits are not tied to a specific grammatical category.

  7. Accumulate Repeat Accumulate Coded Modulation

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes combined with high-level modulation. Thus, at the decoder, belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples into bit reliabilities.

  8. Effects of Action Relations on the Configural Coding between Objects

    ERIC Educational Resources Information Center

    Riddoch, M. J.; Pippard, B.; Booth, L.; Rickell, J.; Summers, J.; Brownson, A.; Humphreys, G. W.

    2011-01-01

    Configural coding is known to take place between the parts of individual objects but has never been shown between separate objects. We provide novel evidence here for configural coding between separate objects through a study of the effects of action relations between objects on extinction. Patients showing visual extinction were presented with…

  9. Higher-Order Neural Networks Applied to 2D and 3D Object Recognition

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Reid, Max B.

    1994-01-01

    A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.

  10. Local structure preserving sparse coding for infrared target recognition

    PubMed Central

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lianfa

    2017-01-01

    Sparse coding performs well in image classification. However, robust target recognition requires a large set of comprehensive template images, and the sparse learning process is complex. We incorporate sparsity into a template matching concept to construct a local sparse structure matching (LSSM) model for general infrared target recognition. A local structure preserving sparse coding (LSPSc) formulation is proposed to simultaneously preserve the local sparse and structural information of objects. By adding a spatial local structure constraint into the classical sparse coding algorithm, LSPSc can improve the stability of sparse representation for targets and inhibit background interference in infrared images. Furthermore, a kernel LSPSc (K-LSPSc) formulation is proposed, which extends LSPSc to the kernel space to weaken the influence of the linear structure constraint in nonlinear natural data. Because of their anti-interference and fault-tolerant capabilities, both LSPSc- and K-LSPSc-based LSSM can implement target identification based on a simple template set, which needs just several images containing enough local sparse structures to learn a sufficient sparse structure dictionary of a target class. Specifically, this LSSM approach performs stably in target detection under scene, shape and occlusion variations. High performance is demonstrated on several datasets, indicating robust infrared target recognition in diverse environments and imaging conditions. PMID:28323824
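
    For reference, the classical sparse coding step that LSPSc constrains solves, for a signal x and dictionary D,

      \[
        \min_{\alpha} \; \lVert x - D\alpha \rVert_2^2 + \lambda \lVert \alpha \rVert_1 ,
      \]

    where the l1 penalty promotes sparse coefficients α. LSPSc augments this objective with an additional spatial local-structure penalty; the exact form of that term is given in the paper.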

  11. SoAx: A generic C++ Structure of Arrays for handling particles in HPC codes

    NASA Astrophysics Data System (ADS)

    Homann, Holger; Laenen, Francois

    2018-03-01

    The numerical study of physical problems often requires integrating the dynamics of a large number of particles evolving according to a given set of equations. Particles are characterized by the information they carry, such as an identity, a position, and other properties. There are, generally speaking, two possibilities for handling particles in high performance computing (HPC) codes. The concept of an Array of Structures (AoS) is in the spirit of the object-oriented programming (OOP) paradigm in that the particle information is implemented as a structure. Here, an object (realization of the structure) represents one particle and a set of many particles is stored in an array. In contrast, using the concept of a Structure of Arrays (SoA), a single structure holds several arrays, each representing one property (such as the identity) of the whole set of particles. The AoS approach is often implemented in HPC codes due to its handiness and flexibility. For a class of problems, however, it is known that the performance of SoA is much better than that of AoS. We confirm this observation for our particle problem. Using a benchmark we show that on modern Intel Xeon processors the SoA implementation is typically several times faster than the AoS one. On Intel's MIC co-processors the performance gap even attains a factor of ten. The same is true for GPU computing, using both computational and multi-purpose GPUs. Combining performance and handiness, we present the library SoAx, which has optimal performance (on CPUs, MICs, and GPUs) while providing the same handiness as AoS. For this, SoAx uses modern C++ design techniques such as template metaprogramming, which allows code to be generated automatically for user-defined heterogeneous data structures.
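
    The layout difference at the heart of the benchmark can be shown in a few lines. This is a hand-written sketch of the two alternatives (SoAx itself generates the SoA layout automatically from a single declaration):

      // The two particle layouts contrasted above. SoA keeps each property
      // contiguous in memory, so loops touching one property stream through
      // cache and vectorize well; AoS interleaves properties.
      #include <cstddef>
      #include <vector>

      struct ParticleAoS { long id; double x, y, z; };   // Array of Structures element

      struct ParticlesSoA {                              // Structure of Arrays
          std::vector<long> id;
          std::vector<double> x, y, z;
      };

      void advect(ParticlesSoA& p, double dt, const std::vector<double>& vx) {
          // Only x is touched: contiguous loads, straightforward SIMD.
          for (std::size_t i = 0; i < p.x.size(); ++i) p.x[i] += dt * vx[i];
      }

      int main() {
          ParticlesSoA p;
          p.x.assign(1000, 0.0);
          std::vector<double> vx(1000, 1.0);
          advect(p, 0.1, vx);
      }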

  12. u-Constacyclic codes over F_p+u{F}_p and their applications of constructing new non-binary quantum codes

    NASA Astrophysics Data System (ADS)

    Gao, Jian; Wang, Yongkang

    2018-01-01

    Structural properties of u-constacyclic codes over the ring F_p + uF_p are given, where p is an odd prime and u^2 = 1. Under a special Gray map from F_p + uF_p to F_p^2, some new non-binary quantum codes are obtained from this class of constacyclic codes.

  13. Safe, Multiphase Bounds Check Elimination in Java

    DTIC Science & Technology

    2010-01-28

    production of mobile code from source code, JIT compilation in the virtual machine, and application code execution. The code producer uses...invariants, and inequality constraint analysis) to identify and prove redundancy of bounds checks. During class-loading and JIT compilation, the virtual...unoptimized code if the speculated invariants do not hold. The combined effect of the multiple phases is to shift the effort associated with bounds

  14. Interframe vector wavelet coding technique

    NASA Astrophysics Data System (ADS)

    Wus, John P.; Li, Weiping

    1997-01-01

    Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients, which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC coding method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ scheme, in which the current state is determined by the previous channel symbol only, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings in this tree-like structure are made from the lower subbands to the higher subbands in order to exploit the nature of subband analysis in terms of the parent-child relationship. Class A and Class B video sequences from the MPEG-4 testing evaluations are used in the evaluation of this coding method.

  15. Scheduling System Assessment, and Development and Enhancement of Re-engineered Version of GPSS

    NASA Technical Reports Server (NTRS)

    Loganantharaj, Rasiah; Thomas, Bushrod; Passonno, Nicole

    1996-01-01

    The objective of this project is two-fold: first, to provide an evaluation of a commercially developed version of the ground processing scheduling system (GPSS) for its applicability to the Kennedy Space Center (KSC) ground processing problem; second, to work with the KSC GPSS development team and provide enhancements to the existing software. Systems reengineering is required to provide a sustainable system for the users and the software maintenance group. Using the LISP profile prototype code developed by the GPSS reverse reengineering groups as a building block, we have implemented the resource deconfliction portion of GPSS in Common LISP using its object-oriented features. The prototype corrects and extends some of the deficiencies of the current production version, and it uses and builds on the classes from the development team's profile prototype.

  16. The general theory of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Stanley, R. P.

    1993-01-01

    This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.

  17. 76 FR 66662 - Proposed Amendment of Class D Airspace; Santa Monica, CA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-27

    ...-0611; Airspace Docket No. 11-AWP-11] Proposed Amendment of Class D Airspace; Santa Monica, CA AGENCY... action proposes to modify Class D airspace at Santa Monica Municipal Airport, CA, to accommodate aircraft... an amendment to Title 14 Code of Federal Regulations (14 CFR) Part 71 by modifying Class D airspace...

  18. 76 FR 39038 - Proposed Establishment of Class E Airspace; Lebanon, PA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-05

    ...-0558; Airspace Docket No. 11-AEA-13] Proposed Establishment of Class E Airspace; Lebanon, PA AGENCY... action proposes to establish Class E Airspace at Lebanon, PA, to accommodate new Standard Instrument... amendment to Title 14, Code of Federal Regulations (14 CFR) part 71 to establish Class E airspace at Lebanon...

  19. 75 FR 39145 - Amendment of Class C Airspace; Flint, MI

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-08

    ...-0599; Airspace Docket No. 10-AWA-3] RIN 2120-AA66 Amendment of Class C Airspace; Flint, MI AGENCY... description of the Bishop International Airport, Flint, MI, Class C airspace area by amending the airport... defines the Class C airspace area's center point. The Rule This action amends Title 14 Code of Federal...

  20. A unified model of the standard genetic code.

    PubMed

    José, Marco V; Zamudio, Gabriel S; Morgado, Eberto R

    2017-03-01

    The Rodin-Ohno (RO) and the Delarue models divide the table of the genetic code into two classes of aminoacyl-tRNA synthetases (aaRSs I and II) with recognition from the minor or major groove sides of the tRNA acceptor stem, respectively. These models are asymmetric but they are biologically meaningful. On the other hand, the standard genetic code (SGC) can be derived from the primeval RNY code (R stands for purines, Y for pyrimidines and N any of them). In this work, the RO-model is derived by means of group actions, namely, symmetries represented by automorphisms, assuming that the SGC originated from a primeval RNY code. It turns out that the RO-model is symmetric in a six-dimensional (6D) hypercube. Conversely, using the same automorphisms, we show that the RO-model can lead to the SGC. In addition, the asymmetric Delarue model becomes symmetric by means of quotient group operations. We formulate isometric functions that convert the class aaRS I into the class aaRS II and vice versa. We show that the four polar requirement categories display a symmetrical arrangement in our 6D hypercube. Altogether these results cannot be attained, neither in two nor in three dimensions. We discuss the present unified 6D algebraic model, which is compatible with both the SGC (based upon the primeval RNY code) and the RO-model.

  1. Nonlinear viscoplasticity in ASPECT: benchmarking and applications to subduction

    NASA Astrophysics Data System (ADS)

    Glerum, Anne; Thieulot, Cedric; Fraters, Menno; Blom, Constantijn; Spakman, Wim

    2018-03-01

    ASPECT (Advanced Solver for Problems in Earth's ConvecTion) is a massively parallel finite element code originally designed for modeling thermal convection in the mantle with a Newtonian rheology. The code is characterized by modern numerical methods, high-performance parallelism and extensibility. This last characteristic is illustrated in this work: we have extended the use of ASPECT from global thermal convection modeling to upper-mantle-scale applications of subduction.

    Subduction modeling generally requires the tracking of multiple materials with different properties and with nonlinear viscous and viscoplastic rheologies. To this end, we implemented a frictional plasticity criterion that is combined with a viscous diffusion and dislocation creep rheology. Because ASPECT uses compositional fields to represent different materials, all material parameters are made dependent on a user-specified number of fields.

    The goal of this paper is primarily to describe and verify our implementations of complex, multi-material rheology by reproducing the results of four well-known two-dimensional benchmarks: the indentor benchmark, the brick experiment, the sandbox experiment and the slab detachment benchmark. Furthermore, we aim to provide hands-on examples for prospective users by demonstrating the use of multi-material viscoplasticity with three-dimensional, thermomechanical models of oceanic subduction, putting ASPECT on the map as a community code for high-resolution, nonlinear rheology subduction modeling.
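
    The combination of creep rheology and a frictional (pressure-dependent) yield criterion is commonly implemented as an effective-viscosity cap; a generic form, given here as a sketch of the standard approach rather than ASPECT's exact implementation, is

      \[
        \eta_{\mathrm{eff}} = \min\!\left( \eta_{\mathrm{creep}},\; \frac{\sigma_y}{2\,\dot{\varepsilon}_{II}} \right),
        \qquad
        \sigma_y = C\cos\phi + p\,\sin\phi ,
      \]

    where η_creep is the composite diffusion/dislocation creep viscosity, σ_y the Drucker-Prager yield stress with cohesion C, internal friction angle φ and pressure p, and ε̇_II the square root of the second invariant of the deviatoric strain rate.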

  2. C++ Coding Standards and Style Guide

    NASA Technical Reports Server (NTRS)

    Hughes, Steven; Jun, Linda; Shoan, Wendy

    2005-01-01

    This document is based on the "C Style Guide" (SEL-94-003). It contains recommendations for C++ implementations that build on, or in some cases replace, the style described in the C style guide. Style guidelines on any topics that are not covered in this document can be found in the "C Style Guide." An attempt has been made to indicate when these recommendations are just guidelines or suggestions versus when they are more strongly encouraged. Using coding standards makes code easier to read and maintain. General principles that maximize the readability and maintainability of C++ are: (1) Organize classes using encapsulation and information hiding techniques. (2) Enhance readability through the use of indentation and blank lines. (3) Add comments to header files to help users of classes. (4) Add comments to implementation files to help maintainers of classes. (5) Create names that are meaningful and readable.
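
    As a small illustration of several of these guidelines at work (an invented example, not taken from the guide), the header-style class below uses encapsulation, meaningful names, and comments aimed at users of the class:

      // Illustrative header following the guidelines above: encapsulation,
      // meaningful names, and comments aimed at users of the class.
      //
      // Counter: a bounded event counter.
      // Users: call Increment() once per event; Count() reports the total.
      class Counter {
      public:
          explicit Counter(int maxCount) : maxCount_(maxCount), count_(0) {}

          // Increments the counter; returns false once the bound is reached.
          bool Increment() {
              if (count_ >= maxCount_) return false;
              ++count_;
              return true;
          }

          int Count() const { return count_; }

      private:
          int maxCount_;   // information hiding: callers never touch these directly
          int count_;
      };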

  3. Topological quantum distillation.

    PubMed

    Bombin, H; Martin-Delgado, M A

    2006-11-03

    We construct a class of topological quantum codes to perform quantum entanglement distillation. These codes implement the whole Clifford group of unitary operations in a fully topological manner and without selective addressing of qubits. This allows us to extend their application also to quantum teleportation, dense coding, and computation with magic states.

  4. Visualization: a tool for enhancing students' concept images of basic object-oriented concepts

    NASA Astrophysics Data System (ADS)

    Cetin, Ibrahim

    2013-03-01

    The purpose of this study was twofold: to investigate students' concept images of class, object, and their relationship, and to help students enhance their learning of these notions with a visualization tool. Fifty-six second-year university students participated in the study. To investigate their concept images, the researcher developed a survey including open-ended questions, which was administered to the participants. Follow-up interviews with 12 randomly selected students were conducted to explore their answers to the survey in depth. The results of the first part of the research were utilized to construct visualization scenarios. The students used these scenarios to develop animations using Flash software. The study found that most of the students experienced difficulties in learning object-oriented notions. Overdependence on code-writing practice and examples, and incorrectly learned analogies, were determined to be the sources of their difficulties. Moreover, visualization was found to be a promising approach for facilitating students' concept images of basic object-oriented notions. The results of this study have implications for researchers and practitioners when designing programming instruction.
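
    The class/object distinction the study probes can be stated in a few lines of code: one class serves as the blueprint, while each object is an instance with its own state. A hypothetical teaching example (not one of the study's Flash scenarios):

      // A minimal illustration of the class/object distinction: one class
      // (the blueprint), two objects (instances), each with its own state.
      #include <iostream>
      #include <string>
      #include <utility>

      class Student {                        // the class: a description, not a thing
      public:
          explicit Student(std::string name) : name_(std::move(name)) {}
          void greet() const { std::cout << "Hi, I am " << name_ << "\n"; }
      private:
          std::string name_;                 // each object holds its own copy
      };

      int main() {
          Student a("Ada");                  // two distinct objects of one class
          Student b("Grace");
          a.greet();
          b.greet();
      }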

  5. Prediction of plant lncRNA by ensemble machine learning classifiers.

    PubMed

    Simopoulos, Caitlin M A; Weretilnyk, Elizabeth A; Golding, G Brian

    2018-05-02

    In plants, long non-protein coding RNAs are believed to have essential roles in development and stress responses. However, relative to advances on discerning biological roles for long non-protein coding RNAs in animal systems, this RNA class in plants is largely understudied. With comparatively few validated plant long non-coding RNAs, research on this potentially critical class of RNA is hindered by a lack of appropriate prediction tools and databases. Supervised learning models trained on data sets of mostly non-validated, non-coding transcripts have been previously used to identify this enigmatic RNA class, with applications largely focused on animal systems. Our approach uses a training set comprised only of empirically validated long non-protein coding RNAs from plant, animal, and viral sources to predict and rank candidate long non-protein coding gene products for future functional validation. Individual stochastic gradient boosting and random forest classifiers trained on only empirically validated long non-protein coding RNAs were constructed. In order to use the strengths of multiple classifiers, we combined multiple models into a single stacking meta-learner. This ensemble approach benefits from the diversity of several learners to effectively identify putative plant long non-coding RNAs from transcript sequence features. When the predicted genes identified by the ensemble classifier were compared to those listed in GreeNC, an established plant long non-coding RNA database, overlap for predicted genes from Arabidopsis thaliana, Oryza sativa and Eutrema salsugineum ranged from 51 to 83%, with the highest agreement in Eutrema salsugineum. Most of the highest ranking predictions from Arabidopsis thaliana were annotated as potential natural antisense genes, pseudogenes, transposable elements, or simply as computationally predicted hypothetical proteins. Due to the nature of this tool, the model can be updated as new long non-protein coding transcripts are identified and functionally verified. This ensemble classifier is an accurate tool that can be used to rank long non-protein coding RNA predictions for use in conjunction with gene expression studies. Selection of plant transcripts with a high potential for regulatory roles as long non-protein coding RNAs will advance research in the elucidation of long non-protein coding RNA function.
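
    As a rough illustration of the stacking approach described above (a sketch, not the authors' pipeline), the snippet below combines gradient boosting and random forest base learners under a logistic-regression meta-learner in scikit-learn; the synthetic feature matrix is a placeholder for sequence-derived features of validated transcripts.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import (GradientBoostingClassifier,
                                    RandomForestClassifier, StackingClassifier)
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      # Placeholder for sequence-derived features (e.g. ORF length, k-mer
      # frequencies) labeled lncRNA vs. protein-coding.
      X, y = make_classification(n_samples=500, n_features=20, random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      # Stack boosting and random-forest base learners under a meta-learner.
      stack = StackingClassifier(
          estimators=[("gbm", GradientBoostingClassifier(random_state=0)),
                      ("rf", RandomForestClassifier(random_state=0))],
          final_estimator=LogisticRegression())
      stack.fit(X_train, y_train)

      # Rank candidates by predicted lncRNA probability for later validation.
      ranking = stack.predict_proba(X_test)[:, 1].argsort()[::-1]
      print("top candidate indices:", ranking[:5])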

  6. Delineating slowly and rapidly evolving fractions of the Drosophila genome.

    PubMed

    Keith, Jonathan M; Adams, Peter; Stephen, Stuart; Mattick, John S

    2008-05-01

    Evolutionary conservation is an important indicator of function and a major component of bioinformatic methods to identify non-protein-coding genes. We present a new Bayesian method for segmenting pairwise alignments of eukaryotic genomes while simultaneously classifying segments into slowly and rapidly evolving fractions. We also describe an information criterion similar to the Akaike Information Criterion (AIC) for determining the number of classes. Working with pairwise alignments enables detection of differences in conservation patterns among closely related species. We analyzed three whole-genome and three partial-genome pairwise alignments among eight Drosophila species. Three distinct classes of conservation level were detected. Sequences comprising the most slowly evolving component were consistent across a range of species pairs, and constituted approximately 62-66% of the D. melanogaster genome. Almost all (>90%) of the aligned protein-coding sequence is in this fraction, suggesting much of it (comprising the majority of the Drosophila genome, including approximately 56% of non-protein-coding sequences) is functional. The size and content of the most rapidly evolving component was species dependent, and varied from 1.6% to 4.8%. This fraction is also enriched for protein-coding sequence (while containing significant amounts of non-protein-coding sequence), suggesting it is under positive selection. We also classified segments according to conservation and GC content simultaneously. This analysis identified numerous sub-classes of those identified on the basis of conservation alone, but was nevertheless consistent with that classification. Software, data, and results are available at www.maths.qut.edu.au/~keithj/. Genomic segments comprising the conservation classes are available in BED format.

  7. 49 CFR 173.52 - Classification codes and compatibility groups of explosives.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    49 CFR 173.52 (Class 1): Classification codes and compatibility groups of explosives. (a) The classification code ... consists of the division number followed by the compatibility group letter. Compatibility group letters are ...

  8. Big Software for SmallSats: Adapting CFS to CubeSat Missions

    NASA Technical Reports Server (NTRS)

    Cudmore, Alan P.; Crum, Gary; Sheikh, Salman; Marshall, James

    2015-01-01

    Expanding capabilities and mission objectives for SmallSats and CubeSats is driving the need for reliable, reusable, and robust flight software. While missions are becoming more complicated and the scientific goals more ambitious, the level of acceptable risk has decreased. Design challenges are further compounded by budget and schedule constraints that have not kept pace. NASA's Core Flight Software System (cFS) is an open source solution which enables teams to build flagship satellite level flight software within a CubeSat schedule and budget. NASA originally developed cFS to reduce mission and schedule risk for flagship satellite missions by increasing code reuse and reliability. The Lunar Reconnaissance Orbiter, which launched in 2009, was the first of a growing list of Class B rated missions to use cFS. Large parts of cFS are now open source, which has spurred adoption outside of NASA. This paper reports on the experiences of two teams using cFS for current CubeSat missions. The performance overheads of cFS are quantified, and the reusability of code between missions is discussed. The analysis shows that cFS is well suited to use on CubeSats and demonstrates the portability and modularity of cFS code.

  9. Application of unsteady aeroelastic analysis techniques on the national aerospace plane

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.; Spain, Charles V.; Soistmann, David L.; Noll, Thomas E.

    1988-01-01

    A presentation provided at the Fourth National Aerospace Plane Technology Symposium held in Monterey, California, in February 1988 is discussed. The objective is to provide current results of ongoing investigations to develop a methodology for predicting the aerothermoelastic characteristics of NASP-type (hypersonic) flight vehicles. Several existing subsonic and supersonic unsteady aerodynamic codes applicable to the hypersonic class of flight vehicles that are generally available to the aerospace industry are described. These codes were evaluated by comparing calculated results with measured wind-tunnel aeroelastic data. The agreement was quite good in the subsonic speed range but mixed in the supersonic range. In addition, a future endeavor to extend the aeroelastic analysis capability to hypersonic speeds is outlined. An investigation to identify the critical parameters affecting the aeroelastic characteristics of a hypersonic vehicle, to define and understand the various flutter mechanisms, and to develop trends for the important parameters using a simplified finite element model of the vehicle is summarized. This study showed the value of performing inexpensive and timely aeroelastic wind-tunnel tests, using simple to complex models representative of NASP configurations and root boundary conditions, to expand the experimental data base required for code validation.

  10. Endogenous short RNAs generated by Dicer 2 and RNA-dependent RNA polymerase 1 regulate mRNAs in the basal fungus Mucor circinelloides

    PubMed Central

    Nicolas, Francisco Esteban; Moxon, Simon; de Haro, Juan P.; Calo, Silvia; Grigoriev, Igor V.; Torres-Martínez, Santiago; Moulton, Vincent; Ruiz-Vázquez, Rosa M.; Dalmay, Tamas

    2010-01-01

    Endogenous short RNAs (esRNAs) play diverse roles in eukaryotes and usually are produced from double-stranded RNA (dsRNA) by Dicer. esRNAs are grouped into different classes based on biogenesis and function but not all classes are present in all three eukaryotic kingdoms. The esRNA register of fungi is poorly described compared to other eukaryotes and it is not clear what esRNA classes are present in this kingdom and whether they regulate the expression of protein coding genes. However, evidence that some dicer mutant fungi display altered phenotypes suggests that esRNAs play an important role in fungi. Here, we show that the basal fungus Mucor circinelloides produces new classes of esRNAs that map to exons and regulate the expression of many protein coding genes. The largest class of these exonic-siRNAs (ex-siRNAs) are generated by RNA-dependent RNA Polymerase 1 (RdRP1) and dicer-like 2 (DCL2) and target the mRNAs of protein coding genes from which they were produced. Our results expand the range of esRNAs in eukaryotes and reveal a new role for esRNAs in fungi. PMID:20427422

  11. Objects tell us what action we can expect: dissociating brain areas for retrieval and exploitation of action knowledge during action observation in fMRI

    PubMed Central

    Schubotz, Ricarda I.; Wurm, Moritz F.; Wittmann, Marco K.; von Cramon, D. Yves

    2014-01-01

    Objects are reminiscent of actions often performed with them: knife and apple remind us of peeling the apple or cutting it. Mnemonic representations of object-related actions (action codes) evoked by the sight of an object may constrain and hence facilitate recognition of unfolding actions. The present fMRI study investigated whether and how action codes influence brain activation during action observation. The average number of action codes (NAC) of 51 sets of objects was rated by a group of n = 24 participants. In an fMRI study, different volunteers were asked to recognize actions performed with the same objects presented in short videos. To disentangle areas reflecting the storage of action codes from those exploiting them, we showed object-compatible and object-incompatible (pantomime) actions. Areas storing action codes were considered to positively co-vary with NAC in both object-compatible and object-incompatible action; due to its role in tool-related tasks, we here hypothesized left anterior inferior parietal cortex (aIPL). In contrast, areas exploiting action codes were expected to show this correlation only in object-compatible but not incompatible action, as only object-compatible actions match one of the active action codes. For this interaction, we hypothesized ventrolateral premotor cortex (PMv) to join aIPL due to its role in biasing competition in IPL. We found left anterior intraparietal sulcus (IPS) and left posterior middle temporal gyrus (pMTG) to co-vary with NAC. In addition to these areas, action codes increased activity in object-compatible action in bilateral PMv, right IPS, and lateral occipital cortex (LO). Findings suggest that during action observation, the brain derives possible actions from perceived objects, and uses this information to shape action recognition. In particular, the number of expectable actions scales the activity level at PMv, IPL, and pMTG, but only PMv reflects their biased competition while the observed action unfolds. PMID:25009519

  12. Utilization Rates of Implantable Cardioverter-Defibrillators for Primary Prevention of Sudden Cardiac Death: A 2012 Calculation for a Midwestern Health Referral Region

    PubMed Central

    Hoang, Allen; Shen, Changyu; Zheng, James; Taylor, Stanley; Groh, William J; Rosenman, Marc; Buxton, Alfred E.; Chen, Peng-Sheng

    2014-01-01

    Background Utilization rates (URs) for implantable cardioverter-defibrillators (ICDs) for primary prevention of sudden cardiac death (PPSCD) are lacking in the community. Objective To establish the ICD UR in central Indiana. Methods A query run on two hospitals in a health information exchange database in Indianapolis identified patients between 2011 and 2012 with left ventricular ejection fraction (EF) ≤0.35. ICD-eligibility and utilization were determined from chart review. Results We identified 1,863 patients with at least one low-EF study. Two cohorts were analyzed: 1,672 patients without, and 191 patients with, ICD-9-CM procedure code 37.94 for ICD placement. We manually reviewed a stratified (by hospital) random sample of 300 patients from the no-ICD procedure code cohort and found that 48 (16%) had no ICD but had class I indications for ICD. Eight of 300 (2.7%) actually had ICD implantation for PPSCD. Review of all 191 patients in the ICD procedure code cohort identified 70 with ICD implantation for PPSCD. The ICD UR (ratio between patients with ICD for PPSCD and all with indication) was 38% overall (95% CI 28–49%). URs were 48% for males (95% CI 34–61%), 21% for females (95% CI 16–26%, p=0.0002 vs males), 40% for whites (95% CI 27–53%), and 37% for blacks (95% CI 28–46%, p=0.66 vs whites). Conclusions The ICD UR is 38% among patients meeting Class I indications, suggesting further opportunities to improve guideline compliance. Furthermore, this study illustrates limitations in calculating ICD UR using large electronic repositories without hands-on chart review. PMID:24566233

  13. The impact of differences between subjective and objective social class on life satisfaction among the Korean population in early old age: Analysis of Korean longitudinal study on aging.

    PubMed

    Choi, Young; Kim, Jae-Hyun; Park, Eun-Cheol

    2016-01-01

    Several previous studies have examined the effects of socioeconomic status or subjective social strata on life satisfaction. However, no previous study has examined the relationship between social class and life satisfaction in terms of a disparity between subjective and objective social status. Our objective was to investigate the relationship between differences in subjective and objective social class and life satisfaction. Data from the Korean Longitudinal Study of Aging, with 8252 participants aged 45 or older, were used. Life satisfaction was measured by the question, "How satisfied are you with your quality of life?" The main independent variable was the difference between objective (income and education) and subjective social class, classified into nine categories (ranging from high-high to low-low). This association was investigated using a linear mixed model, as the two waves of data are nested within individuals. Lower social class (income, education, subjective social class) was associated with dissatisfaction. The impact of objective and subjective social class on life satisfaction varied according to the level of difference between objective and subjective social class. Namely, an individual's life satisfaction declined as objective social class decreased at the same level of subjective social class (i.e., HH, MH, LH). In both dimensions of objective social class (education and income), an individual's life satisfaction declined as subjective social class decreased by one level (i.e., HH, HM, HL). Our findings indicate that social support is needed to improve life satisfaction among the population aged 45 or older with low social class. The government should place increased focus on policies that promote the life satisfaction of elderly Koreans with low objective social class as well as those with low subjective social class. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Crystal Symmetry Algorithms in a High-Throughput Framework for Materials

    NASA Astrophysics Data System (ADS)

    Taylor, Richard

    The high-throughput framework AFLOW that has been developed and used successfully over the last decade is improved to include fully integrated software for crystallographic symmetry characterization. The standards used in the symmetry algorithms conform to the conventions and prescriptions given in the International Tables for Crystallography (ITC). A standard cell choice with standard origin is selected, and the space group, point group, Bravais lattice, crystal system, lattice system, and representative symmetry operations are determined. Following the conventions of the ITC, the Wyckoff sites are also determined, and their labels and site symmetry are provided. The symmetry code makes no assumptions on the input cell orientation, origin, or reduction, and has been integrated into the AFLOW high-throughput framework for materials discovery by adding to the existing code base and making use of existing classes and functions. The software is written in object-oriented C++ for flexibility and reuse. A performance analysis and an examination of the algorithms' scaling with cell size and symmetry are also reported.

  15. Patterns of protective factors in an intervention for the prevention of suicide and alcohol abuse with Yup'ik Alaska Native youth.

    PubMed

    Henry, David; Allen, James; Fok, Carlotta Ching Ting; Rasmus, Stacy; Charles, Bill

    2012-09-01

    Community-based participatory research (CBPR) with American Indian and Alaska Native communities creates distinct interventions, complicating cross-setting comparisons. The objective of this study is to develop a method for quantifying intervention exposure in CBPR interventions that differ in their forms across communities, permitting multi-site evaluation. Attendance data from 195 youth from three Yup'ik communities were coded for the specific protective factor exposure of each youth, based on information from the intervention manual. The coded attendance data were then submitted to latent class analysis to obtain participation patterns. Five patterns of exposure to protective factors were obtained: Internal, External, Limits, Community/family, and Low Protection. Patterns differed significantly by community and youth age. Standardizing interventions by the functions an intervention serves (protective factors promoted) instead of their forms or components (specific activities) can assist in refining CBPR interventions and evaluating effects in culturally distinct settings.

  16. FY16 ASME High Temperature Code Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swindeman, M. J.; Jetter, R. I.; Sham, T. -L.

    2016-09-01

    One of the objectives of the ASME high temperature Code activities is to develop and validate both improvements and the basic features of Section III, Division 5, Subsection HB, Subpart B (HBB). The overall scope of this task is to develop a computer program to be used to assess whether or not a specific component under specified loading conditions will satisfy the elevated temperature design requirements for Class A components in Section III, Division 5, Subsection HB, Subpart B (HBB). There are many features and alternative paths of varying complexity in HBB. The initial focus of this task is a basic path through the various options for a single reference material, 316H stainless steel. However, the program will be structured for eventual incorporation of all the features and permitted materials of HBB. Since this task has recently been initiated, this report focuses on the description of the initial path forward and an overall description of the approach to computer program development.

  17. Pure global acalculia following a left subangular lesion.

    PubMed

    Martory, M D; Mayer, E; Pegna, A J; Annoni, J M; Landis, T; Khateb, A

    2003-08-01

    We describe the case of a right-handed patient who presented a severe acalculia in the context of a pure Gerstmann syndrome following a subangular lesion that spared the left inferior parietal lobule (IPL). The patient showed impairments in Arabic and verbal codes, in number production and comprehension, as well as in numerical facts and problem solving. By using the EC301 calculation battery, semantic and syntactic tasks in Arabic and verbal codes, we tested the different hypotheses raised by the cognitive neuropsychological models of acalculia. The patients' difficulties, which were not associated with a general intellectual deterioration, and those affecting number processing as a particular semantic class, were indicative of a "global acalculia". This deficit, which exceeded the anarithmetia usually described in Gerstmann syndrome following left IPL lesion, suggested that the isolation of this area may constitute a sufficient condition for producing such a global acalculia. These results are discussed in terms of a disorder in the manipulation of mental images of spatially related objects.

  18. The audiovisual structure of onomatopoeias: An intrusion of real-world physics in lexical creation.

    PubMed

    Taitz, Alan; Assaneo, M Florencia; Elisei, Natalia; Trípodi, Mónica; Cohen, Laurent; Sitt, Jacobo D; Trevisan, Marcos A

    2018-01-01

    Sound-symbolic word classes are found in different cultures and languages worldwide. These words are continuously produced to code complex information about events. Here we explore the capacity of creative language to convey complex multisensory information in a controlled experiment, in which our participants improvised onomatopoeias from noisy moving objects presented in audio, visual, and audiovisual formats. We found that consonants communicate movement types (slide, hit, or ring) mainly through the manner of articulation in the vocal tract. Vowels communicate shapes in visual stimuli (spiky or rounded) and sound frequencies in auditory stimuli through the configuration of the lips and tongue. A machine learning model was trained to classify movement types and used to validate generalizations of our results across formats. When we applied the classifier to a list of cross-linguistic onomatopoeias, simple actions were correctly classified, while different aspects were selected to build onomatopoeias of complex actions. These results show how the different aspects of complex sensory information are coded and how they interact in the creation of novel onomatopoeias.

  19. Low-Density Parity-Check (LDPC) Codes Constructed from Protographs

    NASA Astrophysics Data System (ADS)

    Thorpe, J.

    2003-08-01

    We introduce a new class of low-density parity-check (LDPC) codes constructed from a template called a protograph. The protograph serves as a blueprint for constructing LDPC codes of arbitrary size whose performance can be predicted by analyzing the protograph. We apply standard density evolution techniques to predict the performance of large protograph codes. Finally, we use a randomized search algorithm to find good protographs.
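
    For readers unfamiliar with the construction, the NumPy sketch below lifts a small, purely illustrative base matrix into a full parity-check matrix by the copy-and-permute operation, using random circulant permutations; the base matrix and lift size are arbitrary choices, not protographs from the paper.

      import numpy as np

      def lift_protograph(base, N, rng):
          """Expand a protograph base matrix into an N-fold lifted
          parity-check matrix using random circulant permutations."""
          m, n = base.shape
          H = np.zeros((m * N, n * N), dtype=np.uint8)
          I = np.eye(N, dtype=np.uint8)
          for i in range(m):
              for j in range(n):
                  # Each protograph edge becomes a shifted identity block;
                  # parallel edges get independent shifts (summed mod 2).
                  for _ in range(base[i, j]):
                      H[i*N:(i+1)*N, j*N:(j+1)*N] ^= np.roll(
                          I, rng.integers(N), axis=1)
          return H

      # Illustrative 2x3 protograph; entries count parallel edges.
      base = np.array([[1, 2, 1],
                       [1, 1, 2]])
      H = lift_protograph(base, N=8, rng=np.random.default_rng(0))
      print(H.shape)  # (16, 24): a small rate-1/3 code inheriting the protograph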

  20. PatternCoder: A Programming Support Tool for Learning Binary Class Associations and Design Patterns

    ERIC Educational Resources Information Center

    Paterson, J. H.; Cheng, K. F.; Haddow, J.

    2009-01-01

    PatternCoder is a software tool to aid student understanding of class associations. It has a wizard-based interface which allows students to select an appropriate binary class association or design pattern for a given problem. Java code is then generated which allows students to explore the way in which the class associations are implemented in a…

  1. A Study of the Communicative Abilities of Disadvantaged Children. Final Report.

    ERIC Educational Resources Information Center

    Osser, Harry; And Others

    The purpose of this series of four studies was to precisely describe the code and dialect features of the speech of both lower class Negro children and middle class white children. In the first study, 16 white middle class (WMC) children were compared to 16 Negro lower class (NLC) children on both an imitation and a comprehension task. The WMC…

  2. 76 FR 38580 - Proposed Amendment of Class D Airspace; Eglin AFB, FL

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-01

    [...-0087; Airspace Docket No. 11-ASO-12] Proposed Amendment of Class D Airspace; Eglin AFB, FL. This action proposes to amend Class D airspace in the Eglin Air Force Base (AFB), FL airspace area. The Destin ... amendment to Title 14, Code of Federal Regulations (14 CFR) part 71 to amend Class D airspace in the Eglin ...

  3. An Investigation of Eighth Grade Students' Problem Posing Skills (Turkey Sample)

    ERIC Educational Resources Information Center

    Arikan, Elif Esra; Ünal, Hasan

    2015-01-01

    Problem posing is a creative activity in mathematics education. The purpose of the study was to explore eighth grade students' problem posing ability. Three learning domains (problems requiring the four basic operations, fractions, and geometry) were chosen for this purpose. There were two classes, coded as class A and class B. Class A…

  4. Framing of Transitional Pedagogic Practices in the Sciences: Enabling Access

    ERIC Educational Resources Information Center

    Ellery, Karen

    2017-01-01

    Educational literature shows that students from working-class backgrounds are significantly less likely to persist to completion in higher education than middle-class students. This paper draws theoretically and analytically on Bernstein's ([1990. "Class, Codes and Control, Volume IV: The Structuring of Pedagogic Discourse." London:…

  5. Code-Mixing as a Bilingual Instructional Strategy

    ERIC Educational Resources Information Center

    Jiang, Yih-Lin Belinda; García, Georgia Earnest; Willis, Arlette Ingram

    2014-01-01

    This study investigated code-mixing practices, specifically the use of L2 (English) in an L1 (Chinese) class in a U.S. bilingual program. Our findings indicate that the code-mixing practices made and prompted by the teacher served five pedagogical functions: (a) to enhance students' bilingualism and bilingual learning, (b) to review and…

  6. Codes, Code-Switching, and Context: Style and Footing in Peer Group Bilingual Play

    ERIC Educational Resources Information Center

    Kyratzis, Amy; Tang, Ya-Ting; Koymen, S. Bahar

    2009-01-01

    According to Bernstein (A sociolinguistic approach to socialization; with some reference to educability, Basil Blackwell Ltd., 1972), middle-class parents transmit an elaborated code to their children that relies on verbal means, rather than paralinguistic devices or shared assumptions, to express meanings. Bernstein's ideas were used to argue…

  7. Defense Advanced Research Projects Agency (DARPA) Agent Markup Language Computer Aided Knowledge Acquisition

    DTIC Science & Technology

    2005-06-01

    Excerpt of the record's RDF/OWL markup:

      <rdfs:subClassOf rdf:resource="#Condition"/>
      <rdfs:label>Economic Self-Sufficiency Class</rdfs:label>
      <cnd:categoryCode>C</cnd:categoryCode>
      ...
      <cnd:index>3.3.4.1</cnd:index>
      <cnd:title>Economic Self-Sufficiency</cnd:title>
      <cnd:definition>The ability of a nation to ...</cnd:definition>
      ... rdf:resource="#International_Economic_Position"/>
      <cnd:subCategory rdf:resource="#Self-Sufficiency_In_Food"/>
      <cnd:subCategory rdf:resource="#Self ...

  8. Find a Diabetes Prevention Program Near You

    MedlinePlus

    Programs are offered throughout the country. Find an in-person class near you by searching by ZIP code and distance, or find an online program.

  9. 40 CFR 86.085-37 - Production vehicles and engines.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... transmission class. (2) Base level means a unique combination of basic engine, inertia weight, and transmission class. (3) Vehicle configuration means a unique combination of basic engine, engine code, inertia weight...

  10. Doclet To Synthesize UML

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Osborne, Richard N.

    2005-01-01

    The RoseDoclet computer program extends the capability of Java doclet software to automatically synthesize Unified Modeling Language (UML) content from Java language source code. [Doclets are Java-language programs that use the doclet application programming interface (API) to specify the content and format of the output of Javadoc. Javadoc is a program, originally designed to generate API documentation from Java source code, now also useful as an extensible engine for processing Java source code.] RoseDoclet takes advantage of Javadoc comments and tags already in the source code to produce a UML model of that code. RoseDoclet applies the doclet API to create a doclet passed to Javadoc. The Javadoc engine applies the doclet to the source code, emitting the output format specified by the doclet. RoseDoclet emits a Rose model file and populates it with fully documented packages, classes, methods, variables, and class diagrams identified in the source code. The way in which UML models are generated can be controlled by use of new Javadoc comment tags that RoseDoclet provides. The advantage of using RoseDoclet is that Javadoc documentation becomes leveraged for two purposes: documenting the as-built API and keeping the design documentation up to date.

  11. Uni10: an open-source library for tensor network algorithms

    NASA Astrophysics Data System (ADS)

    Kao, Ying-Jer; Hsieh, Yun-Da; Chen, Pochung

    2015-09-01

    We present an object-oriented open-source library for developing tensor network algorithms written in C++ called Uni10. With Uni10, users can build a symmetric tensor from a collection of bonds, while the bonds are constructed from a list of quantum numbers associated with different quantum states. It is easy to label and permute the indices of the tensors and access a block associated with a particular quantum number. Furthermore, a network class is used to describe arbitrary tensor network structures and to perform network contractions efficiently. We give an overview of the basic structure of the library and the hierarchy of the classes. We present examples of the construction of a spin-1 Heisenberg Hamiltonian and the implementation of the tensor renormalization group algorithm to illustrate the basic usage of the library. The library described here is particularly well suited for exploring and rapidly prototyping novel tensor network algorithms and for implementing highly efficient codes for existing algorithms.
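
    As a point of reference, the index bookkeeping that Uni10's network class automates can be written out by hand in NumPy; the sketch below (plain NumPy, not Uni10's API) evaluates a two-site expectation value of the spin-1 Heisenberg bond Hamiltonian mentioned in the abstract with an explicit einsum contraction.

      import numpy as np

      # Spin-1 operators in the basis |1>, |0>, |-1>.
      Sz = np.diag([1.0, 0.0, -1.0])
      Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1)  # raising operator
      Sm = Sp.T                                     # lowering operator

      # Two-site bond Hamiltonian: Sx.Sx + Sy.Sy + Sz.Sz.
      H = 0.5 * (np.kron(Sp, Sm) + np.kron(Sm, Sp)) + np.kron(Sz, Sz)

      # <psi|H|psi> for a random normalized two-site state; a tensor
      # network library replaces this explicit index bookkeeping.
      psi = np.random.default_rng(0).normal(size=(3, 3))
      psi /= np.linalg.norm(psi)
      Ht = H.reshape(3, 3, 3, 3)  # indices: out1, out2, in1, in2
      energy = np.einsum("ab,abcd,cd->", psi, Ht, psi)
      print(energy)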

  12. Cross-label Suppression: a Discriminative and Fast Dictionary Learning with Group Regularization.

    PubMed

    Wang, Xiudong; Gu, Yuantao

    2017-05-10

    This paper addresses image classification through learning a compact and discriminative dictionary efficiently. Given a structured dictionary with each atom (a column of the dictionary matrix) related to some label, we propose a cross-label suppression constraint to enlarge the difference among representations for different classes. Meanwhile, we introduce group regularization to enforce representations to preserve the label properties of the original samples, meaning the representations for the same class are encouraged to be similar. With the cross-label suppression, we do not resort to the frequently used l0-norm or l1-norm for coding, and obtain computational efficiency without losing the discriminative power for categorization. Moreover, two simple classification schemes are also developed to take full advantage of the learnt dictionary. Extensive experiments on six data sets covering face recognition, object categorization, scene classification, texture recognition, and sport action categorization are conducted, and the results show that the proposed approach can outperform many recently presented dictionary algorithms in both recognition accuracy and computational efficiency.

  13. Integrating end-to-end threads of control into object-oriented analysis and design

    NASA Technical Reports Server (NTRS)

    Mccandlish, Janet E.; Macdonald, James R.; Graves, Sara J.

    1993-01-01

    Current object-oriented analysis and design methodologies fall short in their use of mechanisms for identifying threads of control for the system being developed. The scenarios which typically describe a system are more global than looking at the individual objects and representing their behavior. Unlike conventional methodologies that use data flow and process-dependency diagrams, object-oriented methodologies do not provide a model for representing these global threads end-to-end. Tracing through threads of control is key to ensuring that a system is complete and timing constraints are addressed. The existence of multiple threads of control in a system necessitates a partitioning of the system into processes. This paper describes the application and representation of end-to-end threads of control to the object-oriented analysis and design process using object-oriented constructs. The issue of representation is viewed as a grouping problem, that is, how to group classes/objects at a higher level of abstraction so that the system may be viewed as a whole with both classes/objects and their associated dynamic behavior. Existing object-oriented development methodology techniques are extended by adding design-level constructs termed logical composite classes and process composite classes. Logical composite classes are design-level classes which group classes/objects both logically and by thread of control information. Process composite classes further refine the logical composite class groupings by using process partitioning criteria to produce optimum concurrent execution results. The goal of these design-level constructs is to ultimately provide the basis for a mechanism that can support the creation of process composite classes in an automated way. Using an automated mechanism makes it easier to partition a system into concurrently executing elements that can be run in parallel on multiple processors.

  14. New double-byte error-correcting codes for memory systems

    NASA Technical Reports Server (NTRS)

    Feng, Gui-Liang; Wu, Xinen; Rao, T. R. N.

    1996-01-01

    Error-correcting or error-detecting codes have been used in the computer industry to increase reliability, reduce service costs, and maintain data integrity. The single-byte error-correcting and double-byte error-detecting (SbEC-DbED) codes have been successfully used in computer memory subsystems. There are many methods to construct double-byte error-correcting (DBEC) codes. In the present paper we construct a class of double-byte error-correcting codes, which are more efficient than those known to be optimum, and a decoding procedure for our codes is also considered.

  15. On the Effects of Social Class on Language Use: A Fresh Look at Bernstein's Theory

    ERIC Educational Resources Information Center

    Aliakbari, Mohammad; Allahmoradi, Nazal

    2014-01-01

    Basil Bernstein (1971) introduced the notion of the Restricted and the Elaborated code, claiming that working-class speakers have access only to the former but middle-class members to both. In an attempt to test this theory in the Iranian context and to investigate the effect of social class on the quality of students' language use, we examined the…

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Andrew; Lawrence, Earl

    The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", automates the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions with only knowledge of the object's geometry and mass.
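
    A minimal sketch of the first two steps of this workflow, with SciPy's Latin Hypercube sampler and a scikit-learn Gaussian process standing in for the suite's own TPMC simulations and fitting code; the drag-coefficient function and its two-parameter input space are hypothetical stand-ins.

      import numpy as np
      from scipy.stats import qmc
      from sklearn.gaussian_process import GaussianProcessRegressor

      # Hypothetical stand-in for an expensive TPMC drag calculation.
      def tpmc_drag_coefficient(x):
          return 2.2 + 0.3 * np.sin(3 * x[0]) + 0.1 * x[1] ** 2

      # Step 1: Latin Hypercube Sample of the parameter space (the suite
      # uses 1,000 ensemble members; 100 here keeps the sketch fast).
      sampler = qmc.LatinHypercube(d=2, seed=0)
      X = qmc.scale(sampler.random(100), l_bounds=[0, -1], u_bounds=[1, 1])
      y = np.array([tpmc_drag_coefficient(x) for x in X])

      # Step 2: fit a Gaussian process as the response surface model.
      gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

      # The RSM now interpolates the drag coefficient cheaply.
      cd, sd = gp.predict([[0.5, 0.0]], return_std=True)
      print(cd[0], "+/-", sd[0])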

  17. 75 FR 28077 - Small Business Size Standards: Waiver of the Nonmanufacturer Rule

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-19

    ... class waiver of the Nonmanufacturer Rule for Improved Outer Tactical Vests and related accessories under... Class Waiver for Improved Outer Tactical Vests, PSC 8470, NAICS code 339113, Federal Register Notice...

  18. Quantized phase coding and connected region labeling for absolute phase retrieval.

    PubMed

    Chen, Xiangcheng; Wang, Yuwei; Wang, Yajun; Ma, Mengchao; Zeng, Chunnian

    2016-12-12

    This paper proposes an absolute phase retrieval method for complex object measurement based on quantized phase-coding and connected region labeling. A specific code sequence is embedded into the quantized phase of three coded fringes. Connected regions of different codes are labeled and assigned 3-digit codes combining the current period and its neighbors. Wrapped phase spanning more than 36 periods can be restored with reference to the code sequence. Experimental results verify the capability of the proposed method to measure multiple isolated objects.
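
    The toy sketch below shows the core idea in one dimension, assuming the coded fringe has already been decoded into per-pixel period indices; the paper's three coded fringes, connected-region labeling, and 3-digit codes are omitted.

      import numpy as np

      def absolute_phase(phi_wrapped, code_image, n_codes):
          """Recover absolute phase phi + 2*pi*k, where the period index k
          is read off an (assumed) code image quantized to n_codes levels."""
          k = np.rint(code_image * (n_codes - 1)).astype(int)
          return phi_wrapped + 2 * np.pi * k

      # Simulated 1-D example: 4 fringe periods, codes 0..3.
      true_phase = np.linspace(0, 8 * np.pi, 400, endpoint=False)
      phi = np.mod(true_phase, 2 * np.pi)                # wrapped phase
      code = np.floor(true_phase / (2 * np.pi)) / 3.0    # normalized code
      recovered = absolute_phase(phi, code, n_codes=4)
      print(np.allclose(recovered, true_phase))          # True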

  19. MFV-class: a multi-faceted visualization tool of object classes.

    PubMed

    Zhang, Zhi-meng; Pan, Yun-he; Zhuang, Yue-ting

    2004-11-01

    Classes are key software components in an object-oriented software system. In many industrial OO software systems, there are classes that have complicated structure and relationships. So in the processes of software maintenance, testing, reengineering, reuse, and restructuring, it is a challenge for software engineers to understand these classes thoroughly. This paper proposes a class comprehension model based on constructivist learning theory and implements a software visualization tool (MFV-Class) to help in the comprehension of a class. The tool provides multiple views of a class to uncover manifold facets of its contents. It enables visualizing three object-oriented metrics of classes to help users focus the understanding process. A case study was conducted to evaluate our approach and the toolkit.

  20. School Dress Codes and Uniform Policies.

    ERIC Educational Resources Information Center

    Anderson, Wendell

    2002-01-01

    Opinions abound on what students should wear to class. Some see student dress as a safety issue; others see it as a student-rights issue. The issue of dress codes and uniform policies has been tackled in the classroom, the boardroom, and the courtroom. This Policy Report examines the whole fabric of the debate on dress codes and uniform policies…

  1. Evaluation of an Online Instructional Database Accessed by QR Codes to Support Biochemistry Practical Laboratory Classes

    ERIC Educational Resources Information Center

    Yip, Tor; Melling, Louise; Shaw, Kirsty J.

    2016-01-01

    An online instructional database containing information on commonly used pieces of laboratory equipment was created. In order to make the database highly accessible and to promote its use, QR codes were utilized. The instructional materials were available anytime and accessed using QR codes located on the equipment itself and within undergraduate…

  2. An inherited immunoglobulin class-switch recombination deficiency associated with a defect in the INO80 chromatin remodeling complex

    PubMed Central

    Kracker, Sven; Di Virgilio, Michela; Schwartzentruber, Jeremy; Cuenin, Cyrille; Forveille, Monique; Deau, Marie-Céline; McBride, Kevin M.; Majewski, Jacek; Gazumyan, Anna; Seneviratne, Suranjith; Grimbacher, Bodo; Kutukculer, Necil; Herceg, Zdenko; Cavazzana, Marina; Jabado, Nada; Nussenzweig, Michel C.; Fischer, Alain; Durandy, Anne

    2015-01-01

    Background Immunoglobulin class-switch recombination defects (CSR-D) are rare primary immunodeficiencies characterized by impaired production of switched immunoglobulin isotypes and normal or elevated IgM levels. They are caused by impaired T:B cooperation or intrinsic B cell defects. However, many immunoglobulin CSR-Ds are still undefined at the molecular level. Objective This study's objective was to delineate new causes of immunoglobulin CSR-Ds and thus gain further insights into the process of immunoglobulin class-switch recombination (CSR). Methods Exome sequencing in 2 immunoglobulin CSR-D patients identified variations in the INO80 gene. Functional experiments were performed to assess the function of INO80 on immunoglobulin CSR. Results We identified recessive, nonsynonymous coding variations in the INO80 gene in 2 patients affected by defective immunoglobulin CSR. Expression of wild-type INO80 in patients' fibroblastic cells corrected their hypersensitivity to high doses of γ-irradiation. In murine CH12-F3 cells, the INO80 complex accumulates at Sα and Eμ regions of the IgH locus, and downregulation of INO80 as well as its partners Reptin and Pontin impaired CSR. In addition, Reptin and Pontin were shown to interact with activation-induced cytidine deaminase. Finally, an abnormal separation of sister chromatids was observed upon INO80 downregulation in CH12-F3 cells, pinpointing its role in cohesin activity. Conclusion INO80 deficiency appears to be associated with defective immunoglobulin CSR. We propose that the INO80 complex modulates cohesin function that may be required during immunoglobulin switch region synapsis. PMID:25312759

  3. On models of the genetic code generated by binary dichotomic algorithms.

    PubMed

    Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz

    2015-02-01

    In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). Such a BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
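
    As a concrete example of a single binary dichotomic partition, the sketch below implements Rumer's dichotomy over the 64 codons; the set of "strong" doublets is the standard one, and applying further BDAs in sequence would refine these two classes, as in the paper's scan.

      from itertools import product

      codons = ["".join(c) for c in product("UCAG", repeat=3)]

      # Rumer's dichotomy: does the codon start with a "strong" doublet
      # (one whose family box is fully degenerate in the third base)?
      STRONG = {"UC", "CU", "CC", "CG", "AC", "GU", "GC", "GG"}

      def rumer_class(codon):
          return 0 if codon[:2] in STRONG else 1

      classes = {0: [], 1: []}
      for c in codons:
          classes[rumer_class(c)].append(c)

      # One dichotomy yields two disjoint classes of 32 codons each.
      print(len(classes[0]), len(classes[1]))  # 32 32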

  4. The structure of the clouds distributed operating system

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.

    1989-01-01

    A novel system architecture, based on the object model, is the central structuring concept used in the Clouds distributed operating system. This architecture makes Clouds attractive over a wide class of machines and environments. Clouds is a native operating system, designed and implemented at Georgia Tech, and runs on a set of general-purpose computers connected via a local area network. The system architecture of Clouds is composed of a system-wide global set of persistent (long-lived) virtual address spaces, called objects, that contain persistent data and code. The object concept is implemented at the operating system level, thus presenting a single-level storage view to the user. Lightweight threads carry computational activity through the code stored in the objects. The persistent objects and threads give rise to a programming environment composed of shared permanent memory, dispensing with the need for hardware-derived concepts such as file systems and message systems. Though the hardware may be distributed and may have disks and networks, Clouds provides applications with a logically centralized system, based on a shared, structured, single-level store. The current design of Clouds uses a minimalist philosophy with respect to both the kernel and the operating system: the kernel and the operating system support a bare minimum of functionality. Clouds also adheres to the concept of separation of policy and mechanism. Most low-level operating system services are implemented above the kernel, and most high-level services are implemented at the user level. From the measured performance of the kernel mechanisms, we are able to demonstrate that efficient implementations of the object model are feasible on commercially available hardware. Clouds provides a rich environment for conducting research in distributed systems. Topics addressed in this paper include distributed programming environments, consistency of persistent data, and fault tolerance.

  5. Aviation Safety Modeling and Simulation (ASMM) Propulsion Fleet Modeling: A Tool for Semi-Automatic Construction of CORBA-based Applications from Legacy Fortran Programs

    NASA Technical Reports Server (NTRS)

    Sang, Janche

    2003-01-01

    Within NASA's Aviation Safety Program, NASA GRC participates in the Modeling and Simulation Project called ASMM. NASA GRC's focus is to characterize propulsion system performance from a fleet management and maintenance perspective by modeling, and through simulation to predict the characteristics of two classes of commercial engines (CFM56 and GE90). In prior years, the High Performance Computing and Communication (HPCC) program funded NASA Glenn to develop a large-scale, detailed simulation for the analysis and design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). Three major aspects of this modeling (the integration of different engine components, the coupling of multiple disciplines, and engine component zooming at an appropriate level of fidelity) require relatively tight coupling of different analysis codes. Most of these codes in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase their reusability. Aviation Safety's use of modeling and simulation in characterizing fleet management has similar needs. The modeling and simulation of these propulsion systems use existing Fortran and C codes that are instrumental in determining the performance of the fleet. The research centers on building a CORBA-based development environment with which programmers can easily wrap and couple legacy Fortran codes. This environment consists of a C++ wrapper library to hide the details of CORBA and an efficient remote variable scheme to facilitate data exchange between the client and the server model. Additionally, a Web Service model will also be constructed to evaluate this technology's use over the next two to three years.

  6. New Class of Quantum Error-Correcting Codes for a Bosonic Mode

    NASA Astrophysics Data System (ADS)

    Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.

    2016-07-01

    We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
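
    For reference, the binomial code words can be written as follows (in the parameterization of Michael et al., as best recalled here, where the Fock-state spacing S + 1 and the order N determine which loss, gain, and dephasing errors are correctable):

      \[
        |W_{\sigma}\rangle \;=\; \frac{1}{\sqrt{2^{N}}}
        \sum_{\substack{p = 0 \\ p \equiv \sigma \,(\mathrm{mod}\, 2)}}^{N+1}
        \sqrt{\binom{N+1}{p}}\;\bigl|\,p\,(S+1)\bigr\rangle,
        \qquad \sigma \in \{0, 1\}.
      \]

    Roughly, the binomial weighting matches the low-order moments of the boson number between the two code words, and the spacing S + 1 makes boson loss or gain events detectable through the generalized number parity mentioned above.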

  7. 40 CFR 60.3078 - What definitions must I know?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... connected by road to a Class I municipal solid waste landfill, as defined by Alaska regulatory code 18 AAC 60.300(c) or, if connected by road, is located more than 50 miles from a Class I municipal solid... solid waste landfill is a landfill that is not connected by road to a Class I municipal solid waste...

  8. 40 CFR 60.3078 - What definitions must I know?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... connected by road to a Class I municipal solid waste landfill, as defined by Alaska regulatory code 18 AAC 60.300(c) or, if connected by road, is located more than 50 miles from a Class I municipal solid... solid waste landfill is a landfill that is not connected by road to a Class I municipal solid waste...

  9. 40 CFR 60.3078 - What definitions must I know?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... connected by road to a Class I municipal solid waste landfill, as defined by Alaska regulatory code 18 AAC 60.300(c) or, if connected by road, is located more than 50 miles from a Class I municipal solid... solid waste landfill is a landfill that is not connected by road to a Class I municipal solid waste...

  10. 77 FR 35617 - Amendment of Class C Airspace; Colorado Springs, CO

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-14

    [...; Airspace Docket No. 12-AWA-4] Amendment of Class C Airspace; Colorado Springs, CO. AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final rule. SUMMARY: This action modifies the Colorado Springs, CO, Class C...

  11. Teacher-Child Dyadic Interaction: A Manual for Coding Classroom Behavior. Report Series No. 27.

    ERIC Educational Resources Information Center

    Brophy, Jere E.; Good, Thomas L.

    This manual presents the rationale and coding system for the study of dyadic interaction between teachers and children in classrooms. The introduction notes major differences between this system and others in common use: 1) it is not a universal system that attempts to code all classroom behavior, and 2) the teacher's interactions in his class are…

  12. Synchronization Control for a Class of Discrete-Time Dynamical Networks With Packet Dropouts: A Coding-Decoding-Based Approach.

    PubMed

    Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang

    2017-09-06

    The synchronization control problem is investigated for a class of discrete-time dynamical networks with packet dropouts via a coding-decoding-based approach. The data is transmitted through digital communication channels and only the sequence of finite coded signals is sent to the controller. A series of mutually independent Bernoulli distributed random variables is utilized to model the packet dropout phenomenon occurring in the transmissions of coded signals. The purpose of the addressed synchronization control problem is to design a suitable coding-decoding procedure for each node, based on which an efficient decoder-based control protocol is developed to guarantee that the closed-loop network achieves the desired synchronization performance. By applying a modified uniform quantization approach and the Kronecker product technique, criteria for ensuring the detectability of the dynamical network are established by means of the size of the coding alphabet, the coding period and the probability information of packet dropouts. Subsequently, by resorting to the input-to-state stability theory, the desired controller parameter is obtained in terms of the solutions to a certain set of inequality constraints which can be solved effectively via available software packages. Finally, two simulation examples are provided to demonstrate the effectiveness of the obtained results.
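
    A minimal simulation of the transmission side of such a scheme, assuming a uniform quantizer over a finite alphabet and i.i.d. Bernoulli packet dropouts; the paper's controller design and stability analysis are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(1)

      def encode(x, levels, x_max):
          """Uniform quantization of [-x_max, x_max] into a finite alphabet."""
          step = 2 * x_max / (levels - 1)
          return int(np.clip(round((x + x_max) / step), 0, levels - 1))

      def decode(symbol, levels, x_max):
          step = 2 * x_max / (levels - 1)
          return symbol * step - x_max

      # The decoder holds its last estimate whenever a packet is dropped.
      x_hat, p_drop, levels, x_max = 0.0, 0.3, 16, 5.0
      for t in range(5):
          x = 4.0 * np.sin(0.7 * t)        # stand-in for a node state
          sym = encode(x, levels, x_max)   # only the coded symbol is sent
          if rng.random() > p_drop:        # Bernoulli dropout channel
              x_hat = decode(sym, levels, x_max)
          print(t, round(x, 3), round(x_hat, 3))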

  13. Constrained coding for the deep-spaced optical channel

    NASA Technical Reports Server (NTRS)

    Moision, B.; Hamkins, J.

    2002-01-01

    In this paper, we demonstrate a class of low-complexity modulation codes satisfying the (d,k) constraint that offer throughput gains over M-PPM on the order of 10-15%, which translate into SNR gains of 0.4-0.6 dB.
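
    The throughput ceiling that such codes approach is the Shannon capacity of the (d,k) constraint, computable from the spectral radius of the constraint's state-transition graph; the sketch below uses the (1,3) constraint purely as an example.

      import numpy as np

      def dk_capacity(d, k):
          """Capacity (bits/symbol) of the (d,k) runlength constraint via
          the spectral radius of its state-transition graph."""
          n = k + 1                  # state = zeros emitted since last 1
          A = np.zeros((n, n))
          for i in range(n):
              if i < k:
                  A[i, i + 1] = 1    # emit a 0
              if i >= d:
                  A[i, 0] = 1        # emit a 1 (allowed after d zeros)
          return np.log2(max(abs(np.linalg.eigvals(A))))

      print(dk_capacity(1, 3))       # ~0.55 bits/symbol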

  14. Experiences Building an Object-Oriented System in C++

    NASA Technical Reports Server (NTRS)

    Madany, Peter W.; Campbell, Roy H.; Kougiouris, Panagiotis

    1991-01-01

    This paper describes tools that we built to support the construction of an object-oriented operating system in C++. The tools provide the automatic deletion of unwanted objects, first-class classes, dynamically loadable classes, and class-oriented debugging. As a consequence of our experience building Choices, we advocate these features as useful, simplifying and unifying many aspects of system programming.

  15. Feasibility of self-correcting quantum memory and thermal stability of topological order

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshida, Beni, E-mail: rouge@mit.edu

    2011-10-15

    Recently, it has become apparent that the thermal stability of topologically ordered systems at finite temperature, as discussed in condensed matter physics, can be studied by addressing the feasibility of self-correcting quantum memory, as discussed in quantum information science. Here, with this correspondence in mind, we propose a model of quantum codes that may cover a large class of physically realizable quantum memory. The model is supported by a certain class of gapped spin Hamiltonians, called stabilizer Hamiltonians, with translation symmetries and a small number of ground states that does not grow with the system size. We show that the model does not work as self-correcting quantum memory due to a certain topological constraint on geometric shapes of its logical operators. This quantum coding theoretical result implies that systems covered or approximated by the model cannot have thermally stable topological order, meaning that systems cannot be stable against both thermal fluctuations and local perturbations simultaneously in two and three spatial dimensions. Highlights: we define a class of physically realizable quantum codes; we determine their coding and physical properties completely; we establish the connection between topological order and self-correcting memory; we find they do not work as self-correcting quantum memory; and we find they do not have thermally stable topological order.

  16. Micromechanical slit positioning system as a transmissive spatial light modulator

    NASA Astrophysics Data System (ADS)

    Riesenberg, Rainer

    2001-11-01

    Micro-slits have been prepared with a slit width and a slit length of 2 to 1000 micrometers. Linear and two-dimensional arrays of up to 10 x 110 slits have been developed and completed with a piezo-actuator for shifting. This system is a so-called mechanical slit positioning system. The light is switched by simple one- or two-dimensional displacement of coded slit masks in a one- or two-layer architecture. The slit positioning system belongs to the transmissive class of MEMS-based spatial light modulators (SLM). It has fundamental advantages for optical contrast and can be used over the full spectral region, which makes transmissive SLMs a promising future solution. Instrument architectures based on the slit positioning system can increase resolution by subpixel generation, increase throughput by a Hadamard transform mode, or select objects for multi-object spectroscopy. The linear slit positioning system was space qualified within an advanced micro-spectrometer. A NIR multi-object spectrometer for the Next Generation Space Telescope (NGST) is based on a field selector for selecting objects. The field selector is an SLM, which could be implemented by a slit positioning system.

  17. Wartime Tracking of Class I Surface Shipments from Production or Procurement to Destination

    DTIC Science & Technology

    1992-04-01

  18. NIFTY - Numerical Information Field Theory. A versatile PYTHON library for signal inference

    NASA Astrophysics Data System (ADS)

    Selig, M.; Bell, M. R.; Junklewitz, H.; Oppermann, N.; Reinecke, M.; Greiner, M.; Pachajoa, C.; Enßlin, T. A.

    2013-06-01

    NIFTy (Numerical Information Field Theory) is a software package designed to enable the development of signal inference algorithms that operate regardless of the underlying spatial grid and its resolution. Its object-oriented framework is written in Python, although it accesses libraries written in Cython, C++, and C for efficiency. NIFTy offers a toolkit that abstracts discretized representations of continuous spaces, fields in these spaces, and operators acting on fields into classes. In this way, the correct normalization of operations on fields is handled automatically, without burdening the user. This allows for an abstract formulation and programming of inference algorithms, including those derived within information field theory. Thus, NIFTy permits its user to rapidly prototype algorithms in 1D, and then apply the developed code in higher-dimensional settings of real-world problems. The set of spaces on which NIFTy operates comprises point sets, n-dimensional regular grids, spherical spaces, their harmonic counterparts, and product spaces constructed as combinations of those. The functionality and diversity of the package are demonstrated by a Wiener filter code example that successfully runs without modification regardless of the space on which the inference problem is defined. NIFTy homepage: http://www.mpa-garching.mpg.de/ift/nifty/. Excerpts of this paper are part of the NIFTy source code and documentation.
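
    The Wiener filter demo mentioned above is easy to state concretely. The sketch below is not the NIFTy API (which hides grid and normalization details); it is a plain NumPy version for one fixed 1D periodic grid, with an assumed power-law signal spectrum and white Gaussian noise:

      import numpy as np

      rng = np.random.default_rng(42)
      npix, noise_var = 256, 0.1

      k = np.fft.fftfreq(npix) * npix
      P = 1e4 / (1.0 + np.abs(k)) ** 3      # assumed signal power spectrum

      # Synthetic signal with roughly that spectrum, plus white pixel noise.
      s_hat = np.sqrt(P / 2) * (rng.standard_normal(npix) + 1j * rng.standard_normal(npix))
      s = np.fft.ifft(s_hat).real
      d = s + rng.normal(0.0, np.sqrt(noise_var), npix)

      # Wiener filter, mode by mode: m_hat = P / (P + N) * d_hat, where the white
      # pixel noise has variance noise_var * npix per unnormalized FFT mode.
      N = noise_var * npix
      m = np.fft.ifft(P / (P + N) * np.fft.fft(d)).real

      print("rms error of raw data     :", np.sqrt(np.mean((d - s) ** 2)))
      print("rms error of Wiener filter:", np.sqrt(np.mean((m - s) ** 2)))

    NIFTy's contribution is precisely that the same inference code runs unchanged when this regular grid is replaced by, say, a sphere.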

  19. Implementation of ASME Code, Section XI, Code Case N-770, on Alternative Examination Requirements for Class 1 Butt Welds Fabricated with Alloy 82/182

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, Edmund J.; Anderson, Michael T.

    In May 2010, the NRC issued a proposed notice of rulemaking that includes a provision to add a new section to its rules to require licensees to implement ASME Code Case N-770, "Alternative Examination Requirements and Acceptance Standards for Class 1 PWR Piping and Vessel Nozzle Butt Welds Fabricated with UNS N06082 or UNS W86182 Weld Filler Material With or Without the Application of Listed Mitigation Activities, Section XI, Division 1," with 15 conditions. Code Case N-770 contains baseline and inservice inspection (ISI) requirements for unmitigated butt welds fabricated with Alloy 82/182 material and preservice and ISI requirements for mitigated butt welds. The NRC stated that application of ASME Code Case N-770 is necessary because the inspections currently required by the ASME Code, Section XI, were not written to address stress corrosion cracking of Alloy 82/182 butt welds, and the safety consequences of inadequate inspections can be significant. The NRC expects to issue the final rule incorporating this code case into its regulations in the spring 2011 time frame. This paper discusses the new examination requirements, the conditions that the NRC is imposing, and the major concerns with implementation of the new Code Case.

  20. Multipath search coding of stationary signals with applications to speech

    NASA Astrophysics Data System (ADS)

    Fehn, H. G.; Noll, P.

    1982-04-01

    This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Use is made of three MSC classes: (1) codebook coding, or vector quantization, (2) tree coding, and (3) trellis coding. The paper evaluates the performances of these coders and compares them both with those of conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated by illustrations. The paper also reports on results of MSC coding of speech, where both adaptive quantization and adaptive prediction were included in the coder design.
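
    Of the three MSC classes, codebook coding is the simplest to sketch. At one bit per sample with vector dimension L, the encoder searches a codebook of 2^L codewords for the nearest match. The toy Python below uses a random Gaussian codebook for a memoryless Gaussian source; this is an illustrative setup, not the paper's trained coders:

      import numpy as np

      rng = np.random.default_rng(1)
      L = 4                                           # vector dimension
      codebook = rng.standard_normal((2 ** L, L))     # 2^L codewords = 1 bit/sample

      def encode(block):
          """Full-search codebook coding: index of the nearest codeword."""
          return int(np.argmin(np.sum((codebook - block) ** 2, axis=1)))

      source = rng.standard_normal((1000, L))         # memoryless Gaussian source
      recon = codebook[[encode(b) for b in source]]
      mse = np.mean((source - recon) ** 2)
      print(f"empirical SNR: {10 * np.log10(1.0 / mse):.2f} dB "
            f"(Gaussian rate-distortion bound at 1 bit/sample: 6.02 dB)")

    Tree and trellis coding replace this exhaustive search with a sequential search over a structured code, trading optimality for complexity.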

  1. The Educational and Moral Significance of the American Chemical Society's The Chemist's Code of Conduct

    NASA Astrophysics Data System (ADS)

    Bruton, Samuel V.

    2003-05-01

    While the usefulness of the case study method in teaching research ethics is frequently emphasized, less often noted is the educational value of professional codes of ethics. Much can be gained by having students examine codes and reflect on their significance. This paper argues that codes such as the American Chemical Society's The Chemist's Code of Conduct are an important supplement to the use of cases and describes one way in which they can be integrated profitably into a class discussion of research ethics.

  2. Error-correction coding for digital communications

    NASA Astrophysics Data System (ADS)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  3. Deep Learning for Automated Extraction of Primary Sites From Cancer Pathology Reports.

    PubMed

    Qiu, John X; Yoon, Hong-Jun; Fearn, Paul A; Tourassi, Georgia D

    2018-01-01

    Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study, we investigated deep learning, specifically a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
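
    The conventional baseline in the study is a term-frequency vector space model. A minimal stand-in for that kind of baseline, with micro- and macro-F scoring, looks as follows; the toy corpus and labels are invented for illustration, since the paper's 942 annotated reports are not public:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import f1_score
      from sklearn.model_selection import train_test_split

      reports = ["infiltrating carcinoma upper outer quadrant left breast",
                 "adenocarcinoma right upper lobe lung biopsy",
                 "ductal carcinoma in situ left breast specimen",
                 "squamous cell carcinoma left lower lobe lung"] * 25
      labels = ["C50.4", "C34.1", "C50.4", "C34.3"] * 25   # ICD-O-3 topography codes

      X_tr, X_te, y_tr, y_te = train_test_split(reports, labels, random_state=0)
      vec = CountVectorizer()                               # term-frequency vectors
      clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X_tr), y_tr)
      pred = clf.predict(vec.transform(X_te))

      print("micro-F:", f1_score(y_te, pred, average="micro"))
      print("macro-F:", f1_score(y_te, pred, average="macro"))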

  4. Distinct neuronal interactions in anterior inferotemporal areas of macaque monkeys during retrieval of object association memory.

    PubMed

    Hirabayashi, Toshiyuki; Tamura, Keita; Takeuchi, Daigo; Takeda, Masaki; Koyano, Kenji W; Miyashita, Yasushi

    2014-07-09

    In macaque monkeys, the anterior inferotemporal cortex, a region crucial for object memory processing, is composed of two adjacent, hierarchically distinct areas, TE and 36, for which different functional roles and neuronal responses in object memory tasks have been characterized. However, it remains unknown how the neuronal interactions differ between these areas during memory retrieval. Here, we conducted simultaneous recordings from multiple single-units in each of these areas while monkeys performed an object association memory task and examined the inter-area differences in neuronal interactions during the delay period. Although memory neurons showing sustained activity for the presented cue stimulus, cue-holding (CH) neurons, interacted with each other in both areas, only those neurons in area 36 interacted with another type of memory neurons coding for the to-be-recalled paired associate (pair-recall neurons) during memory retrieval. Furthermore, pairs of CH neurons in area TE showed functional coupling in response to each individual object during memory retention, whereas the same class of neuron pairs in area 36 exhibited a comparable strength of coupling in response to both associated objects. These results suggest predominant neuronal interactions in area 36 during the mnemonic processing, which may underlie the pivotal role of this brain area in both storage and retrieval of object association memory.

  5. A note on the R0 parameter for discrete memoryless channels

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.

    1980-01-01

    An explicit class of discrete memoryless channels (q-ary erasure channels) is exhibited. Practical and explicit coded systems of rate R with R/R0 as large as desired can be designed for this class.

  6. Locality-preserving logical operators in topological stabilizer codes

    NASA Astrophysics Data System (ADS)

    Webster, Paul; Bartlett, Stephen D.

    2018-01-01

    Locality-preserving logical operators in topological codes are naturally fault tolerant, since they preserve the correctability of local errors. Using a correspondence between such operators and gapped domain walls, we describe a procedure for finding all locality-preserving logical operators admitted by a large and important class of topological stabilizer codes. In particular, we focus on those equivalent to a stack of a finite number of surface codes of any spatial dimension, where our procedure fully specifies the group of locality-preserving logical operators. We also present examples of how our procedure applies to codes with different boundary conditions, including color codes and toric codes, as well as more general codes such as Abelian quantum double models and codes with fermionic excitations in more than two dimensions.

  7. Object Oriented Modeling and Design

    NASA Technical Reports Server (NTRS)

    Shaykhian, Gholam Ali

    2007-01-01

    The Object Oriented Modeling and Design seminar is intended for software professionals and students; it covers the concepts and a language-independent graphical notation that can be used to analyze problem requirements and design a solution to the problem. The seminar discusses the three kinds of object-oriented models: class, state, and interaction. The class model represents the static structure of a system; the state model describes the aspects of a system that change over time, as well as control behavior; and the interaction model describes how objects collaborate to achieve overall results. Existing knowledge of object-oriented programming may benefit the learning of modeling and good design. Specific expectations are: create a class model; read, recognize, and describe a class model; describe association and link; show abstract classes used with multiple inheritance; explain metadata, reification, and constraints; group classes into a package; read, recognize, and describe a state model; explain states and transitions; read, recognize, and describe an interaction model; explain use cases and use case relationships; show concurrency in an activity diagram; and show object interactions in a sequence diagram.

  8. 40 CFR 60.2977 - What definitions must I know?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Is not connected by road to a Class I municipal solid waste landfill, as defined by Alaska regulatory code 18 AAC 60.300(c) or, if connected by road, is located more than 50 miles from a Class I municipal... municipal solid waste landfill is a landfill that is not connected by road to a Class I municipal solid...

  9. 77 FR 40492 - Revocation of Class D Airspace; Andalusia, AL; and Amendment of Class E Airspace; Fort Rucker, AL

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-10

    ... Alabama Regional Airport at Bill Benton Field has closed, and amends Class E Airspace at Fort Rucker, AL, by recognizing the airport's name change to South Alabama Regional Airport at Bill Benton Field. This... reference action under title 1, Code of Federal Regulations, part 51, subject to the annual revision of FAA...

  10. Wavelet-based compression of M-FISH images.

    PubMed

    Hua, Jianping; Xiong, Zixiang; Wu, Qiang; Castleman, Kenneth R

    2005-05-01

    Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelengths. This paper presents embedded M-FISH image coding (EMIC), where the foreground objects/chromosomes and the background objects/images are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding. The lossy performance of EMIC is significantly better than that of coding each M-FISH image with JPEG-2000.
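
    The critically sampled integer wavelet transform that makes lossless coding possible can be illustrated with one lifting step of the integer Haar (S) transform; because every operation is integer arithmetic, the inverse is exact. This is a generic sketch, not the transform configuration used in EMIC:

      import numpy as np

      def haar_int_forward(x):
          """One level of the integer Haar (S) transform via lifting; x has even length."""
          x = np.asarray(x, dtype=np.int64)
          d = x[0::2] - x[1::2]          # detail coefficients
          s = x[1::2] + (d >> 1)         # approximation: floor mean of each pair
          return s, d

      def haar_int_inverse(s, d):
          x = np.empty(2 * len(s), dtype=np.int64)
          x[1::2] = s - (d >> 1)
          x[0::2] = d + x[1::2]
          return x

      row = np.array([12, 10, 9, 9, 200, 180, 7, 8])        # toy image row
      s, d = haar_int_forward(row)
      assert np.array_equal(haar_int_inverse(s, d), row)    # perfect reconstruction
      print("approx:", s, "detail:", d)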

  11. Applications of Derandomization Theory in Coding

    NASA Astrophysics Data System (ADS)

    Cheraghchi, Mahdi

    2011-07-01

    Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.]
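
    The group testing setting is easy to make concrete. In the noiseless case, the simple "COMP" rule (declare non-defective every item that appears in any negative test) already works well; the sketch below uses a random Bernoulli pooling design, an illustrative stand-in for the explicit randomness-condenser constructions of the thesis:

      import numpy as np

      rng = np.random.default_rng(7)
      n_items, n_defective, n_tests = 100, 3, 40

      truth = np.zeros(n_items, bool)
      truth[rng.choice(n_items, n_defective, replace=False)] = True

      design = rng.random((n_tests, n_items)) < 0.1   # random pooling design
      outcomes = (design & truth).any(axis=1)         # positive iff pool hits a defective

      candidates = np.ones(n_items, bool)             # COMP decoding
      for pool, positive in zip(design, outcomes):
          if not positive:
              candidates &= ~pool                     # a negative test clears its items

      print("true defectives   :", np.flatnonzero(truth))
      print("COMP candidate set:", np.flatnonzero(candidates))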

  12. EVALUATION OF AN INDIVIDUALLY PACED COURSE FOR AIRBORNE RADIO CODE OPERATORS. FINAL REPORT.

    ERIC Educational Resources Information Center

    BALDWIN, ROBERT O.; JOHNSON, KIRK A.

    IN THIS STUDY COMPARISONS WERE MADE BETWEEN AN INDIVIDUALLY PACED VERSION OF THE AIRBORNE RADIO CODE OPERATOR (ARCO) COURSE AND TWO VERSIONS OF THE COURSE IN WHICH THE STUDENTS PROGRESSED AT A FIXED PACE. THE ARCO COURSE IS A CLASS C SCHOOL IN WHICH THE STUDENT LEARNS TO SEND AND RECEIVE MILITARY MESSAGES USING THE INTERNATIONAL MORSE CODE. THE…

  13. 14 CFR 217.10 - Instructions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... and the other pertaining to on-flight markets. For example, the routing (A-B-C-D) consists of three..., Singapore A-3—Airport code Origin A-4—Airport code Destination A-5—Service class (mark an X) F G L P Q By aircraft type— B-1—Aircraft type code B-2—Revenue aircraft departures B-3—Revenue passengers transported B...

  14. 14 CFR 217.10 - Instructions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and the other pertaining to on-flight markets. For example, the routing (A-B-C-D) consists of three..., Singapore A-3—Airport code Origin A-4—Airport code Destination A-5—Service class (mark an X) F G L P Q By aircraft type— B-1—Aircraft type code B-2—Revenue aircraft departures B-3—Revenue passengers transported B...

  15. 14 CFR 217.10 - Instructions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... and the other pertaining to on-flight markets. For example, the routing (A-B-C-D) consists of three..., Singapore A-3—Airport code Origin A-4—Airport code Destination A-5—Service class (mark an X) F G L P Q By aircraft type— B-1—Aircraft type code B-2—Revenue aircraft departures B-3—Revenue passengers transported B...

  16. Polymorphisms and Tissue Expression of the Feline Leukocyte Antigen Class I Loci FLAI-E, -H and -K

    PubMed Central

    Holmes, Jennifer C.; Holmer, Savannah G.; Ross, Peter; Buntzman, Adam S.; Frelinger, Jeffrey A.; Hess, Paul R.

    2013-01-01

    Cytotoxic CD8+ T-cell immunosurveillance for intracellular pathogens, such as viruses, is controlled by classical major histocompatibility complex (MHC) class Ia molecules, and ideally, these antiviral T-cell populations are defined by the specific peptide and restricting MHC allele. Surprisingly, despite the utility of the cat in modeling human viral immunity, little is known about the Feline Leukocyte Antigen class I complex (FLAI). Only a few coding sequences with uncertain locus origin and expression patterns have been reported. Of 19 class I genes, 3 loci (FLAI-E, -H and -K) are predicted to encode classical molecules, and our objective was to evaluate their status by analyzing polymorphisms and tissue expression. Using locus-specific, PCR-based genotyping, we amplified 33 FLAI-E, -H, and -K alleles from 12 cats of various breeds, identifying, for the first time, alleles across 3 distinct loci in a feline species. Alleles shared the expected polymorphic and invariant sites in the α1/α2 domains, and full-length cDNA clones possessed all characteristic class Ia exons. Alleles could be assigned to a specific locus with reasonable confidence, although there was evidence of potentially confounding interlocus recombination between FLAI-E and -K. Only FLAI-E, -H and -K-origin alleles were amplified from cDNAs of multiple tissue types. We also defined hypervariable regions across these genes, which permitted the assignment of names to both novel and established alleles. As predicted, FLAI-E, -H, and -K fulfill the major criteria of class Ia genes. These data represent a necessary prerequisite for studying epitope-specific antiviral CD8+ T-cell responses in cats. PMID:23812210

  17. Object-oriented sequence analysis: SCL--a C++ class library.

    PubMed

    Vahrson, W; Hermann, K; Kleffe, J; Wittig, B

    1996-04-01

    SCL (Sequence Class Library) is a class library written in the C++ programming language. Designed using object-oriented programming principles, SCL consists of classes of objects performing tasks typically needed for analyzing DNA or protein sequences. Among them are very flexible sequence classes, classes accessing databases in various formats, classes managing collections of sequences, as well as classes performing higher-level tasks like calculating a pairwise sequence alignment. SCL also includes classes that provide general programming support, like a dynamically growing array, sets, matrices, strings, classes performing file input/output, and utilities for error handling. By providing these components, SCL fosters an explorative programming style: experimenting with algorithms and alternative implementations is encouraged rather than punished. A description of SCL's overall structure as well as an overview of its classes is given. Important aspects of the work with SCL are discussed in the context of a sample program.

  18. Unaligned instruction relocation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.

    In one embodiment, a computer-implemented method includes receiving source code to be compiled into an executable file for an unaligned instruction set architecture (ISA). Aligned assembled code is generated, by a computer processor. The aligned assembled code complies with an aligned ISA and includes aligned processor code for a processor and aligned accelerator code for an accelerator. A first linking pass is performed on the aligned assembled code, including relocating a first relocation target in the aligned accelerator code that refers to a first object outside the aligned accelerator code. Unaligned assembled code is generated in accordance with the unaligned ISA and includes unaligned accelerator code for the accelerator and unaligned processor code for the processor. A second linking pass is performed on the unaligned assembled code, including relocating a second relocation target outside the unaligned accelerator code that refers to an object in the unaligned accelerator code.

  19. Unaligned instruction relocation

    DOEpatents

    Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.; Sura, Zehra N.

    2018-01-23

    In one embodiment, a computer-implemented method includes receiving source code to be compiled into an executable file for an unaligned instruction set architecture (ISA). Aligned assembled code is generated, by a computer processor. The aligned assembled code complies with an aligned ISA and includes aligned processor code for a processor and aligned accelerator code for an accelerator. A first linking pass is performed on the aligned assembled code, including relocating a first relocation target in the aligned accelerator code that refers to a first object outside the aligned accelerator code. Unaligned assembled code is generated in accordance with the unaligned ISA and includes unaligned accelerator code for the accelerator and unaligned processor code for the processor. A second linking pass is performed on the unaligned assembled code, including relocating a second relocation target outside the unaligned accelerator code that refers to an object in the unaligned accelerator code.

  20. Factors associated with prescribing restriction on oncology formulary drugs in Malaysia.

    PubMed

    Fatokun, Omotayo; Olawepo, Michael N

    2016-10-01

    Background: Drugs listed on formularies are often subjected to a variety of utilization restriction measures. However, the degree of restriction is influenced by multiple factors, including the characteristics and attributes of the listed drugs. Objective: To identify the factors that are associated with the levels of prescribing restriction on oncology formulary drugs in Malaysia. Setting: Oncology formulary in Malaysia. Method: The Malaysia Drug Code assigned to each of the drug products on the Malaysia Ministry of Health (MOH) drug formulary was used to identify oncology drugs belonging to WHO ATC class L (antineoplastic and immunomodulating agents). Main outcome measures: Categories of prescribing restrictions, therapeutic class, drug type, administration mode, number of sources, and the post-approval use period. Results: Oncology drugs having a shorter post-approval use period (p < 0.001), biologic oncology drugs (p = 0.01), and oncology drugs belonging to the immunosuppressant therapeutic class (p = 0.03) were all significantly associated with a greater likelihood of being subjected to a higher level of prescribing restriction. Conclusion: This study suggests that safety concerns, costs, and potential for inappropriate use were the important considerations influencing a higher level of prescribing restriction placement on oncology drugs in the Malaysia MOH drug formulary.

  1. Association of HLA-A and Non-Classical HLA Class I Alleles

    PubMed Central

    Carlini, Federico; Ferreira, Virginia; Buhler, Stéphane; Tous, Audrey; Eliaou, Jean-François; René, Céline; Chiaroni, Jacques; Picard, Christophe; Di Cristofaro, Julie

    2016-01-01

    The HLA-A locus is surrounded by HLA class Ib genes: HLA-E, HLA-H, HLA-G and HLA-F. HLA class Ib molecules are involved in immuno-modulation with a central role for HLA-G and HLA-E, an emerging role for HLA-F and a yet unknown function for HLA-H. Thus, the principal objective of this study was to describe the main allelic associations between HLA-A and HLA-H, -G, -F and -E. Therefore, HLA-A, -E, -G, -H and -F coding polymorphisms, as well as HLA-G UnTranslated Region haplotypes (referred to as HLA-G UTRs), were explored in 191 voluntary blood donors. Allelic frequencies, Global Linkage Disequilibrium (GLD), Linkage Disequilibrium (LD) for specific pairs of alleles and two-loci haplotype frequencies were estimated. We showed that HLA-A, HLA-H, HLA-F, HLA-G and HLA-G UTRs were all in highly significant pairwise GLD, in contrast to HLA-E. Moreover, HLA-A displayed restricted associations with HLA-G UTR and HLA-H. We also confirmed several associations that were previously found to have a negative impact on transplantation outcome. In summary, our results suggest complex functional and clinical implications of the HLA-A genetic region. PMID:27701438

  2. Mechanical code comparator

    DOEpatents

    Peter, Frank J.; Dalton, Larry J.; Plummer, David W.

    2002-01-01

    A new class of mechanical code comparators is described which have broad potential for application in safety, surety, and security applications. These devices can be implemented as micro-scale electromechanical systems that isolate a secure or otherwise controlled device until an access code is entered. This access code is converted into a series of mechanical inputs to the mechanical code comparator, which compares the access code to a pre-input combination, entered previously into the mechanical code comparator by an operator at the system security control point. These devices provide extremely high levels of robust security. Being totally mechanical in operation, an access control system properly based on such devices cannot be circumvented by software attack alone.

  3. Incorporating Manual and Autonomous Code Generation

    NASA Technical Reports Server (NTRS)

    McComas, David

    1998-01-01

    Code can be generated manually or by code-generation software tools, but how do you integrate the two? This article looks at a design methodology that combines object-oriented design with autonomic code generation for attitude control flight software. Recent improvements in space flight computers are allowing software engineers to spend more time engineering the applications software. The application developed was the attitude control flight software for an astronomical satellite called the Microwave Anisotropy Probe (MAP). The MAP flight system is being designed, developed, and integrated at NASA's Goddard Space Flight Center. The MAP controls engineers are using Integrated Systems Inc.'s MATRIXx for their controls analysis. In addition to providing a graphical analysis environment, MATRIXx includes an autonomic code generation facility called AutoCode. This article examines the forces that shaped the final design and describes three highlights of the design process: (1) defining the manual to autonomic code interface; (2) applying object-oriented design to the manual flight code; (3) implementing the object-oriented design in C.

  4. A suite of benchmark and challenge problems for enhanced geothermal systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark; Fu, Pengcheng; McClure, Mark

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research, stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems were designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners. We present the suite of benchmark and challenge problems developed for the GTO-CCS, providing problem descriptions and sample solutions.

  5. Distribution of a Generic Mission Planning and Scheduling Toolkit for Astronomical Spacecraft

    NASA Technical Reports Server (NTRS)

    Kleiner, Steven C.

    1996-01-01

    Work is progressing as outlined in the proposal for this contract. A working planning and scheduling system has been documented, packaged, and made available to the WIRE Small Explorer group at JPL, the FUSE group at JHU, the NASA/GSFC Laboratory for Astronomy and Solar Physics, and the Advanced Planning and Scheduling Branch at STScI. The package is running successfully on the WIRE computer system. It is expected that the WIRE group will reuse significant portions of the SWAS code in its system. The scheduling system itself was tested successfully against the spacecraft hardware in December 1995. A fully automatic scheduling module has been developed and is being added to the toolkit. In order to maximize reuse, the code is being reorganized during the current build into object-oriented class libraries. A paper describing the toolkit has been written and is included in the software distribution. We have experienced interference between the export and production versions of the toolkit. We will be requesting permission to reprogram funds in order to purchase a standalone PC onto which to offload the export version.

  6. Explicit robust schemes for implementation of a class of principal value-based constitutive models: Symbolic and numeric implementation

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.

    1993-01-01

    The issue of developing effective and robust schemes to implement a class of the Ogden-type hyperelastic constitutive models is addressed. To this end, special purpose functions (running under MACSYMA) are developed for the symbolic derivation, evaluation, and automatic FORTRAN code generation of explicit expressions for the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid over the entire deformation range, since the singularities resulting from repeated principal-stretch values have been theoretically removed. The required computational algorithms are outlined, and the resulting FORTRAN computer code is presented.
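
    The same symbolic derive-then-generate workflow can be reproduced today with open tools. The following is a hedged miniature (one Ogden term, a principal Kirchhoff-type stress without the pressure/incompressibility part, and SymPy in place of MACSYMA), not the paper's actual derivation:

      import sympy as sp

      # Principal stretches and one Ogden term (mu, alpha); more terms add similarly.
      l1, l2, l3, mu, alpha = sp.symbols('lambda1 lambda2 lambda3 mu alpha', positive=True)
      W = mu / alpha * (l1**alpha + l2**alpha + l3**alpha - 3)

      # Symbolic derivation of tau_1 = lambda_1 * dW/dlambda_1 ...
      tau1 = sp.simplify(l1 * sp.diff(W, l1))

      # ... followed by automatic Fortran code generation.
      print(sp.fcode(tau1, assign_to='tau1', source_format='free', standard=95))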

  7. Bounds on Block Error Probability for Multilevel Concatenated Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Moorthy, Hari T.; Stojanovic, Diana

    1996-01-01

    Maximum likelihood decoding of long block codes is not feasible due to large complexity. Some classes of codes are shown to be decomposable into multilevel concatenated codes (MLCC). For these codes, multistage decoding provides a good trade-off between performance and complexity. In this paper, we derive an upper bound on the probability of block error for MLCC. We use this bound to evaluate the difference in performance for different decompositions of some codes. Examples given show that a significant reduction in complexity can be achieved when increasing the number of stages of decoding. The resulting performance degradation varies for different decompositions. A guideline is given for finding good m-level decompositions.

  8. An Object-Oriented Approach to Writing Computational Electromagnetics Codes

    NASA Technical Reports Server (NTRS)

    Zimmerman, Martin; Mallasch, Paul G.

    1996-01-01

    Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease design and development of highly complex codes. This paper examines design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.

  9. Improving the genome annotation of the acarbose producer Actinoplanes sp. SE50/110 by sequencing enriched 5'-ends of primary transcripts.

    PubMed

    Schwientek, Patrick; Neshat, Armin; Kalinowski, Jörn; Klein, Andreas; Rückert, Christian; Schneiker-Bekel, Susanne; Wendler, Sergej; Stoye, Jens; Pühler, Alfred

    2014-11-20

    Actinoplanes sp. SE50/110 is the producer of the alpha-glucosidase inhibitor acarbose, which is an economically relevant and potent drug in the treatment of type-2 diabetes mellitus. In this study, we present the detection of transcription start sites on this genome by sequencing enriched 5'-ends of primary transcripts. Altogether, 1427 putative transcription start sites were initially identified. With the help of the annotated genome sequence, 661 transcription start sites were found to belong to the leader region of protein-coding genes, with the surprising result that roughly 20% of these genes rank among the class of leaderless transcripts. Next, conserved promoter motifs were identified for protein-coding genes with and without leader sequences. The mapped transcription start sites were finally used to improve the annotation of the Actinoplanes sp. SE50/110 genome sequence. Concerning protein-coding genes, 41 translation start sites were corrected and 9 novel protein-coding genes could be identified. In addition to this, 122 previously undetermined non-coding RNA (ncRNA) genes of Actinoplanes sp. SE50/110 were defined. Focusing on antisense transcription start sites located within coding genes or their leader sequences, it was discovered that 96 of those ncRNA genes belong to the class of antisense RNA (asRNA) genes. The remaining 26 ncRNA genes were found outside of known protein-coding genes. Four chosen examples of prominent ncRNA genes, namely the transfer-messenger RNA gene ssrA, the ribonuclease P class A RNA gene rnpB, the cobalamin riboswitch RNA gene cobRS, and the selenocysteine-specific tRNA gene selC, are presented in more detail. This study demonstrates that sequencing of enriched 5'-ends of primary transcripts and the identification of transcription start sites are valuable tools for advanced genome annotation of Actinoplanes sp. SE50/110 and most probably also for other bacteria.

  10. Dissociation between awareness and spatial coding: evidence from unilateral neglect.

    PubMed

    Treccani, Barbara; Cubelli, Roberto; Sellaro, Roberta; Umiltà, Carlo; Della Sala, Sergio

    2012-04-01

    Prevalent theories about consciousness propose a causal relation between lack of spatial coding and absence of conscious experience: The failure to code the position of an object is assumed to prevent this object from entering consciousness. This is consistent with influential theories of unilateral neglect following brain damage, according to which spatial coding of neglected stimuli is defective, and this would keep their processing at the nonconscious level. Contrary to this view, we report evidence showing that spatial coding and consciousness can dissociate. A patient with left neglect, who was not aware of contralesional stimuli, was able to process their color and position. However, in contrast to (ipsilesional) consciously perceived stimuli, color and position of neglected stimuli were processed separately. We propose that individual object features, including position, can be processed without attention and consciousness and that conscious perception of an object depends on the binding of its features into an integrated percept.

  11. New quantum codes constructed from quaternary BCH codes

    NASA Astrophysics Data System (ADS)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each different code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  12. Performance Theories for Sentence Coding: Some Quantitative Models

    ERIC Educational Resources Information Center

    Aaronson, Doris; And Others

    1977-01-01

    This study deals with the patterns of word-by-word reading times over a sentence when the subject must code the linguistic information sufficiently for immediate verbatim recall. A class of quantitative models is considered that would account for reading times at phrase breaks. (Author/RM)

  13. Dynamic state estimation based on Poisson spike trains—towards a theory of optimal encoding

    NASA Astrophysics Data System (ADS)

    Susemihl, Alex; Meir, Ron; Opper, Manfred

    2013-03-01

    Neurons in the nervous system convey information to higher brain regions by the generation of spike trains. An important question in the field of computational neuroscience is how these sensory neurons encode environmental information in a way which may be simply analyzed by subsequent systems. Many aspects of the form and function of the nervous system have been understood using the concepts of optimal population coding. Most studies, however, have neglected the aspect of temporal coding. Here we address this shortcoming through a filtering theory of inhomogeneous Poisson processes. We derive exact relations for the minimal mean squared error of the optimal Bayesian filter and, by optimizing the encoder, obtain optimal codes for populations of neurons. We also show that a class of non-Markovian, smooth stimuli are amenable to the same treatment, and provide results for the filtering and prediction error which hold for a general class of stochastic processes. This sets a sound mathematical framework for a population coding theory that takes temporal aspects into account. It also formalizes a number of studies which discussed temporal aspects of coding using time-window paradigms, by stating them in terms of correlation times and firing rates. We propose that this kind of analysis allows for a systematic study of temporal coding and will bring further insights into the nature of the neural code.
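
    The encoding side of this framework is an inhomogeneous Poisson process whose rate is a tuning function of the stimulus. Such spike trains are conveniently sampled by Lewis-Shedler thinning; the tuning curve and stimulus below are illustrative choices, and the optimal Bayesian filter itself is omitted:

      import numpy as np

      rng = np.random.default_rng(3)

      def thinning(rate_fn, rate_max, t_end):
          """Inhomogeneous Poisson spikes: draw candidates at rate_max, keep
          each with probability rate_fn(t) / rate_max (Lewis-Shedler)."""
          t, spikes = 0.0, []
          while True:
              t += rng.exponential(1.0 / rate_max)
              if t > t_end:
                  return np.array(spikes)
              if rng.random() < rate_fn(t) / rate_max:
                  spikes.append(t)

      # Gaussian tuning curve driven by a slowly varying stimulus x(t) = sin t.
      rate = lambda t: 50.0 * np.exp(-(np.sin(t) - 0.5) ** 2 / (2 * 0.3 ** 2))
      train = thinning(rate, rate_max=50.0, t_end=10.0)
      print(f"{train.size} spikes; first few: {np.round(train[:5], 3)}")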

  14. An FPGA design of generalized low-density parity-check codes for rate-adaptive optical transport networks

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2016-02-01

    Forward error correction (FEC) is one of the key technologies enabling the next-generation high-speed fiber optical communications. In this paper, we propose a rate-adaptive scheme using a class of generalized low-density parity-check (GLDPC) codes with a Hamming code as the local code. We show that with the proposed unified GLDPC decoder architecture, variable net coding gains (NCGs) can be achieved with no error floor at BERs down to 10^-15, making it a viable solution for next-generation high-speed fiber optical communications.
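
    In a GLDPC code, each check node of the global graph is itself a small block code; with a Hamming local code, the inner decoding step is classical syndrome decoding. The sketch below shows only that local (7,4) Hamming step; the surrounding message-passing iteration of the unified decoder, and the paper's construction details, are omitted:

      import numpy as np

      # Parity-check matrix of the (7,4) Hamming code: column j is binary j+1,
      # so the syndrome of a single-bit error directly names the flipped position.
      H = np.array([[(j >> b) & 1 for j in range(1, 8)] for b in (2, 1, 0)])

      def hamming74_decode(r):
          """Syndrome decoding of one local code word (corrects one bit error)."""
          s = H @ r % 2
          pos = 4 * int(s[0]) + 2 * int(s[1]) + int(s[2])
          if pos:                         # nonzero syndrome: flip the named bit
              r = r.copy()
              r[pos - 1] ^= 1
          return r

      codeword = np.array([1, 1, 1, 0, 0, 0, 0])       # satisfies H @ c % 2 == 0
      received = codeword.copy()
      received[4] ^= 1                                  # channel flips bit 5
      print(np.array_equal(hamming74_decode(received), codeword))   # True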

  15. Requirements for construction of nuclear system components at elevated temperatures (supplement to ASME Code Cases 1592, 1593, 1594, 1595, and 1596)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This standard provides rules for the construction of Class 1 nuclear components, parts, and appurtenances for use at elevated temperatures. This standard is a complete set of requirements only when used in conjunction with Section III of the ASME Boiler and Pressure Vessel Code (ASME Code) and addenda, ASME Code Cases 1592, 1593, 1594, 1595, and 1596, and RDT E 15-2NB. Unmodified paragraphs of the referenced Code Cases are not repeated in this standard but are a part of the requirements of this standard.

  16. Assessment of Current Jet Noise Prediction Capabilities

    NASA Technical Reports Server (NTRS)

    Hunter, Craig A.; Bridges, James E.; Khavaran, Abbas

    2008-01-01

    An assessment was made of the capability of jet noise prediction codes over a broad range of jet flows, with the objective of quantifying current capabilities and identifying areas requiring future research investment. Three separate codes in NASA's possession, representative of two classes of jet noise prediction codes, were evaluated: one empirical and two statistical. The empirical code is the Stone Jet Noise Module (ST2JET) contained within the ANOPP aircraft noise prediction code. It is well documented, and represents the state of the art in semi-empirical acoustic prediction codes, where virtual sources are attributed to various aspects of noise generation in each jet. These sources, in combination, predict the spectral directivity of a jet plume. A total of 258 jet noise cases were examined with the ST2JET code, each run requiring only fractions of a second to complete. Two statistical jet noise prediction codes were also evaluated, JeNo v1 and Jet3D. Fewer cases were run for the statistical prediction methods because they require substantially more resources, typically a Reynolds-averaged Navier-Stokes solution of the jet, volume integration of the source statistical models over the entire plume, and a numerical solution of the governing propagation equation within the jet. In the evaluation process, the experimental datasets used in the evaluations were substantially justified. In the end, none of the current codes can predict jet noise within experimental uncertainty. The empirical code came within 2 dB on a 1/3-octave spectral basis for a wide range of flows. The statistical code Jet3D was within experimental uncertainty at broadside angles for hot supersonic jets, but errors in peak frequency and amplitude put it out of experimental uncertainty at cooler, lower speed conditions. Jet3D did not predict changes in directivity in the downstream angles. The statistical code JeNo v1 was within experimental uncertainty predicting noise from cold subsonic jets at all angles, but did not predict changes with heating of the jet and did not account for directivity changes at supersonic conditions. Shortcomings addressed here give direction for future work relevant to the statistical-based prediction methods. A full report will be released as a chapter in a NASA publication assessing the state of the art in aircraft noise prediction.

  17. Changes in mitochondrial genetic codes as phylogenetic characters: Two examples from the flatworms

    PubMed Central

    Telford, Maximilian J.; Herniou, Elisabeth A.; Russell, Robert B.; Littlewood, D. Timothy J.

    2000-01-01

    Shared molecular genetic characteristics other than DNA and protein sequences can provide excellent sources of phylogenetic information, particularly if they are complex and rare and are consequently unlikely to have arisen by chance convergence. We have used two such characters, arising from changes in mitochondrial genetic code, to define a clade within the Platyhelminthes (flatworms), the Rhabditophora. We have sampled 10 distinct classes within the Rhabditophora and find that all have the codon AAA coding for the amino acid Asn rather than the usual Lys and AUA for Ile rather than the usual Met. We find no evidence to support claims that the codon UAA codes for Tyr in the Platyhelminthes rather than the standard stop codon. The Rhabditophora are a very diverse group comprising the majority of the free-living turbellarian taxa and the parasitic Neodermata. In contrast, three other classes of turbellarian flatworm, the Acoela, Nemertodermatida, and Catenulida, have the standard invertebrate assignments for these codons and so are convincingly excluded from the rhabditophoran clade. We have developed a rapid computerized method for analyzing genetic codes and demonstrate the wide phylogenetic distribution of the standard invertebrate code as well as confirming already known metazoan deviations from it (ascidian, vertebrate, echinoderm/hemichordate). PMID:11027335
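
    The character itself is just a reassignment in the codon table, which makes it easy to state programmatically. In this illustrative Python fragment (only a few codons are included for brevity), the two rhabditophoran reassignments change the translation of the same RNA:

      standard_invert_mt = {"AAA": "Lys", "AUA": "Met", "UUU": "Phe", "GGC": "Gly"}
      rhabditophoran_mt = dict(standard_invert_mt, AAA="Asn", AUA="Ile")

      def translate(rna, table):
          codons = [rna[i:i + 3] for i in range(0, len(rna) - len(rna) % 3, 3)]
          return "-".join(table.get(c, "???") for c in codons)

      seq = "AUAAAAUUU"
      print("standard invertebrate mtDNA:", translate(seq, standard_invert_mt))  # Met-Lys-Phe
      print("rhabditophoran mtDNA       :", translate(seq, rhabditophoran_mt))   # Ile-Asn-Phe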

  18. Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval.

    PubMed

    Wang, Di; Gao, Xinbo; Wang, Xiumei; He, Lihuo; Yuan, Bo

    2016-10-01

    Multimodal hashing, which conducts effective and efficient nearest neighbor search across heterogeneous data on large-scale multimedia databases, has been attracting increasing interest, given the explosive growth of multimedia content on the Internet. Recent multimodal hashing research mainly aims at learning compact binary codes that preserve semantic information given by labels. The overwhelming majority of these methods are similarity-preserving approaches which approximate the pairwise similarity matrix with Hamming distances between the to-be-learnt binary hash codes. However, these methods ignore the discriminative property in the hash learning process, which leaves hash codes from different classes undistinguished and therefore reduces the accuracy and robustness of the nearest neighbor search. To this end, we present a novel multimodal hashing method, named multimodal discriminative binary embedding (MDBE), which focuses on learning discriminative hash codes. First, the proposed method formulates hash function learning in terms of classification, where the binary codes generated by the learned hash functions are expected to be discriminative. Then, it exploits the label information to discover the shared structures inside heterogeneous data. Finally, the learned structures are preserved for hash codes to produce similar binary codes in the same class. Hence, the proposed MDBE can preserve both discriminability and similarity for hash codes, and will enhance retrieval accuracy. Thorough experiments on benchmark data sets demonstrate that the proposed method achieves excellent accuracy and competitive computational efficiency compared with the state-of-the-art methods for large-scale cross-modal retrieval tasks.
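
    Whatever the hash-learning method, the retrieval step it enables is the same: pack the learned binary codes and rank database items by Hamming distance to the query. The codes below are random placeholders standing in for MDBE's learned, class-discriminative codes:

      import numpy as np

      rng = np.random.default_rng(5)
      n_db, n_bits = 10000, 64

      db_codes = rng.integers(0, 2, (n_db, n_bits), dtype=np.uint8)  # placeholder codes
      query = rng.integers(0, 2, n_bits, dtype=np.uint8)

      # Pack each code into 8 bytes, then rank by Hamming distance via XOR.
      db_packed = np.packbits(db_codes, axis=1)
      q_packed = np.packbits(query)
      dist = np.unpackbits(db_packed ^ q_packed, axis=1).sum(axis=1)
      print("top-5 items:", np.argsort(dist)[:5], "distances:", np.sort(dist)[:5])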

  19. An object-oriented class library for medical software development.

    PubMed

    O'Kane, K C; McColligan, E E

    1996-12-01

    The objective of this research is the development of a Medical Object Library (MOL) consisting of reusable, inheritable, portable, extendable C++ classes that facilitate rapid development of medical software at reduced cost and increased functionality. The result of this research is a library of class objects that range in function from string and hierarchical file handling entities to high level, procedural agents that perform increasingly complex, integrated tasks. A system built upon these classes is compatible with any other system similarly constructed with respect to data definitions, semantics, data organization and storage. As new objects are built, they can be added to the class library for subsequent use. The MOL is a toolkit of software objects intended to support a common file access methodology, a unified medical record structure, consistent message processing, standard graphical display facilities and uniform data collection procedures. This work emphasizes the relationship that potentially exists between the structure of a hierarchical medical record and procedural language components by means of a hierarchical class library and tree structured file access facility. In doing so, it attempts to establish interest in and demonstrate the practicality of the hierarchical medical record model in the modern context of object oriented programming.

  20. Acquiring Word Class Distinctions in American Sign Language: Evidence from Handshape

    ERIC Educational Resources Information Center

    Brentari, Diane; Coppola, Marie; Jung, Ashley; Goldin-Meadow, Susan

    2013-01-01

    Handshape works differently in nouns versus a class of verbs in American Sign Language (ASL) and thus can serve as a cue to distinguish between these two word classes. Handshapes representing characteristics of the object itself ("object" handshapes) and handshapes representing how the object is handled ("handling" handshapes)…

  1. Constructions for finite-state codes

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.

    1987-01-01

    A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block code into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a free distance d_free that is as large as possible. These codes are found without the need for lengthy computer searches, and have potential applications for future deep-space coding systems. The issue of catastrophic error propagation (CEP) for FS codes is also investigated.

  2. Integrated design of the CSI evolutionary structure: A verification of the design methodology

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Joshi, S. M.; Elliott, Kenny B.; Walz, J. E.

    1993-01-01

    One of the main objectives of the Controls-Structures Interaction (CSI) program is to develop and evaluate integrated controls-structures design methodology for flexible space structures. Thus far, integrated design methodologies for a class of flexible spacecraft, which require fine attitude pointing and vibration suppression with no payload articulation, have been extensively investigated. Various integrated design optimization approaches, such as single-objective optimization and multi-objective optimization, have been implemented with an array of different objectives and constraints involving performance and cost measures, such as total mass, actuator mass, steady-state pointing performance, transient performance, control power, and many more. These studies have been performed using an integrated design software tool (CSI-DESIGN CODE) which is under development by the CSI-ADM team at the NASA Langley Research Center. To date, all of these studies, irrespective of the type of integrated optimization posed or objectives and constraints used, have indicated that integrated controls-structures design results in an overall spacecraft design which is considerably superior to designs obtained through a conventional sequential approach. Consequently, it is believed that validation of some of these results through fabrication and testing of a structure designed through an integrated design approach is warranted. The objective of this paper is to present and discuss the efforts that have been taken thus far for the validation of the integrated design methodology.

  3. Use of Generalized Fluid System Simulation Program (GFSSP) for Teaching and Performing Senior Design Projects at the Educational Institutions

    NASA Technical Reports Server (NTRS)

    Majumdar, A. K.; Hedayat, A.

    2015-01-01

    This paper describes the experience of the authors in using the Generalized Fluid System Simulation Program (GFSSP) in teaching the Design of Thermal Systems class at the University of Alabama in Huntsville. GFSSP is a finite-volume-based thermo-fluid system network analysis code, developed at NASA/Marshall Space Flight Center, and is extensively used in NASA, the Department of Defense, and the aerospace industry for propulsion system design, analysis, and performance evaluation. The educational version of GFSSP is freely available to all US higher education institutions. The main purpose of the paper is to illustrate the utilization of this user-friendly code for thermal systems design and fluid engineering courses and to encourage instructors to utilize the code for class assignments as well as senior design projects.

  4. Physiomodel - an integrative physiology in Modelica.

    PubMed

    Matejak, Marek; Kofranek, Jiri

    2015-08-01

    Physiomodel (http://www.physiomodel.org) is our reimplementation and extension of an integrative physiological model called HumMod 1.6 (http://www.hummod.org) using our Physiolibrary (http://www.physiolibrary.org). The computer language Modelica is well-suited to exactly formalize integrative physiology. Modelica is an equation-based, object-oriented language for hybrid ordinary differential equations (http://www.modelica.org). Almost every physiological term can be defined as a class in this language and can be instantiated as many times as it occurs in the body. Each class has a graphical icon for use in diagrams. These diagrams are self-describing; the Modelica code generated from them is the full representation of the underlying mathematical model. Special Modelica constructs of physical connectors from Physiolibrary allow us to create diagrams that are analogies of electrical circuits with Kirchhoff's laws. As electric currents and electric potentials are connected in the electrical domain, so are molar flows and concentrations in the chemical domain; volumetric flows and pressures in the hydraulic domain; flows of heat energy and temperatures in the thermal domain; and changes and amounts of members in the population domain.
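
    The across/through pairing described above can be mimicked outside Modelica; the minimal Python sketch below (class and field names are my own, not Physiolibrary's) pairs a potential-like variable with a flow-like variable and checks the Kirchhoff-style balance at a node.

    # Python stand-in (assumed names) for a Physiolibrary-style connector:
    # an "across" variable paired with a "through" variable.
    from dataclasses import dataclass

    @dataclass
    class Connector:
        potential: float   # across variable, e.g. pressure or concentration
        flow: float        # through variable, e.g. volumetric or molar flow

    def node_balanced(connectors, tol=1e-9):
        """Kirchhoff-style law: through variables at a node sum to zero."""
        return abs(sum(c.flow for c in connectors)) < tol

    a = Connector(potential=101325.0, flow=+2.0e-6)
    b = Connector(potential=101325.0, flow=-2.0e-6)
    print(node_balanced([a, b]))   # True: flows into the node balance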

  5. Hazardous Material Packaging and Transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hypes, Philip A.

    2016-02-04

    This is a student training course. Some course objectives are to: recognize and use standard international and US customary units to describe activities and exposure rates associated with radioactive material; determine whether a quantity of a single radionuclide meets the definition of a class 7 (radioactive) material; determine, for a given single radionuclide, the shipping quantity activity limits per 49 Code of Federal Regulations (CFR) 173.435; determine the appropriate radioactive material hazard class proper shipping name for a given material; determine when a single radionuclide meets the DOT definition of a hazardous substance; determine the appropriate packaging required for a given radioactive material; identify the markings to be placed on a package of radioactive material; determine the label(s) to apply to a given radioactive material package; identify the entry requirements for radioactive material labels; determine the proper placement for radioactive material label(s); identify the shipping paper entry requirements for radioactive material; select the appropriate placards for a given radioactive material shipment or vehicle load; and identify allowable transport limits and unacceptable transport conditions for radioactive material.

  6. Impact of the Choice of Normalization Method on Molecular Cancer Class Discovery Using Nonnegative Matrix Factorization.

    PubMed

    Yang, Haixuan; Seoighe, Cathal

    2016-01-01

    Nonnegative Matrix Factorization (NMF) has proved to be an effective method for unsupervised clustering analysis of gene expression data. Owing to the nonnegativity constraint, NMF provides a decomposition of the data matrix into two matrices that have been used for clustering analysis. However, the decomposition is not unique, which allows different clustering results to be obtained and leads to different interpretations of the decomposition. To alleviate this problem, some existing methods directly enforce uniqueness to some extent by adding regularization terms to the NMF objective function. Alternatively, various normalization methods have been applied to the factor matrices; however, the effects of the choice of normalization have not been carefully investigated. Here we investigate the performance of NMF for the task of cancer class discovery under a wide range of normalization choices. After extensive evaluations, we observe that the maximum norm showed the best performance, although the maximum norm had not previously been used for NMF. Matlab codes are freely available from: http://maths.nuigalway.ie/~haixuanyang/pNMF/pNMF.htm.
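
    A minimal sketch of the max-norm idea, assuming random non-negative toy data and scikit-learn's NMF; the normalization rescales each factor column to unit maximum while leaving the product W·H unchanged.

    # Sketch (assumed data and shapes): max-norm normalization of NMF
    # factors before cluster assignment.
    import numpy as np
    from sklearn.decomposition import NMF

    X = np.abs(np.random.default_rng(0).normal(size=(100, 40)))  # genes x samples
    model = NMF(n_components=3, init="nndsvda", random_state=0, max_iter=500)
    W = model.fit_transform(X)          # 100 x 3 metagene matrix
    H = model.components_               # 3 x 40 metasample matrix

    # Max-norm normalization: scale each metagene column of W to unit max,
    # pushing the compensating scale into H so W @ H is unchanged
    # (assumes no component collapses to all zeros).
    scale = W.max(axis=0)
    W_n = W / scale
    H_n = H * scale[:, None]

    clusters = H_n.argmax(axis=0)       # assign each sample to its dominant metagene
    print(np.allclose(W @ H, W_n @ H_n), clusters[:10])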

  7. Discriminative Bayesian Dictionary Learning for Classification.

    PubMed

    Akhtar, Naveed; Shafait, Faisal; Mian, Ajmal

    2016-12-01

    We propose a Bayesian approach to learn discriminative dictionaries for sparse representation of data. The proposed approach infers probability distributions over the atoms of a discriminative dictionary using a finite approximation of the Beta process. It also computes sets of Bernoulli distributions that associate class labels with the learned dictionary atoms. This association signifies the selection probabilities of the dictionary atoms in the expansion of class-specific data. Furthermore, the non-parametric character of the proposed approach allows it to infer the correct size of the dictionary. We exploit the aforementioned Bernoulli distributions in separately learning a linear classifier. The classifier uses the same hierarchical Bayesian model as the dictionary, which we present along with the analytical inference solution for Gibbs sampling. For classification, a test instance is first sparsely encoded over the learned dictionary and the codes are fed to the classifier. We performed experiments for face and action recognition, and object and scene-category classification, using five public datasets, and compared the results with state-of-the-art discriminative sparse representation approaches. Experiments show that the proposed Bayesian approach consistently outperforms the existing approaches.
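
    The following is not the authors' Beta-process model; it is a simplified stand-in built from scikit-learn pieces that shows the pipeline the abstract describes: sparse-code a test sample over a learned dictionary, then feed the codes to a separately trained linear classifier.

    # Simplified encode-then-classify sketch (toy data; dictionary learning
    # and classifier choices are mine, not the paper's Bayesian model).
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))            # 200 training samples, 64-dim
    y = (X[:, 0] > 0).astype(int)             # toy labels

    dico = MiniBatchDictionaryLearning(n_components=32, random_state=0).fit(X)
    coder = SparseCoder(dictionary=dico.components_,
                        transform_algorithm="omp",
                        transform_n_nonzero_coefs=5)
    codes = coder.transform(X)                # sparse codes over the dictionary

    clf = LogisticRegression(max_iter=1000).fit(codes, y)
    x_test = rng.normal(size=(1, 64))
    print(clf.predict(coder.transform(x_test)))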

  8. The Waveform Suite: A robust platform for accessing and manipulating seismic waveforms in MATLAB

    NASA Astrophysics Data System (ADS)

    Reyes, C. G.; West, M. E.; McNutt, S. R.

    2009-12-01

    The Waveform Suite, developed at the University of Alaska Geophysical Institute, is an open-source collection of MATLAB classes that provide a means to import, manipulate, display, and share waveform data while ensuring integrity of the data and stability for programs that incorporate them. Data may be imported from a variety of sources, such as Antelope, Winston databases, SAC files, SEISAN, .mat files, or other user-defined file formats. The waveforms being manipulated in MATLAB are isolated from their stored representations, relieving the overlying programs from the responsibility of understanding the specific format in which data is stored or retrieved. The waveform class provides an object-oriented framework that simplifies manipulations to waveform data. Playing with data becomes easier because the tedious aspects of data manipulation have been automated. The user is able to change multiple waveforms simultaneously using standard mathematical operators and other syntactically familiar functions. Unlike MATLAB structs or workspace variables, the data stored within waveform class objects are protected from modification, and instead are accessed through standardized functions, such as get and set; these are already familiar to users of MATLAB’s graphical features. This prevents accidental or nonsensical modifications to the data, which in turn simplifies troubleshooting of complex programs. Upgrades to the internal structure of the waveform class are invisible to applications which use it, making maintenance easier. We demonstrate the Waveform Suite’s capabilities on seismic data from Okmok and Redoubt volcanoes. Years of data from Okmok were retrieved from Antelope and Winston databases. Using the Waveform Suite, we built a tremor-location program. Because the program was built on the Waveform Suite, modifying it to operate on real-time data from Redoubt involved only minimal code changes. The utility of the Waveform Suite as a foundation for large developments is demonstrated with the Correlation Toolbox for MATLAB. This mature package contains 50+ codes for carrying out various types of waveform correlation analyses (multiplet analysis, clustering, interferometry, …). This package is greatly strengthened by delegating numerous book-keeping and signal processing tasks to the underlying Waveform Suite. The Waveform Suite’s built-in tools for searching arbitrary directory/file structures are demonstrated with matched video and audio from the recent eruption of Redoubt Volcano. These tools were used to find subsets of photo images corresponding to specific seismic traces. Using Waveform’s audio file routines, matched video and audio were assembled to produce outreach-quality eruption products. The Waveform Suite is not designed as a ready-to-go replacement for more comprehensive packages such as SAC or AH. Rather, it is a suite of classes which provide core time series functionality in a MATLAB environment. It is designed to be a more robust alternative to the numerous ad hoc MATLAB formats that exist. Complex programs may be created upon the Waveform Suite’s framework, while existing programs may be modified to take advantage of the Waveform Suite’s capabilities.

  9. A new systems engineering approach to streamlined science and mission operations for the Far Ultraviolet Spectroscopic Explorer (FUSE)

    NASA Technical Reports Server (NTRS)

    Butler, Madeline J.; Sonneborn, George; Perkins, Dorothy C.

    1994-01-01

    The Mission Operations and Data Systems Directorate (MO&DSD, Code 500), the Space Sciences Directorate (Code 600), and the Flight Projects Directorate (Code 400) have developed a new approach to combine the science and mission operations for the FUSE mission. FUSE, the last of the Delta-class Explorer missions, will obtain high resolution far ultraviolet spectra (910 - 1220 A) of stellar and extragalactic sources to study the evolution of galaxies and conditions in the early universe. FUSE will be launched in 2000 into a 24-hour highly eccentric orbit. Science operations will be conducted in real time for 16-18 hours per day, in a manner similar to the operations performed today for the International Ultraviolet Explorer. In a radical departure from previous missions, the operations concept combines spacecraft and science operations and data processing functions in a single facility to be housed in the Laboratory for Astronomy and Solar Physics (Code 680). A small mission operations team will provide the spacecraft control, telescope operations, and data handling functions in a facility designated as the Science and Mission Operations Center (SMOC). This approach will utilize the Transportable Payload Operations Control Center (TPOCC) architecture for both spacecraft and instrument commanding. Other concepts of integrated operations being developed by the Code 500 Renaissance Project will also be employed for the FUSE SMOC. The primary objective of this approach is to reduce development and mission operations costs. The operations concept, integration of mission and science operations, and extensive use of existing hardware and software tools will decrease both development and operations costs substantially. This paper describes the FUSE operations concept, discusses the systems engineering approach used for its development, and outlines the software, hardware, and management tools that will make its implementation feasible.

  10. Statistical Evaluation of the Rodin–Ohno Hypothesis: Sense/Antisense Coding of Ancestral Class I and II Aminoacyl-tRNA Synthetases

    PubMed Central

    Chandrasekaran, Srinivas Niranj; Yardimci, Galip Gürkan; Erdogan, Ozgün; Roach, Jeffrey; Carter, Charles W.

    2013-01-01

    We tested the idea that ancestral class I and II aminoacyl-tRNA synthetases arose on opposite strands of the same gene. We assembled excerpted 94-residue Urgenes for class I tryptophanyl-tRNA synthetase (TrpRS) and class II histidyl-tRNA synthetase (HisRS) from a diverse group of species, by identifying and catenating three blocks coding for secondary structures that position the most highly conserved, active-site residues. The codon middle-base pairing frequency was 0.35 ± 0.0002 in all-by-all sense/antisense alignments for 211 TrpRS and 207 HisRS sequences, compared with frequencies between 0.22 ± 0.0009 and 0.27 ± 0.0005 for eight different representations of the null hypothesis. Clustering algorithms demonstrate further that profiles of middle-base pairing in the synthetase antisense alignments are correlated along the sequences from one species-pair to another, whereas this is not the case for similar operations on sets representing the null hypothesis. Most probable reconstructed sequences for ancestral nodes of maximum likelihood trees show that middle-base pairing frequency increases to approximately 0.42 ± 0.002 as bacterial trees approach their roots; ancestral nodes from trees including archaeal sequences show a less pronounced increase. Thus, contemporary and reconstructed sequences all validate important bioinformatic predictions based on descent from opposite strands of the same ancestral gene. They further provide novel evidence for the hypothesis that bacteria lie closer than archaea to the origin of translation. Moreover, the inverse polarity of genetic coding, together with a priori α-helix propensities, suggests that in-frame coding on opposite strands leads to similar secondary structures with opposite polarity, as observed in TrpRS and HisRS crystal structures. PMID:23576570
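
    For concreteness, here is a toy computation of the middle-base pairing statistic, under the assumption that the two genes are read in frame on opposite strands of the same duplex; the sequences and helper names are invented, not the paper's data.

    # Toy middle-base pairing frequency (assumed representation).
    COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def middle_base_pairing(sense, antisense_gene):
        """Fraction of codon middle bases that Watson-Crick pair when the
        two genes (5'->3' strings, equal length, multiple of 3) are read
        in frame on opposite strands of the same duplex."""
        L, n, hits = len(sense), len(sense) // 3, 0
        for i in range(n):
            mid1 = sense[3 * i + 1]
            mid2 = antisense_gene[L - (3 * i + 1) - 1]  # base facing mid1
            hits += COMP[mid1] == mid2
        return hits / n

    # a perfect sense/antisense pair scores 1.0
    print(middle_base_pairing("ATGGCTAAA", "TTTAGCCAT"))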

  11. Deep Learning for Automated Extraction of Primary Sites from Cancer Pathology Reports

    DOE PAGES

    Qiu, John; Yoon, Hong-Jun; Fearn, Paul A.; ...

    2017-05-03

    Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study we investigated deep learning, specifically a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on the CNN method and cancer site. Finally, these encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
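
    A generic sketch of such a text CNN in Keras follows; the vocabulary size, sequence length, and layer sizes are my own toy choices, not the paper's published architecture.

    # Toy 1-D CNN text classifier for code assignment (assumed shapes).
    import numpy as np
    from tensorflow.keras import layers, models

    VOCAB, MAXLEN, N_CODES = 5000, 400, 12    # 12 topography codes, as in the study

    model = models.Sequential([
        layers.Embedding(VOCAB, 128),             # token embeddings
        layers.Conv1D(128, 5, activation="relu"), # 5-token convolution window
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(N_CODES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # stand-in data: integer token ids per report, one code label per report
    X = np.random.randint(1, VOCAB, size=(32, MAXLEN))
    y = np.random.randint(0, N_CODES, size=(32,))
    model.fit(X, y, epochs=1, verbose=0)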

  12. Deep Learning for Automated Extraction of Primary Sites from Cancer Pathology Reports

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, John; Yoon, Hong-Jun; Fearn, Paul A.

    Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study we investigated deep learning, specifically a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on the CNN method and cancer site. Finally, these encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.

  13. Lossy to lossless object-based coding of 3-D MRI data.

    PubMed

    Menegaz, Gloria; Thiran, Jean-Philippe

    2002-01-01

    We propose a fully three-dimensional (3-D) object-based coding system exploiting the diagnostic relevance of the different regions of the volumetric data for rate allocation. The data are first decorrelated via a 3-D discrete wavelet transform. The implementation via the lifting steps scheme allows integer-to-integer mapping, enabling lossless coding, and facilitates the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bitstream to the different objects, which can be independently accessed and reconstructed at any up-to-lossless quality. Two fully 3-D coding strategies are considered: embedded zerotree coding (EZW-3D) and multidimensional layered zero coding (MLZC), both generalized for region of interest (ROI)-based processing. In order to avoid artifacts along region boundaries, some extra coefficients must be encoded for each object. This gives rise to an overhead in the bitstream with respect to the case where the volume is encoded as a whole. The amount of such extra information depends on both the filter length and the decomposition depth. The system is characterized on a set of head magnetic resonance images. Results show that MLZC and EZW-3D have competitive performances. In particular, the best MLZC mode outperforms the other state-of-the-art techniques on one of the datasets for which results are available in the literature.
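
    The lossless property comes from lifting steps that stay in the integers. The simplest instance, an integer Haar (S-transform) step, is sketched below with its exact inverse; the paper's 3-D transform uses longer lifting filters, so this is only the one-dimensional idea.

    # Integer Haar lifting step: perfectly invertible on integers.
    def haar_lifting_fwd(x):
        """x: even-length list of ints -> (approx, detail), all ints."""
        d = [x[2 * i + 1] - x[2 * i] for i in range(len(x) // 2)]   # predict
        s = [x[2 * i] + (d[i] >> 1) for i in range(len(x) // 2)]    # update
        return s, d

    def haar_lifting_inv(s, d):
        x = []
        for si, di in zip(s, d):
            even = si - (di >> 1)
            x += [even, even + di]
        return x

    x = [5, 7, 3, 4, 10, 10, 2, 1]
    s, d = haar_lifting_fwd(x)
    assert haar_lifting_inv(s, d) == x   # lossless round trip
    print(s, d)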

  14. The Impact of Codes of Conduct on Stakeholders

    ERIC Educational Resources Information Center

    Newman, Wayne R.

    2015-01-01

    The purpose of this study was to determine how an urban school district's code of conduct aligned with actual school/class behaviors, and how stakeholders perceived the ability of this document to achieve its number one goal: safe and productive learning environments. Twenty participants including students, teachers, parents, and administrators…

  15. 40 CFR 147.200 - State-administered program-Class I, III, IV, and V wells.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the program administered by the Arkansas Department of Pollution Control and Ecology approved by EPA... Code, Department of Pollution Control and Ecology, promulgated January 22, 1982; (4) General Rule and... Management Code, Department of Pollution Control and Ecology, promulgated August 21, 1981. (b) The Memorandum...

  16. 40 CFR 147.200 - State-administered program-Class I, III, IV, and V wells.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the program administered by the Arkansas Department of Pollution Control and Ecology approved by EPA... Code, Department of Pollution Control and Ecology, promulgated January 22, 1982; (4) General Rule and... Management Code, Department of Pollution Control and Ecology, promulgated August 21, 1981. (b) The Memorandum...

  17. Detecting Runtime Anomalies in AJAX Applications through Trace Analysis

    DTIC Science & Technology

    2011-08-10

    statements by adding the instrumentation to the GWT UI classes, leaving the user code untouched. Some content management frameworks such as Drupal [12]… “Google web toolkit.” http://code.google.com/webtoolkit/. [12] “Form generation – drupal api.” http://api.drupal.org/api/group/form_api/6.

  18. 40 CFR 147.200 - State-administered program-Class I, III, IV, and V wells.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the program administered by the Arkansas Department of Pollution Control and Ecology approved by EPA... Code, Department of Pollution Control and Ecology, promulgated January 22, 1982; (4) General Rule and... Management Code, Department of Pollution Control and Ecology, promulgated August 21, 1981. (b) The Memorandum...

  19. 40 CFR 147.200 - State-administered program-Class I, III, IV, and V wells.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the program administered by the Arkansas Department of Pollution Control and Ecology approved by EPA... Code, Department of Pollution Control and Ecology, promulgated January 22, 1982; (4) General Rule and... Management Code, Department of Pollution Control and Ecology, promulgated August 21, 1981. (b) The Memorandum...

  20. 40 CFR 147.200 - State-administered program-Class I, III, IV, and V wells.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the program administered by the Arkansas Department of Pollution Control and Ecology approved by EPA... Code, Department of Pollution Control and Ecology, promulgated January 22, 1982; (4) General Rule and... Management Code, Department of Pollution Control and Ecology, promulgated August 21, 1981. (b) The Memorandum...

  1. Computer Viruses: Prevention, Detection, and Treatment

    DTIC Science & Technology

    1990-03-12

    executed, also carries out its covert function, potentially undetected. This class of attack earned the term "Trojan horse" from the original in Greek mythology, signifying a gift which conceals a malicious purpose. The offending code may be present in a code segment the user "touches," which

  2. An Object Oriented Analysis Method for Ada and Embedded Systems

    DTIC Science & Technology

    1989-12-01

    expansion of the paradigm from the coding and designing activities into the earlier activity of requirements analysis. This chapter begins by discussing the application of… response time: 0.1 seconds. Step 1e: Identify Known Restrictions on the Software. "The cruise control system object code must fit within 16K of memory…" application of object-oriented techniques to the coding and design phases of the life cycle, as well as various approaches to requirements analysis.

  3. "Object Lesson": Using Family Heirlooms to Engage Students in Art History

    ERIC Educational Resources Information Center

    Rose, Marice

    2012-01-01

    This first written assignment of the semester for the author's undergraduate introductory art history class--an essay where students describe and reflect upon the significance of a family heirloom--is instrumental in meeting class objectives. The author's objectives in this class are for students: (1) to broaden their conception of what art is…

  4. Classic-Ada(TM)

    NASA Technical Reports Server (NTRS)

    Valley, Lois

    1989-01-01

    The SPS product, Classic-Ada, is a software tool that supports object-oriented Ada programming with powerful inheritance and dynamic binding. Object Oriented Design (OOD) is an easy, natural development paradigm, but it is not supported by Ada. Following the DOD Ada mandate, SPS developed Classic-Ada to provide a tool which supports OOD and implements code in Ada. It consists of a design language, a code generator, and a toolset. As a design language, Classic-Ada supports the object-oriented principles of information hiding, data abstraction, dynamic binding, and inheritance. It also supports natural reuse and incremental development through inheritance and code factoring, and allows Ada and Classic-Ada, dynamic binding and static binding, to be mixed in the same program. Only nine new constructs were added to Ada to provide object-oriented design capabilities. The Classic-Ada code generator translates user application code into fully compliant, ready-to-run, standard Ada. The Classic-Ada toolset is fully supported by SPS and consists of an object generator, a builder, a dictionary manager, and a reporter. Demonstrations of Classic-Ada and the Classic-Ada Browser were given at the workshop.

  5. Error threshold for color codes and random three-body Ising models.

    PubMed

    Katzgraber, Helmut G; Bombin, H; Martin-Delgado, M A

    2009-08-28

    We study the error threshold of color codes, a class of topological quantum codes that allow a direct implementation of quantum Clifford gates suitable for entanglement distillation, teleportation, and fault-tolerant quantum computation. We map the error-correction process onto a statistical mechanical random three-body Ising model and study its phase diagram via Monte Carlo simulations. The obtained error threshold of p_c = 0.109(2) is very close to that of Kitaev's toric code, showing that enhanced computational capabilities do not necessarily imply lower resistance to noise.
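
    As a flavor of the statistical-mechanical mapping, here is a toy Metropolis sampler for a three-body Ising chain; the paper's model lives on a triangular lattice with quenched disorder set by the error rate, so this uniform one-dimensional analogue is illustrative only.

    # Toy Metropolis sampler for H = -J * sum_i s_i s_{i+1} s_{i+2} on a ring.
    import numpy as np

    rng = np.random.default_rng(1)
    N, J, T, steps = 64, 1.0, 2.0, 20000
    s = rng.choice([-1, 1], size=N)

    def local_energy(s, i):
        """Sum of the three-body terms -J s_a s_b s_c that involve spin i."""
        e = 0.0
        for a in (i - 2, i - 1, i):        # the three triples containing i
            e += -J * s[a % N] * s[(a + 1) % N] * s[(a + 2) % N]
        return e

    for _ in range(steps):
        i = rng.integers(N)
        dE = -2.0 * local_energy(s, i)     # flipping s_i negates its terms
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]

    print("mean three-body term:",
          np.mean([s[i] * s[(i + 1) % N] * s[(i + 2) % N] for i in range(N)]))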

  6. Transformation of two and three-dimensional regions by elliptic systems

    NASA Technical Reports Server (NTRS)

    Mastin, C. Wayne

    1993-01-01

    During this contract period, our work has focused on improvements to elliptic grid generation methods. There are two principal objectives in this project. One objective is to make the elliptic methods more reliable and efficient, and the other is to construct a modular code that can be incorporated into the National Grid Project (NGP), or any other grid generation code. Progress has been made in meeting both of these objectives. The two objectives are actually complementary. As the code development for the NGP progresses, we see many areas where improvements in algorithms can be made.

  7. Performance Comparison of Orthogonal and Quasi-orthogonal Codes in Quasi-Synchronous Cellular CDMA Communication

    NASA Astrophysics Data System (ADS)

    Jos, Sujit; Kumar, Preetam; Chakrabarti, Saswat

    Orthogonal and quasi-orthogonal codes are an integral part of any DS-CDMA-based cellular system. Orthogonal codes are ideal for use in perfectly synchronous scenarios such as downlink cellular communication. Quasi-orthogonal codes are preferred over orthogonal codes in uplink communication, where perfect synchronization cannot be achieved. In this paper, we attempt to compare orthogonal and quasi-orthogonal codes in the presence of timing synchronization error. This gives insight into the synchronization demands of DS-CDMA systems employing the two classes of sequences. The synchronization error considered is smaller than the chip duration. Monte-Carlo simulations have been carried out to verify the analytical and numerical results.
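
    The effect is easy to reproduce: two Walsh codes that are exactly orthogonal when chip-aligned pick up cross-correlation under a sub-chip offset. The oversampling factor, the particular code pair, and the circular-shift model of periodically repeated symbols are my assumptions, not the paper's setup.

    # Sub-chip timing offset destroys Walsh-code orthogonality (sketch).
    import numpy as np
    from scipy.linalg import hadamard

    OS = 16                                   # samples per chip
    W = hadamard(8)                           # 8 Walsh codes of length 8
    a = np.repeat(W[2], OS).astype(float)     # rectangular chip waveforms
    b = np.repeat(W[3], OS).astype(float)

    def xcorr(a, b, shift):
        """Normalized correlation with b circularly delayed by `shift` samples
        (circular shift stands in for periodically repeated symbols)."""
        return float(np.dot(a, np.roll(b, shift))) / len(a)

    for frac in (0.0, 0.25, 0.5):             # offset as a fraction of a chip
        print(f"offset = {frac:.2f} chip -> cross-correlation = "
              f"{xcorr(a, b, int(frac * OS)):+.2f}")
    # prints +0.00, +0.25, +0.50: orthogonality degrades with timing error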

  8. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  9. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  10. A class Hierarchical, object-oriented approach to virtual memory management

    NASA Technical Reports Server (NTRS)

    Russo, Vincent F.; Campbell, Roy H.; Johnston, Gary M.

    1989-01-01

    The Choices family of operating systems exploits class hierarchies and object-oriented programming to facilitate the construction of customized operating systems for shared memory and networked multiprocessors. The software is being used in the Tapestry laboratory to study the performance of algorithms, mechanisms, and policies for parallel systems. Described here are the architectural design and class hierarchy of the Choices virtual memory management system. The software and hardware mechanisms and policies of a virtual memory system implement a memory hierarchy that exploits the trade-off between response times and storage capacities. In Choices, the notion of a memory hierarchy is captured by abstract classes. Concrete subclasses of those abstractions implement a virtual address space, segmentation, paging, physical memory management, secondary storage, and remote (that is, networked) storage. Captured in the notion of a memory hierarchy are classes that represent memory objects. These classes provide a storage mechanism that contains encapsulated data and have methods to read or write the memory object. Each of these classes provides specializations to represent the memory hierarchy.
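
    A compact Python analogue of the pattern described follows (class and method names are invented): an abstract memory-object class with encapsulated storage and read/write accessors, specialized down the hierarchy.

    # Sketch of a memory-hierarchy class tree with encapsulated data.
    from abc import ABC, abstractmethod

    class MemoryObject(ABC):
        """Abstract store: data is private; access only via read/write."""
        @abstractmethod
        def read(self, offset: int, length: int) -> bytes: ...
        @abstractmethod
        def write(self, offset: int, data: bytes) -> None: ...

    class PhysicalMemory(MemoryObject):
        def __init__(self, size: int):
            self._frames = bytearray(size)        # encapsulated backing store
        def read(self, offset, length):
            return bytes(self._frames[offset:offset + length])
        def write(self, offset, data):
            self._frames[offset:offset + len(data)] = data

    class Segment(MemoryObject):
        """Maps a virtual range onto a lower level of the hierarchy."""
        def __init__(self, backing: MemoryObject, base: int):
            self._backing, self._base = backing, base
        def read(self, offset, length):
            return self._backing.read(self._base + offset, length)
        def write(self, offset, data):
            self._backing.write(self._base + offset, data)

    ram = PhysicalMemory(4096)
    seg = Segment(ram, base=1024)
    seg.write(0, b"choices")
    print(seg.read(0, 7))                         # b'choices'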

  11. Techniques for the analysis of data from coded-mask X-ray telescopes

    NASA Technical Reports Server (NTRS)

    Skinner, G. K.; Ponman, T. J.; Hammersley, A. P.; Eyles, C. J.

    1987-01-01

    Several techniques useful in the analysis of data from coded-mask telescopes are presented. Methods of handling changes in the instrument pointing direction are reviewed and ways of using FFT techniques to do the deconvolution considered. Emphasis is on techniques for optimally-coded systems, but it is shown that the range of systems included in this class can be extended through the new concept of 'partial cycle averaging'.
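
    Below is a minimal FFT-based decode for a toy coded-mask system, assuming a random open/closed mask and a single point source; real instruments use URA-style masks and flat-fielding corrections omitted here.

    # Toy coded-mask imaging: encode by convolution, decode by FFT correlation.
    import numpy as np

    rng = np.random.default_rng(2)
    N = 64
    mask = rng.integers(0, 2, size=(N, N)).astype(float)   # open/closed elements
    sky = np.zeros((N, N)); sky[20, 45] = 100.0            # one point source

    # detector image: circular convolution of sky with mask (toy geometry)
    F = np.fft.fft2
    det = np.real(np.fft.ifft2(F(sky) * F(mask)))

    # decode by correlating with the mean-subtracted mask via FFT
    decode = np.real(np.fft.ifft2(F(det) * np.conj(F(mask - mask.mean()))))
    y, x = np.unravel_index(np.argmax(decode), decode.shape)
    print("recovered source position:", (y, x))            # ~ (20, 45)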

  12. Standard Anatomic Terminologies: Comparison for Use in a Health Information Exchange–Based Prior Computed Tomography (CT) Alerting System

    PubMed Central

    Lowry, Tina; Vreeman, Daniel J; Loo, George T; Delman, Bradley N; Thum, Frederick L; Slovis, Benjamin H; Shapiro, Jason S

    2017-01-01

    Background A health information exchange (HIE)–based prior computed tomography (CT) alerting system may reduce avoidable CT imaging by notifying ordering clinicians of prior relevant studies when a study is ordered. For maximal effectiveness, a system would alert not only for prior same CTs (exams mapped to the same code from an exam name terminology) but also for similar CTs (exams mapped to different exam name terminology codes but in the same anatomic region) and anatomically proximate CTs (exams in adjacent anatomic regions). Notification of previous same studies across an HIE requires mapping of local site CT codes to a standard terminology for exam names (such as Logical Observation Identifiers Names and Codes [LOINC]) to show that two studies with different local codes and descriptions are equivalent. Notifying of prior similar or proximate CTs requires an additional mapping of exam codes to anatomic regions, ideally coded by an anatomic terminology. Several anatomic terminologies exist, but no prior studies have evaluated how well they would support an alerting use case. Objective The aim of this study was to evaluate the fitness of five existing standard anatomic terminologies to support similar or proximate alerts of an HIE-based prior CT alerting system. Methods We compared five standard anatomic terminologies (Foundational Model of Anatomy, Systematized Nomenclature of Medicine Clinical Terms, RadLex, LOINC, and LOINC/Radiological Society of North America [RSNA] Radiology Playbook) to an anatomic framework created specifically for our use case (Simple ANatomic Ontology for Proximity or Similarity [SANOPS]), to determine whether the existing terminologies could support our use case without modification. On the basis of an assessment of optimal terminology features for our purpose, we developed an ordinal anatomic terminology utility classification. We mapped samples of 100 random and the 100 most frequent LOINC CT codes to anatomic regions in each terminology, assigned utility classes for each mapping, and statistically compared each terminology’s utility class rankings. We also constructed seven hypothetical alerting scenarios to illustrate the terminologies’ differences. Results Both RadLex and the LOINC/RSNA Radiology Playbook anatomic terminologies ranked significantly better (P<.001) than the other standard terminologies for the 100 most frequent CTs, but no terminology ranked significantly better than any other for 100 random CTs. Hypothetical scenarios illustrated instances where no standard terminology would support appropriate proximate or similar alerts, without modification. Conclusions LOINC/RSNA Radiology Playbook and RadLex’s anatomic terminologies appear well suited to support proximate or similar alerts for commonly ordered CTs, but for less commonly ordered tests, modification of the existing terminologies with concepts and relations from SANOPS would likely be required. Our findings suggest SANOPS may serve as a framework for enhancing anatomic terminologies in support of other similar use cases. PMID:29242174

  13. A unified framework of unsupervised subjective optimized bit allocation for multiple video object coding

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi

    2005-10-01

    MPEG-4 treats a scene as a composition of several objects or so-called video object planes (VOPs) that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects with different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties, and psycho-visual characteristics, such that the bit budget can be distributed properly among video objects to improve the perceptual quality of the compressed video. This paper aims to provide an automatic video object priority definition method based on an object-level visual attention model and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that the human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of the video object can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of the object with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
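
    One standard way to realize priority-driven allocation is a weighted Lagrangian (equal-slope) solution under the classic exponential rate-distortion model; the closed form below is a textbook sketch, not the paper's exact optimizer, and the variances and weights are invented.

    # Priority-weighted bit allocation: minimize sum_i w_i s_i^2 2^(-2 R_i)
    # subject to sum_i R_i = R_total (equal-slope Lagrangian solution).
    import numpy as np

    def allocate_bits(sigma2, weights, R_total):
        ws = np.asarray(weights) * np.asarray(sigma2)
        # closed form; can go negative for very small w_i * sigma_i^2
        return R_total / len(ws) + 0.5 * (np.log2(ws) - np.mean(np.log2(ws)))

    sigma2 = [4.0, 1.0, 0.25]        # object variances (foreground .. background)
    weights = [4.0, 2.0, 1.0]        # attention-model priorities (assumed)
    R = allocate_bits(sigma2, weights, R_total=12.0)
    print(R, R.sum())                # higher-priority objects get more bits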

  14. Coding the presence of visual objects in a recurrent neural network of visual cortex.

    PubMed

    Zwickel, Timm; Wachtler, Thomas; Eckhorn, Reinhard

    2007-01-01

    Before we can recognize a visual object, our visual system has to segregate it from its background. This requires a fast mechanism for establishing the presence and location of objects independently of their identity. Recently, border-ownership neurons were recorded in monkey visual cortex which might be involved in this task [Zhou, H., Friedmann, H., von der Heydt, R., 2000. Coding of border ownership in monkey visual cortex. J. Neurosci. 20 (17), 6594-6611]. In order to explain the basic mechanisms required for fast coding of object presence, we have developed a neural network model of visual cortex consisting of three stages. Feed-forward and lateral connections support coding of Gestalt properties, including similarity, good continuation, and convexity. Neurons of the highest area respond to the presence of an object and encode its position, invariant of its form. Feedback connections to the lowest area facilitate orientation detectors activated by contours belonging to potential objects, and thus generate the experimentally observed border-ownership property. This feedback control acts fast and significantly improves the figure-ground segregation required for the consecutive task of object recognition.

  15. 49 CFR 176.176 - Signals.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Signals. 176.176 Section 176.176 Transportation... Class 1 (Explosive) Materials Handling Class 1 (explosive) Materials in Port § 176.176 Signals. When... exhibit the following signals: (a) By day, flag “B” (Bravo) of the international code of signals; and (b...

  16. 49 CFR 176.176 - Signals.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 2 2011-10-01 2011-10-01 false Signals. 176.176 Section 176.176 Transportation... Class 1 (Explosive) Materials Handling Class 1 (explosive) Materials in Port § 176.176 Signals. When... exhibit the following signals: (a) By day, flag “B” (Bravo) of the international code of signals; and (b...

  17. Coping with Teen-Age Parents.

    ERIC Educational Resources Information Center

    Slavick, Carol A.

    In 1968 the California Education Code section on physically handicapped minors was amended to include pregnant girls. This change was intended to give school districts the responsibility and the funds to develop special classes or schools for teenage pregnant girls. This special class makes it possible to provide more educational materials,…

  18. 29 CFR 1926.407 - Hazardous (classified) locations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Electrical Code, lists or defines hazardous gases, vapors, and dusts by “Groups” characterized by their... the class, group, and operating temperature or temperature range, based on operation in a 40-degree C... be marked to indicate the group. (C) Fixed general-purpose equipment in Class I locations, other than...

  19. 29 CFR 1926.407 - Hazardous (classified) locations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Electrical Code, lists or defines hazardous gases, vapors, and dusts by “Groups” characterized by their... the class, group, and operating temperature or temperature range, based on operation in a 40-degree C... be marked to indicate the group. (C) Fixed general-purpose equipment in Class I locations, other than...

  20. 49 CFR 176.176 - Signals.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 2 2012-10-01 2012-10-01 false Signals. 176.176 Section 176.176 Transportation... Class 1 (Explosive) Materials Handling Class 1 (explosive) Materials in Port § 176.176 Signals. When... exhibit the following signals: (a) By day, flag “B” (Bravo) of the international code of signals; and (b...

  1. 49 CFR 176.176 - Signals.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 2 2013-10-01 2013-10-01 false Signals. 176.176 Section 176.176 Transportation... Class 1 (Explosive) Materials Handling Class 1 (explosive) Materials in Port § 176.176 Signals. When... exhibit the following signals: (a) By day, flag “B” (Bravo) of the international code of signals; and (b...

  2. 49 CFR 176.176 - Signals.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 2 2014-10-01 2014-10-01 false Signals. 176.176 Section 176.176 Transportation... Class 1 (Explosive) Materials Handling Class 1 (explosive) Materials in Port § 176.176 Signals. When... exhibit the following signals: (a) By day, flag “B” (Bravo) of the international code of signals; and (b...

  3. 49 CFR 217.7 - Operating rules; filing and recordkeeping.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., Washington, DC 20590, one copy of its code of operating rules, timetables, and timetable special instructions which were in effect on November 21, 1994. Each Class I railroad, each Class II railroad, and each..., and timetable special instructions before it commences operations. (b) After November 21, 1994, each...

  4. Classes of Heart Failure

    MedlinePlus

    … (table: Class / Objective Assessment)
    Class A: No objective evidence of cardiovascular disease. No symptoms and no limitation in ordinary physical activity.
    Class B: Objective evidence of minimal cardiovascular disease. Mild symptoms and slight limitation during ordinary activity. …

  5. Pea Chaperones under Centrifugation

    NASA Astrophysics Data System (ADS)

    Talalaiev, Oleksandr

    2008-06-01

    Etiolated Pisum sativum seedlings were subjected to altered g-forces by centrifugation (3-14 g). By using semiquantitative RT-PCR, we studied transcripts of pea genes coding for chaperones that are representatives of the small heat shock protein (sHsp) family. Four members from the different classes of sHsps were investigated: cytosolic Hsp17.7 and Hsp18.1 (classes I and II, respectively), chloroplast Hsp21 (class III), and endoplasmic reticulum Hsp22.7 (class IV). We conclude that exposure to 3, 7, 10, and 14 g for 1 h did not affect the level of sHsp transcripts.

  6. Towards an Artificial Space Object Taxonomy

    NASA Astrophysics Data System (ADS)

    Wilkins, M.; Schumacher, P.; Jah, M.; Pfeffer, A.

    2013-09-01

    Object recognition is the first step in positively identifying a resident space object (RSO), i.e. assigning an RSO to a category such as GPS satellite or space debris. Object identification is the process of deciding that two RSOs are in fact one and the same. Provided we have appropriately defined a satellite taxonomy that allows us to place a given RSO into a particular class of object without any ambiguity, one can assess the probability of assignment to a particular class by determining how well the object satisfies the unique criteria of belonging to that class. Ultimately, tree-based taxonomies delineate unique signatures by defining the minimum amount of information required to positively identify a RSO. Therefore, taxonomic trees can be used to depict hypotheses in a Bayesian object recognition and identification process. This work describes a new RSO taxonomy along with specific reasoning behind the choice of groupings. An alternative taxonomy was recently presented at the Sixth Conference on Space Debris in Darmstadt, Germany. [1] The best example of a taxonomy that enjoys almost universal scientific acceptance is the classical Linnaean biological taxonomy. A strength of Linnaean taxonomy is that it can be used to organize the different kinds of living organisms, simply and practically. Every species can be given a unique name. This uniqueness and stability are a result of the acceptance by biologists specializing in taxonomy, not merely of the binomial names themselves. Fundamentally, the taxonomy is governed by rules for the use of these names, and these are laid down in formal Nomenclature Codes. We seek to provide a similar formal nomenclature system for RSOs through a defined tree-based taxonomy structure. Each categorization, beginning with the most general or inclusive, at any level is called a taxon. Taxon names are defined by a type, which can be a specimen or a taxon of lower rank, and a diagnosis, a statement intended to supply characters that differentiate the taxon from others with which it is likely to be confused. Each taxon will have a set of uniquely distinguishing features that will allow one to place a given object into a specific group without any ambiguity. When a new object does not fall into a specific taxon that is already defined, the entire tree structure will need to be evaluated to determine if a new taxon should be created. Ultimately, an online learning process to facilitate tree growth would be desirable. One can assess the probability of assignment to a particular taxon by determining how well the object satisfies the unique criteria of belonging to that taxon. Therefore, we can use taxonomic trees in a Bayesian process to assign prior probabilities to each of our object recognition and identification hypotheses. We will show that this taxonomy is robust by demonstrating specific stressing classification examples. We will also demonstrate how to implement this taxonomy in Figaro, an open source probabilistic programming language.
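
    A toy version of the proposed use of the taxonomy as Bayesian hypotheses follows (taxa, the observed feature, and the likelihood numbers are invented): priors sit on the leaves of the tree and a single observed feature updates the posterior over classes.

    # Taxonomic tree as the hypothesis space for Bayesian object recognition.
    taxonomy = {
        "RSO": {
            "Payload": {"GPS satellite": {}, "Comms satellite": {}},
            "Debris": {"Rocket body": {}, "Fragment": {}},
        }
    }

    def leaves(tree):
        out = []
        for name, children in tree.items():
            if children:
                out.extend(leaves(children))
            else:
                out.append(name)
        return out

    classes = leaves(taxonomy)
    prior = {c: 1.0 / len(classes) for c in classes}

    # P(observation | class) for one feature, e.g. "high area-to-mass ratio"
    likelihood = {"GPS satellite": 0.05, "Comms satellite": 0.05,
                  "Rocket body": 0.30, "Fragment": 0.80}

    post = {c: prior[c] * likelihood[c] for c in classes}
    Z = sum(post.values())
    post = {c: round(p / Z, 3) for c, p in post.items()}
    print(max(post, key=post.get), post)   # Fragment becomes most probable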

  7. Unitals and ovals of symmetric block designs in LDPC and space-time coding

    NASA Astrophysics Data System (ADS)

    Andriamanalimanana, Bruno R.

    2004-08-01

    An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.

  8. Automatic Certification of Kalman Filters for Reliable Code Generation

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd; Schumann, Johann; Richardson, Julian

    2005-01-01

    AUTOFILTER is a tool for automatically deriving Kalman filter code from high-level declarative specifications of state estimation problems. It can generate code with a range of algorithmic characteristics and for several target platforms. The tool has been designed with reliability of the generated code in mind and is able to automatically certify that the code it generates is free from various error classes. Since documentation is an important part of software assurance, AUTOFILTER can also automatically generate various human-readable documents, containing both design and safety related information. We discuss how these features address software assurance standards such as DO-178B.
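
    For reference, here is a hand-written numpy analogue of the kind of filter such a generator emits for a one-dimensional constant-velocity tracking specification; all matrices and noise levels are illustrative assumptions, not AUTOFILTER output.

    # Minimal linear Kalman filter: predict/update for (position, velocity).
    import numpy as np

    dt = 1.0
    F = np.array([[1, dt], [0, 1]])      # state transition
    H = np.array([[1, 0]])               # we observe position only
    Q = 0.01 * np.eye(2)                 # process noise covariance
    R = np.array([[1.0]])                # measurement noise covariance

    x = np.zeros((2, 1))                 # state estimate
    P = np.eye(2)                        # estimate covariance

    rng = np.random.default_rng(0)
    for t in range(20):
        z = np.array([[0.5 * t + rng.normal(0, 1.0)]])   # noisy position
        # predict
        x, P = F @ x, F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P

    print("estimated position/velocity:", x.ravel())     # velocity -> ~0.5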

  9. Deterministic quantum dense coding networks

    NASA Astrophysics Data System (ADS)

    Roy, Saptarshi; Chanda, Titas; Das, Tamoghna; Sen(De), Aditi; Sen, Ujjwal

    2018-07-01

    We consider the scenario of deterministic classical information transmission between multiple senders and a single receiver, when they a priori share a multipartite quantum state - an attempt towards building a deterministic dense coding network. Specifically, we prove that in the case of two or three senders and a single receiver, generalized Greenberger-Horne-Zeilinger (gGHZ) states are not beneficial for sending classical information deterministically beyond the classical limit, except when the shared state is the GHZ state itself. On the other hand, three- and four-qubit generalized W (gW) states with specific parameters as well as the four-qubit Dicke states can provide a quantum advantage of sending the information in deterministic dense coding. Interestingly however, numerical simulations in the three-qubit scenario reveal that the percentage of states from the GHZ-class that are deterministic dense codeable is higher than that of states from the W-class.

  10. Selective inhibitors of trypanosomal uridylyl transferase RET1 establish druggability of RNA post-transcriptional modifications

    PubMed Central

    Cording, Amy; Gormally, Michael; Bond, Peter J.; Carrington, Mark; Balasubramanian, Shankar; Miska, Eric A.; Thomas, Beth

    2017-01-01

    Non-coding RNAs are crucial regulators for a vast array of cellular processes and have been implicated in human disease. These biological processes represent a hitherto untapped resource in our fight against disease. In this work we identify small molecule inhibitors of a non-coding RNA uridylylation pathway. The TUTase family of enzymes is important for modulating non-coding RNA pathways in both human cancer and pathogen systems. We demonstrate that this new class of drug target can be accessed with traditional drug discovery techniques. Using the Trypanosoma brucei TUTase, RET1, we identify TUTase inhibitors and lay the groundwork for the use of this new target class as a therapeutic opportunity for the under-served disease area of African Trypanosomiasis. In a broader sense this work demonstrates the therapeutic potential for targeting RNA post-transcriptional modifications with small molecules in human disease. PMID:26786754

  11. Selective inhibitors of trypanosomal uridylyl transferase RET1 establish druggability of RNA post-transcriptional modifications.

    PubMed

    Cording, Amy; Gormally, Michael; Bond, Peter J; Carrington, Mark; Balasubramanian, Shankar; Miska, Eric A; Thomas, Beth

    2017-05-04

    Non-coding RNAs are crucial regulators for a vast array of cellular processes and have been implicated in human disease. These biological processes represent a hitherto untapped resource in our fight against disease. In this work we identify small molecule inhibitors of a non-coding RNA uridylylation pathway. The TUTase family of enzymes is important for modulating non-coding RNA pathways in both human cancer and pathogen systems. We demonstrate that this new class of drug target can be accessed with traditional drug discovery techniques. Using the Trypanosoma brucei TUTase, RET1, we identify TUTase inhibitors and lay the groundwork for the use of this new target class as a therapeutic opportunity for the under-served disease area of African Trypanosomiasis. In a broader sense this work demonstrates the therapeutic potential for targeting RNA post-transcriptional modifications with small molecules in human disease.

  12. Rate-distortion optimized tree-structured compression algorithms for piecewise polynomial images.

    PubMed

    Shukla, Rahul; Dragotti, Pier Luigi; Do, Minh N; Vetterli, Martin

    2005-03-01

    This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune and join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior (D(R) ≈ c0·2^(-c1·R)), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is of O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree-coding scheme also achieves an exponentially decaying R-D behavior for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree-coding scheme outperforms JPEG2000 by about 1 dB for real images, like cameraman, at low rates of around 0.15 bpp.
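
    A compact sketch of the prune step in one dimension follows; the rate model, λ, and the test signal are invented, and the join step is omitted. Segments are split dyadically, fitted with polynomials, and children are kept only when their total Lagrangian cost beats the parent's.

    # R-D pruned binary-tree fit of a piecewise polynomial (simplified).
    import numpy as np

    LAMBDA, DEG, BITS_PER_SEG = 0.01, 2, 32.0   # assumed R-D constants

    def cost(x, y):
        """Lagrangian cost (distortion + lambda * rate) of one segment."""
        c = np.polyfit(x, y, DEG)
        distortion = float(np.sum((np.polyval(c, x) - y) ** 2))
        return distortion + LAMBDA * BITS_PER_SEG

    def prune(x, y, depth=4):
        """Dyadic split; keep children only if they beat the parent."""
        if depth == 0 or len(x) < 2 * (DEG + 1):
            return [(x[0], x[-1])]
        mid = len(x) // 2
        kids = prune(x[:mid], y[:mid], depth - 1) + prune(x[mid:], y[mid:], depth - 1)
        kid_cost = sum(cost(x[(x >= a) & (x <= b)], y[(x >= a) & (x <= b)])
                       for a, b in kids)
        return [(x[0], x[-1])] if cost(x, y) <= kid_cost else kids

    t = np.linspace(0, 1, 256)
    sig = np.where(t < 0.5, t ** 2, 2 - 3 * t)   # two polynomial pieces
    print(prune(t, sig))   # recovers roughly one segment per piece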

  13. Reliability and Validity Study of the Chamorro Assisted Gait Scale for People with Sprained Ankles, Walking with Forearm Crutches

    PubMed Central

    Ridao-Fernández, Carmen; Ojeda, Joaquín; Benítez-Lugo, Marisa; Sevillano, José Luis

    2016-01-01

    Objective The aim of this study was to design and validate a functional assessment scale for assisted gait with forearm crutches (Chamorro Assisted Gait Scale, CHAGS) and to assess its reliability in people with sprained ankles. Design Thirty subjects who suffered from a sprained ankle (first- and second-degree anterior talofibular ligament sprains) were included in the study. A modified Delphi technique was used to obtain the content validity. The selected items were: pelvic and scapular girdle dissociation (1), deviation of the center of gravity (2), crutch inclination (3), step rhythm (4), symmetry of step length (5), cross support (6), simultaneous support of foot and crutch (7), forearm off (8), facing forward (9), and fluency (10). Two raters each viewed video recordings of the subjects' gait twice. The criterion-related validity was determined by correlation between CHAGS and the Coding of eight criteria of qualitative gait analysis (Viel Coding). Internal consistency and inter- and intra-rater reliability were also tested. Results CHAGS showed a high negative correlation with Viel Coding. We obtained good internal consistency, and the intra-class correlation coefficients ranged between 0.97 and 0.99, while the minimal detectable changes were acceptable. Conclusion The CHAGS scale is a valid and reliable tool for assessing assisted gait with crutches in people with sprained ankles walking with partial weight relief of the lower limbs. PMID:27168236

  14. A central compact object in Kes 79: the hypercritical regime and neutrino expectation

    NASA Astrophysics Data System (ADS)

    Bernal, C. G.; Fraija, N.

    2016-11-01

    We present magnetohydrodynamical simulations of strong accretion onto magnetized proto-neutron stars for the Kesteven 79 (Kes 79) scenario. The supernova remnant Kes 79, observed with the Chandra ACIS-I instrument for approximately 8.3 h, is located in the constellation Aquila at a distance of 7.1 kpc in the galactic plane. It is a galactic, very young object with an estimated age of 6 kyr. The Chandra image has revealed, for the first time, a point-like source at the centre of the remnant. The Kes 79 compact remnant belongs to a special class of objects, the so-called central compact objects (CCOs), which exhibit no evidence for a surrounding pulsar wind nebula. In this work, we show that the submergence of the magnetic field during the hypercritical phase can explain such behaviour for Kes 79 and other CCOs. The simulations of this regime were carried out with the adaptive-mesh-refinement code FLASH in two spatial dimensions, including radiative loss by neutrinos and an adequate equation of state for the regime. From the simulations, we estimate that the number of thermal neutrinos expected at the Hyper-Kamiokande experiment is 733 ± 364. In addition, we compute the flavour ratio on Earth for a progenitor model.

  15. Evaluating Coding Accuracy in General Surgery Residents' Accreditation Council for Graduate Medical Education Procedural Case Logs.

    PubMed

    Balla, Fadi; Garwe, Tabitha; Motghare, Prasenjeet; Stamile, Tessa; Kim, Jennifer; Mahnken, Heidi; Lees, Jason

    The Accreditation Council for Graduate Medical Education (ACGME) case log captures resident operative experience based on Current Procedural Terminology (CPT) codes and is used to track operative experience during residency. With increasing emphasis on resident operative experiences, coding is more important than ever. It has been shown in other surgical specialties at similar institutions that the residents' ACGME case log may not accurately reflect their operative experience. What barriers may influence this remains unclear. As the only objective measure of resident operative experience, an accurate case log is paramount in representing one's operative experience. This study aims to determine the accuracy of procedural coding by general surgical residents at a single institution. Data were collected from 2 consecutive graduating classes of surgical residents' ACGME case logs from 2008 to 2014. A total of 5799 entries from 7 residents were collected. The CPT codes entered by residents were compared to departmental billing records submitted by the attending surgeon for each procedure. Assigned CPT codes by institutional American Academy of Professional Coders certified abstract coders were considered the "gold standard." A total of 4356 (75.12%) of 5799 entries were identified in billing records. Excel 2010 and SAS 9.3 were used for analysis. In the event of multiple codes for the same patient, any match between resident codes and billing record codes was considered a "correct" entry. A 4-question survey was distributed to all current general surgical residents at our institution for feedback on coding habits, limitations to accurate coding, and opinions on ACGME case log representation of their operative experience. All 7 residents had a low percentage of correctly entered CPT codes. The overall accuracy proportion for all residents was 52.82% (range: 43.32%-60.07%). Only 1 resident showed significant improvement in accuracy during his/her training (p = 0.0043). The survey response rate was 100%. Survey results indicated that inability to find the precise code within the ACGME search interface and unfamiliarity with available CPT codes were by far the most common perceived barriers to accuracy. Survey results also indicated that most residents (74%) believe that they code accurately most of the time and agree that their case log would accurately represent their operative experience (66.6%). This is the first study to evaluate correctness of residents' ACGME case logs in general surgery. The degree of inaccuracy found here necessitates further investigation into the etiology of these discrepancies. Instruction on coding practices should also benefit the residents after graduation. Optimizing communication among attendings and residents, improving ACGME coding search interface, and implementing consistent coding practices could improve accuracy giving a more realistic view of residents' operative experience. Published by Elsevier Inc.

  16. Identification of small non-coding RNA classes expressed in swine whole blood during HP-PRRSV infection

    USDA-ARS?s Scientific Manuscript database

    It has been established that reduced susceptibility to porcine reproductive and respiratory syndrome virus (PRRSV) has a genetic component. This genetic component may take the form of small non-coding RNAs (sncRNA), which are molecules that function as regulators of gene expression. Various sncRNAs ...

  17. Identification and characterization of long non-coding RNAs in rainbow trout eggs

    USDA-ARS?s Scientific Manuscript database

    Long non-coding RNAs (lncRNAs) are in general considered a diverse class of transcripts longer than 200 nucleotides that structurally resemble mRNAs but do not encode proteins. Recent advances in RNA sequencing (RNA-Seq) and bioinformatics methods have provided an opportunity to identify and ana...

  18. Up to code: does your company's conduct meet world-class standards?

    PubMed

    Paine, Lynn; Deshpandé, Rohit; Margolis, Joshua D; Bettcher, Kim Eric

    2005-12-01

    Codes of conduct have long been a feature of corporate life. Today, they are arguably a legal necessity--at least for public companies with a presence in the United States. But the issue goes beyond U.S. legal and regulatory requirements. Sparked by corruption and excess of various types, dozens of industry, government, investor, and multisector groups worldwide have proposed codes and guidelines to govern corporate behavior. These initiatives reflect an increasingly global debate on the nature of corporate legitimacy. Given the legal, organizational, reputational, and strategic considerations, few companies will want to be without a code. But what should it say? Apart from a handful of essentials spelled out in Sarbanes-Oxley regulations and NYSE rules, authoritative guidance is sorely lacking. In search of some reference points for managers, the authors undertook a systematic analysis of a select group of codes. In this article, they present their findings in the form of a "codex," a reference source on code content. The Global Business Standards Codex contains a set of overarching principles as well as a set of conduct standards for putting those principles into practice. The GBS Codex is not intended to be adopted as is, but is meant to be used as a benchmark by those wishing to create their own world-class code. The provisions of the codex must be customized to a company's specific business and situation; individual companies' codes will include their own distinctive elements as well. What the codex provides is a starting point grounded in ethical fundamentals and aligned with an emerging global consensus on basic standards of corporate behavior.

  19. Identification of small non-coding RNA classes expressed in swine whole blood during HP-PRRSV infection.

    PubMed

    Fleming, Damarius S; Miller, Laura C

    2018-04-01

    It has been established that reduced susceptibility to porcine reproductive and respiratory syndrome virus (PRRSV) has a genetic component. This genetic component may take the form of small non-coding RNAs (sncRNA), molecules that function as regulators of gene expression. Various sncRNAs have emerged as having an important role in the human immune system. This study uses transcriptomic read counts to profile the type and quantity of both well-characterized and lesser-characterized sncRNAs, such as microRNAs and small nucleolar RNAs, in order to identify and quantify the classes of sncRNA expressed in whole blood in healthy versus highly pathogenic PRRSV-infected pigs. Our results provide evidence of nine classes of sncRNA, four of which were consistently statistically significantly different based on Fisher's exact test, that can be detected and possibly interrogated for their effect on host dysregulation during PRRSV infections. Published by Elsevier Inc.

  20. A Framework for Distributed Mixed Language Scientific Applications

    NASA Astrophysics Data System (ADS)

    Quarrie, D. R.

    The Object Management Group has defined an architecture (CORBA) for distributed object applications based on an Object Request Broker and Interface Definition Language. This project builds upon this architecture to establish a framework for the creation of mixed language scientific applications. A prototype compiler has been written that generates FORTRAN 90 or Eiffel stubs and skeletons and the required C++ glue code from an input IDL file that specifies object interfaces. This generated code can be used directly for non-distributed mixed language applications or in conjunction with the C++ code generated from a commercial IDL compiler for distributed applications. A feasibility study is presently underway to see whether a fully integrated software development environment for distributed, mixed-language applications can be created by modifying the back-end code generator of a commercial CASE tool to emit IDL.
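
    To make the stub/skeleton idea concrete, here is a sketch of the kind of C++ glue such an IDL compiler might emit for a one-operation interface; the interface, names, and the stand-in implementation are hypothetical, not taken from the project:

        #include <iostream>

        // Hypothetical IDL (illustration only):
        //   interface Integrator { double step(in double dt); };

        // Stand-in for the Fortran 90 implementation the glue would bind to.
        extern "C" double integrator_step(double dt) { return 0.5 * dt; }

        class IntegratorSkeleton {   // server side: dispatches into the implementation
        public:
            double step(double dt) { return integrator_step(dt); }
        };

        class IntegratorStub {       // client side: what calling code links against
        public:
            explicit IntegratorStub(IntegratorSkeleton& impl) : impl_(impl) {}
            // In a distributed build this would marshal dt through the ORB; in the
            // non-distributed case described above it simply forwards the call.
            double step(double dt) { return impl_.step(dt); }
        private:
            IntegratorSkeleton& impl_;
        };

        int main() {
            IntegratorSkeleton skel;
            IntegratorStub integrator(skel);
            std::cout << integrator.step(0.01) << '\n';
        }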

  1. Big Software for SmallSats: Adapting cFS to CubeSat Missions

    NASA Technical Reports Server (NTRS)

    Cudmore, Alan P.; Crum, Gary Alex; Sheikh, Salman; Marshall, James

    2015-01-01

    Expanding capabilities and mission objectives for SmallSats and CubeSats is driving the need for reliable, reusable, and robust flight software. While missions are becoming more complicated and the scientific goals more ambitious, the level of acceptable risk has decreased. Design challenges are further compounded by budget and schedule constraints that have not kept pace. NASA's Core Flight Software System (cFS) is an open source solution which enables teams to build flagship satellite level flight software within a CubeSat schedule and budget. NASA originally developed cFS to reduce mission and schedule risk for flagship satellite missions by increasing code reuse and reliability. The Lunar Reconnaissance Orbiter, which launched in 2009, was the first of a growing list of Class B rated missions to use cFS.

  2. Trellis phase codes for power-bandwidth efficient satellite communications

    NASA Technical Reports Server (NTRS)

    Wilson, S. G.; Highfill, J. H.; Hsu, C. D.; Harkness, R.

    1981-01-01

    Support work on improved power and spectrum utilization on digital satellite channels was performed. Specific attention is given to the class of signalling schemes known as continuous phase modulation (CPM). The specific work described in this report addresses: analytical bounds on error probability for multi-h phase codes, power and bandwidth characterization of 4-ary multi-h codes, and initial results of channel simulation to assess the impact of band limiting filters and nonlinear amplifiers on CPM performance.
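
    For context, the phase trajectory of a multi-h CPM signal, the signalling class studied here, can be written in standard textbook notation (not taken from the report):

        \phi(t;\mathbf{a}) = 2\pi \sum_{i \le n} h_{(i \bmod K)}\, a_i\, q(t - iT),
        \qquad q(t) = \int_0^{t} g(\tau)\, d\tau,

    where the modulation index h cycles through a set of K values (hence "multi-h"), the data symbols satisfy a_i \in \{\pm 1, \pm 3, \dots, \pm(M-1)\}, and the phase response q(t) rises from 0 to 1/2 over L symbol intervals. Cycling the index delays merges in the phase trellis, which is the source of the distance gains of multi-h codes.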

  3. An object-oriented approach to nested data parallelism

    NASA Technical Reports Server (NTRS)

    Sheffler, Thomas J.; Chatterjee, Siddhartha

    1994-01-01

    This paper describes an implementation technique for integrating nested data parallelism into an object-oriented language. Data-parallel programming employs sets of data called 'collections' and expresses parallelism as operations performed over the elements of a collection. When the elements of a collection are also collections, then there is the possibility for 'nested data parallelism.' Few current programming languages support nested data parallelism however. In an object-oriented framework, a collection is a single object. Its type defines the parallel operations that may be applied to it. Our goal is to design and build an object-oriented data-parallel programming environment supporting nested data parallelism. Our initial approach is built upon three fundamental additions to C++. We add new parallel base types by implementing them as classes, and add a new parallel collection type called a 'vector' that is implemented as a template. Only one new language feature is introduced: the 'foreach' construct, which is the basis for exploiting elementwise parallelism over collections. The strength of the method lies in the compilation strategy, which translates nested data-parallel C++ into ordinary C++. Extracting the potential parallelism in nested 'foreach' constructs is called 'flattening' nested parallelism. We show how to flatten 'foreach' constructs using a simple program transformation. Our prototype system produces vector code which has been successfully run on workstations, a CM-2, and a CM-5.
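
    A minimal modern-C++ sketch of the collection idea follows; the 'foreach' construct itself was a language extension, so a lambda stands in for it here, and the flattening transformation is only suggested by iterating over a nested vector:

        #include <iostream>
        #include <vector>

        // Apply op elementwise over a collection (stand-in for 'foreach').
        template <typename T, typename Op>
        void foreach_elem(std::vector<T>& v, Op op) {
            for (auto& x : v) op(x);  // each iteration is independent -> parallelizable
        }

        int main() {
            // A nested collection: a vector of vectors of int.
            std::vector<std::vector<int>> nested = {{1, 2}, {3, 4, 5}};
            // Nested foreach: the flattening transformation would turn this into
            // a single data-parallel operation over all five elements.
            foreach_elem(nested, [](std::vector<int>& row) {
                foreach_elem(row, [](int& x) { x *= 2; });
            });
            for (auto& row : nested)
                for (int x : row) std::cout << x << ' ';  // 2 4 6 8 10
        }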

  4. Multi-class geospatial object detection based on a position-sensitive balancing framework for high spatial resolution remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhong, Yanfei; Han, Xiaobing; Zhang, Liangpei

    2018-04-01

    Multi-class geospatial object detection from high spatial resolution (HSR) remote sensing imagery is attracting increasing attention in a wide range of object-related civil and engineering applications. However, the distribution of objects in HSR remote sensing imagery is location-variable and complicated, and accurately detecting the objects in HSR remote sensing imagery is a critical problem. Owing to the powerful feature extraction and representation capability of deep learning, integrated frameworks combining deep-learning-based region proposal generation and object detection have greatly improved multi-class geospatial object detection for HSR remote sensing imagery. However, because the convolution operation in a convolutional neural network (CNN) is translation-invariant, the localization accuracy of the predicted bounding boxes in the detection stage is easily degraded, even though the performance of the classification stage is seldom affected. This dilemma between translation invariance in the classification stage and translation variance in the object detection stage has not been addressed for HSR remote sensing imagery, and it causes position-accuracy problems for multi-class geospatial object detection with region proposal generation and object detection. To further improve the performance of the integrated region proposal generation and object detection framework for HSR remote sensing imagery, a position-sensitive balancing (PSB) framework is proposed in this paper for multi-class geospatial object detection from HSR remote sensing imagery. The proposed PSB framework takes full advantage of a fully convolutional network (FCN), on the basis of a residual network, to resolve the dilemma between translation invariance in the classification stage and translation variance in the object detection stage. In addition, a pre-training mechanism is utilized to accelerate the training procedure and increase the robustness of the proposed algorithm. The proposed algorithm is validated on a publicly available 10-class object detection dataset.

  5. New nonbinary quantum codes with larger distance constructed from BCH codes over 𝔽q2

    NASA Astrophysics Data System (ADS)

    Xu, Gen; Li, Ruihu; Fu, Qiang; Ma, Yuena; Guo, Luobin

    2017-03-01

    This paper concentrates on the construction of new nonbinary quantum error-correcting codes (QECCs) from three classes of narrow-sense imprimitive BCH codes over the finite field 𝔽q2 (q ≥ 3 an odd prime power). Through a careful analysis of the properties of cyclotomic cosets in the defining set T of these BCH codes, the maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing BCH codes is shown to be much larger than the bound given by Aly et al. [S. A. Aly, A. Klappenecker and P. K. Sarvepalli, IEEE Trans. Inf. Theory 53, 1183 (2007)] for each code length. Families of new nonbinary QECCs are thus constructed, and the newly obtained QECCs have larger distance than those in the previous literature.
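
    For reference, the Hermitian construction underlying such work is the following standard result (stated here from general knowledge, not from this abstract): if a classical code over 𝔽q2 contains its Hermitian dual, a quantum code of the same length exists,

        C^{\perp_H} \subseteq C,\quad C = [n,\, k,\, d]_{q^2}
        \;\Longrightarrow\; \exists\; [[\,n,\; 2k - n,\; \ge d\,]]_q ,

    so enlarging the designed distance d of the dual-containing BCH code directly enlarges the guaranteed distance of the resulting QECC.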

  6. Use of ATC to describe duplicate medications in primary care prescriptions.

    PubMed

    Lim, Chiao Mei; Aryani Md Yusof, Faridah; Selvarajah, Sharmini; Lim, Teck Onn

    2011-10-01

    We aimed to demonstrate the suitability of the Anatomical Therapeutic Chemical (ATC) classification for describing duplicate drugs and duplicate drug classes in prescription data, and to describe the pattern of duplicates in public and private primary care clinics of Kuala Lumpur, Malaysia. We analyzed prescription data for the year 2005 from all 14 public clinics in Kuala Lumpur, with 12,157 prescriptions, and from a sample of 188 private clinics, with 25,612 prescriptions. As a Level 5 ATC code represents the molecule and Level 4 the pharmacological subgroup, we used repetitions of codes in the same prescription to describe duplicate drugs or duplicate drug classes and compared them between the public and private clinics. At Level 4, prescriptions with duplicate drug classes were 1.46% of all prescriptions in private clinics and 0.04% in public clinics. At Level 5, prescriptions with duplicate drugs were 1.81% for private and 0.95% for public clinics. In private clinics at Level 5, 73.3% of prescriptions with duplicates involved systemic combination drugs; at Level 4, 40.3% involved systemic combination drugs. In the public sector at Level 5, 95.7% of prescriptions with duplicates involved topical products. Repetitions of the same ATC codes were mostly useful for describing duplicate medications; however, we recommend avoiding the use of ATC codes for topical products for this purpose because of ambiguity. Combination products were often involved in duplicate prescribing; redesign of these products might improve prescribing quality. Duplicates occurred more often in private clinics than in public clinics in Malaysia.
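
    A minimal sketch of the duplicate-detection rule described above (repeated ATC codes within one prescription), in C++. A Level 5 ATC code is seven characters and its first five characters give the Level 4 pharmacological subgroup; the sample codes below are illustrative:

        #include <iostream>
        #include <map>
        #include <string>
        #include <vector>

        // True if any ATC code (or code prefix) repeats within one prescription.
        bool hasDuplicates(const std::vector<std::string>& atcCodes,
                           std::size_t prefixLen) {
            std::map<std::string, int> counts;
            for (const auto& code : atcCodes)
                if (++counts[code.substr(0, prefixLen)] > 1) return true;
            return false;
        }

        int main() {
            // Hypothetical prescription: two distinct Level 5 molecules sharing
            // the Level 4 subgroup C09AA (ACE inhibitors).
            std::vector<std::string> rx = {"C09AA02", "C09AA03"};
            std::cout << "Level 5 duplicate: " << hasDuplicates(rx, 7) << '\n';  // 0
            std::cout << "Level 4 duplicate: " << hasDuplicates(rx, 5) << '\n';  // 1
        }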

  7. FEELnc: a tool for long non-coding RNA annotation and its application to the dog transcriptome.

    PubMed

    Wucher, Valentin; Legeai, Fabrice; Hédan, Benoît; Rizk, Guillaume; Lagoutte, Lætitia; Leeb, Tosso; Jagannathan, Vidhya; Cadieu, Edouard; David, Audrey; Lohi, Hannes; Cirera, Susanna; Fredholm, Merete; Botherel, Nadine; Leegwater, Peter A J; Le Béguec, Céline; Fieten, Hille; Johnson, Jeremy; Alföldi, Jessica; André, Catherine; Lindblad-Toh, Kerstin; Hitte, Christophe; Derrien, Thomas

    2017-05-05

    Whole transcriptome sequencing (RNA-seq) has become a standard for cataloguing and monitoring RNA populations. One of the main bottlenecks, however, is to correctly identify the different classes of RNAs among the plethora of reconstructed transcripts, particularly those that will be translated (mRNAs) from the class of long non-coding RNAs (lncRNAs). Here, we present FEELnc (FlExible Extraction of LncRNAs), an alignment-free program that accurately annotates lncRNAs based on a Random Forest model trained with general features such as multi k-mer frequencies and relaxed open reading frames. Benchmarking versus five state-of-the-art tools shows that FEELnc achieves similar or better classification performance on GENCODE and NONCODE data sets. The program also provides specific modules that enable the user to fine-tune classification accuracy, to formalize the annotation of lncRNA classes and to identify lncRNAs even in the absence of a training set of non-coding RNAs. We used FEELnc on a real data set comprising 20 canine RNA-seq samples produced by the European LUPA consortium to substantially expand the canine genome annotation to include 10 374 novel lncRNAs and 58 640 mRNA transcripts. FEELnc moves beyond conventional coding potential classifiers by providing a standardized and complete solution for annotating lncRNAs and is freely available at https://github.com/tderrien/FEELnc. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. Exploring University Teacher Perceptions about Out-of-Class Teamwork

    ERIC Educational Resources Information Center

    Ruiz-Esparza Barajas, Elizabeth; Medrano Vela, Cecilia Araceli; Zepeda Huerta, Jesús Helbert Karim

    2016-01-01

    This study reports on the first stage of a larger joint research project undertaken by five universities in Mexico to explore university teachers' thinking about out-of-class teamwork. Data from interviews were analyzed using open and axial coding. Although results suggest a positive perception towards teamwork, the study unveiled important…

  9. A Discourse Analysis of the Online Mathematics Classroom

    ERIC Educational Resources Information Center

    Offenholley, Kathleen H.

    2012-01-01

    Thirteen online mathematics classes were analyzed using a discourse coding system created by Bellack et al. (1966). Findings suggest that the ratio of teacher-to-student discourse is far lower in online than in face-to-face classes and varies widely from instructor to instructor. A strong positive correlation was shown between instructor posts and…

  10. A New Class of Pandiagonal Squares

    ERIC Educational Resources Information Center

    Loly, P. D.; Steeds, M. J.

    2005-01-01

    An interesting class of purely pandiagonal, i.e. non-magic, whole-number (integer) squares, of orders (row/column dimension) equal to powers of two, which are related to Gray codes and square Karnaugh maps, has been identified. Treated as matrices, these squares possess just two non-zero eigenvalues. The construction of these squares has been automated…

  11. 40 CFR 147.300 - State-administered program-Class II wells.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., call 202-741-6030, or go to: http://www.archives.gov/federal_register/code_of_federal_regulations/ibr... Colorado Assistant Attorney General to the Acting Regional Counsel, EPA Region VIII, “Re: Class II Well...) Letter from Colorado Assistant Attorney General to the Acting Regional Counsel, EPA Region VIII, “Re...

  12. 40 CFR 147.300 - State-administered program-Class II wells.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., call 202-741-6030, or go to: http://www.archives.gov/federal_register/code_of_federal_regulations/ibr... Colorado Assistant Attorney General to the Acting Regional Counsel, EPA Region VIII, “Re: Class II Well...) Letter from Colorado Assistant Attorney General to the Acting Regional Counsel, EPA Region VIII, “Re...

  13. 40 CFR 147.300 - State-administered program-Class II wells.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., call 202-741-6030, or go to: http://www.archives.gov/federal_register/code_of_federal_regulations/ibr... Colorado Assistant Attorney General to the Acting Regional Counsel, EPA Region VIII, “Re: Class II Well...) Letter from Colorado Assistant Attorney General to the Acting Regional Counsel, EPA Region VIII, “Re...

  14. 40 CFR 147.300 - State-administered program-Class II wells.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., call 202-741-6030, or go to: http://www.archives.gov/federal_register/code_of_federal_regulations/ibr... Colorado Assistant Attorney General to the Acting Regional Counsel, EPA Region VIII, “Re: Class II Well...) Letter from Colorado Assistant Attorney General to the Acting Regional Counsel, EPA Region VIII, “Re...

  15. Ensemble coding remains accurate under object and spatial visual working memory load.

    PubMed

    Epstein, Michael L; Emmanouil, Tatiana A

    2017-10-01

    A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However, the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual task design to test the effect of object and spatial visual working memory load on size averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall, our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.

  16. A Study of Incivility in the Iranian Nursing Training System Based on Educators and Students’ Experiences: A Quantitative Content Analysis

    PubMed Central

    Rad, Mostafa; Ildarabadi, Es-hagh; Moharreri, Fatemeh; Moonaghi, Hossein Karimi

    2015-01-01

    Background: It is absolutely essential to know the negative impacts that incivility among students and educators may have on the creation of a suitable teaching-learning environment. Better education of future nurses would improve their service to patients and society. There has been no research in Iran so far on this particular question. This study examines experiences of uncivil or disrespectful behavior from the standpoint of educators and students. Methodology & Methods: A quantitative content analysis was carried out on written responses to open-ended questionnaires. To this end, data from the detailed answers of 640 students and educators were entered into the computer and coded line by line and sentence by sentence. Implied codes were then added, categories were derived, and the frequency of codes in each category was counted. Results: The categories that students most often identified as uncivil behavior were wasting class time, distraction, incompetence in managing the class, discrimination, unfair assessment, and insults and threats on the part of educators. In contrast, what the educators regarded as disrespectful included class disorder, humiliation of other students, irregular attendance, bad sitting postures, non-observance of Islamic standards, and coming to class unprepared. Conclusion: From the viewpoint of both students and educators, incivility towards one another is present in the academic environment, and this study identifies its most important forms from their standpoints. Since disrespectful and threatening behavior has a significant impact on the learning environment, we recommend that future studies thoroughly examine the origins of such behaviors and strategies for managing them. PMID:25716390

  17. Sequences of 95 human MHC haplotypes reveal extreme coding variation in genes other than highly polymorphic HLA class I and II

    PubMed Central

    Norman, Paul J.; Norberg, Steven J.; Guethlein, Lisbeth A.; Nemat-Gorgani, Neda; Royce, Thomas; Wroblewski, Emily E.; Dunn, Tamsen; Mann, Tobias; Alicata, Claudia; Hollenbach, Jill A.; Chang, Weihua; Shults Won, Melissa; Gunderson, Kevin L.; Abi-Rached, Laurent; Ronaghi, Mostafa; Parham, Peter

    2017-01-01

    The most polymorphic part of the human genome, the MHC, encodes over 160 proteins of diverse function. Half of them, including the HLA class I and II genes, are directly involved in immune responses. Consequently, the MHC region strongly associates with numerous diseases and clinical therapies. Notoriously, the MHC region has been intractable to high-throughput analysis at complete sequence resolution, and current reference haplotypes are inadequate for large-scale studies. To address these challenges, we developed a method that specifically captures and sequences the 4.8-Mbp MHC region from genomic DNA. For 95 MHC homozygous cell lines we assembled, de novo, a set of high-fidelity contigs and a sequence scaffold, representing a mean 98% of the target region. Included are six alternative MHC reference sequences of the human genome that we completed and refined. Characterization of the sequence and structural diversity of the MHC region shows the approach accurately determines the sequences of the highly polymorphic HLA class I and HLA class II genes and the complex structural diversity of complement factor C4A/C4B. It has also uncovered extensive and unexpected diversity in other MHC genes; an example is MUC22, which encodes a lung mucin and exhibits more coding sequence alleles than any HLA class I or II gene studied here. More than 60% of the coding sequence alleles analyzed were previously uncharacterized. We have created a substantial database of robust reference MHC haplotype sequences that will enable future population scale studies of this complicated and clinically important region of the human genome. PMID:28360230

  18. Wide-Field Infrared Survey Explorer Observations of Young Stellar Objects in the Lynds 1509 Dark Cloud in Auriga

    NASA Technical Reports Server (NTRS)

    Liu, Wilson M.; Padgett, Deborah L.; Terebey, Susan; Angione, John; Rebull, Luisa M.; McCollum, Bruce; Fajardo-Acosta, Sergio; Leisawitz, David

    2015-01-01

    The Wide-Field Infrared Survey Explorer (WISE) has uncovered a striking cluster of young stellar object (YSO) candidates associated with the L1509 dark cloud in Auriga. The WISE observations, at 3.4, 4.6, 12, and 22 microns, show a number of objects with colors consistent with YSOs, and their spectral energy distributions suggest the presence of circumstellar dust emission, including numerous Class I, flat spectrum, and Class II objects. In general, the YSOs in L1509 are much more tightly clustered than YSOs in other dark clouds in the Taurus-Auriga star forming region, with Class I and flat spectrum objects confined to the densest aggregates, and Class II objects more sparsely distributed. We estimate a most probable distance of 485-700 pc, and possibly as far as the previously estimated distance of 2 kpc.

  19. Objects, Numbers, Fingers, Space: Clustering of Ventral and Dorsal Functions in Young Children and Adults

    ERIC Educational Resources Information Center

    Chinello, Alessandro; Cattani, Veronica; Bonfiglioli, Claudia; Dehaene, Stanislas; Piazza, Manuela

    2013-01-01

    In the primate brain, sensory information is processed along two partially segregated cortical streams: the ventral stream, mainly coding for objects' shape and identity, and the dorsal stream, mainly coding for objects' quantitative information (including size, number, and spatial position). Neurophysiological measures indicate that such…

  20. A Scalable Architecture of a Structured LDPC Decoder

    NASA Technical Reports Server (NTRS)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2dB implementation loss relative to a floating point decoder.
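
    A sketch of the protograph "copy-and-permute" (lifting) step the abstract refers to: each edge of the small base graph is replicated Z times, with a permutation tying the copies together. Everything below (the base matrix, Z, and the circulant shift schedule) is a toy illustration, not the paper's design:

        #include <iostream>
        #include <vector>

        using Matrix = std::vector<std::vector<int>>;

        // Lift an r x n protograph base matrix into a (Z*r) x (Z*n) parity-check
        // matrix, replacing each 1 by a Z x Z circulant permutation (shift s).
        Matrix lift(const Matrix& base, int Z) {
            int r = base.size(), n = base[0].size();
            Matrix H(Z * r, std::vector<int>(Z * n, 0));
            int shift = 0;
            for (int i = 0; i < r; ++i)
                for (int j = 0; j < n; ++j)
                    if (base[i][j]) {
                        int s = shift++ % Z;              // toy shift schedule
                        for (int k = 0; k < Z; ++k)
                            H[Z * i + k][Z * j + (k + s) % Z] = 1;
                    }
            return H;
        }

        int main() {
            Matrix base = {{1, 1, 1, 0},                  // toy (n=4, r=2) protograph
                           {0, 1, 1, 1}};
            Matrix H = lift(base, 3);                     // Z = 3: 12 bits, 6 checks
            for (auto& row : H) {
                for (int b : row) std::cout << b;
                std::cout << '\n';
            }
        }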

  1. Game-Based Learning Theory

    NASA Technical Reports Server (NTRS)

    Laughlin, Daniel

    2008-01-01

    Persistent Immersive Synthetic Environments (PISE) are not just connection points, they are meeting places. They are the new public squares, village centers, malt shops, malls and pubs all rolled into one. They come with a sense of "thereness" that engages the mind like a real place does. It starts as real code. The code defines "objects." The objects exist in computer space, known as the "grid." The objects and space combine to create a "place." A "world" is created. Before long, the grid and code become obscure, and the "world" maintains focus.

  2. The DaveMLTranslator: An Interface for DAVE-ML Aerodynamic Models

    NASA Technical Reports Server (NTRS)

    Hill, Melissa A.; Jackson, E. Bruce

    2007-01-01

    It can take weeks or months to incorporate a new aerodynamic model into a vehicle simulation and validate the performance of the model. The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) has been proposed as a means to reduce the time required to accomplish this task by defining a standard format for typical components of a flight dynamic model. The purpose of this paper is to describe an object-oriented C++ implementation of a class that interfaces a vehicle subsystem model specified in DAVE-ML and a vehicle simulation. Using the DaveMLTranslator class, aerodynamic or other subsystem models can be automatically imported and verified at run-time, significantly reducing the elapsed time between receipt of a DAVE-ML model and its integration into a simulation environment. The translator performs variable initializations, data table lookups, and mathematical calculations for the aerodynamic build-up, and executes any embedded static check-cases for verification. The implementation is efficient, enabling real-time execution. Simple interface code for the model inputs and outputs is the only requirement to integrate the DaveMLTranslator as a vehicle aerodynamic model. The translator makes use of existing table-lookup utilities from the Langley Standard Real-Time Simulation in C++ (LaSRS++). The design and operation of the translator class is described and comparisons with existing, conventional, C++ aerodynamic models of the same vehicle are given.
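
    The class interface is not spelled out in the abstract; the following sketch only illustrates the described workflow (load a DAVE-ML file, set inputs, evaluate the build-up, read outputs) with hypothetical method names and a toy aerodynamic relation:

        #include <iostream>
        #include <map>
        #include <string>

        // Hypothetical stand-in for the translator described above: parse a
        // DAVE-ML file at construction, then evaluate lookups and equations.
        class DaveMLModel {
        public:
            explicit DaveMLModel(const std::string& path) {
                std::cout << "loading " << path << '\n';   // parsing omitted
            }
            void setInput(const std::string& name, double v) { in_[name] = v; }
            double output(const std::string& name) {
                // Toy build-up: CL = CL0 + CLalpha * alpha (illustration only).
                if (name == "CL") return 0.2 + 5.7 * in_["alpha_rad"];
                return 0.0;
            }
        private:
            std::map<std::string, double> in_;
        };

        int main() {
            DaveMLModel aero("vehicle_aero.dml");   // hypothetical model file
            aero.setInput("alpha_rad", 0.05);
            std::cout << "CL = " << aero.output("CL") << '\n';
        }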

  3. Object class segmentation of RGB-D video using recurrent convolutional neural networks.

    PubMed

    Pavel, Mircea Serban; Schulz, Hannes; Behnke, Sven

    2017-04-01

    Object class segmentation is a computer vision task which requires labeling each pixel of an image with the class of the object it belongs to. Deep convolutional neural networks (DNN) are able to learn and take advantage of local spatial correlations required for this task. They are, however, restricted by their small, fixed-sized filters, which limits their ability to learn long-range dependencies. Recurrent Neural Networks (RNN), on the other hand, do not suffer from this restriction. Their iterative interpretation allows them to model long-range dependencies by propagating activity. This property is especially useful when labeling video sequences, where both spatial and temporal long-range dependencies occur. In this work, a novel RNN architecture for object class segmentation is presented. We investigate several ways to train such a network. We evaluate our models on the challenging NYU Depth v2 dataset for object class segmentation and obtain competitive results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Object-Oriented Approach to Integrating Database Semantics. Volume 4.

    DTIC Science & Technology

    1987-12-01

    Schemata for: 1. Object Classification Schema -- Entities; 2. Object Structure and Relationship Schema -- Relations; 3. Operation Classification and… The way relationships are represented in a database is non-intuitive for naive users. It is difficult to access and combine information in multiple databases. [OCR fragment of a prototype menu: choose items from the CURRENT_CLASSES table; choosing a selected item de-selects it; choose 0 to exit. 1. STUDENTS 2. CURRENT_CLASSES 3. MANAGEMENT_CLASS]

  5. An object oriented code for simulating supersymmetric Yang-Mills theories

    NASA Astrophysics Data System (ADS)

    Catterall, Simon; Joseph, Anosh

    2012-06-01

    We present SUSY_LATTICE - a C++ program that can be used to simulate certain classes of supersymmetric Yang-Mills (SYM) theories, including the well known N=4 SYM in four dimensions, on a flat Euclidean space-time lattice. Discretization of SYM theories is an old problem in lattice field theory. It has resisted solution until recently when new ideas drawn from orbifold constructions and topological field theories have been brought to bear on the question. The result has been the creation of a new class of lattice gauge theories in which the lattice action is invariant under one or more supersymmetries. The resultant theories are local, free of doublers and also possess exact gauge-invariance. In principle they form the basis for a truly non-perturbative definition of the continuum SYM theories. In the continuum limit they reproduce versions of the SYM theories formulated in terms of twisted fields, which on a flat space-time is just a change of the field variables. In this paper, we briefly review these ideas and then go on to provide the details of the C++ code. We sketch the design of the code, with particular emphasis being placed on SYM theories with N=(2,2) in two dimensions and N=4 in three and four dimensions, making one-to-one comparisons between the essential components of the SYM theories and their corresponding counterparts appearing in the simulation code. The code may be used to compute several quantities associated with the SYM theories such as the Polyakov loop, mean energy, and the width of the scalar eigenvalue distributions.
    Program summary
    Program title: SUSY_LATTICE
    Catalogue identifier: AELS_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELS_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 9315
    No. of bytes in distributed program, including test data, etc.: 95 371
    Distribution format: tar.gz
    Programming language: C++
    Computer: PCs and workstations
    Operating system: Any; tested on Linux machines
    Classification: 11.6
    Nature of problem: To compute some of the observables of supersymmetric Yang-Mills theories such as supersymmetric action, Polyakov/Wilson loops, scalar eigenvalues and Pfaffian phases.
    Solution method: We use the Rational Hybrid Monte Carlo algorithm followed by a leapfrog evolution and a Metropolis test. The input parameters of the model are read in from a parameter file.
    Restrictions: This code applies only to supersymmetric gauge theories with extended supersymmetry, which undergo the process of maximal twisting. (See Section 2 of the manuscript for details.)
    Running time: From a few minutes to several hours depending on the amount of statistics needed.
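
    For orientation, the "leapfrog evolution and a Metropolis test" in the solution method is the standard hybrid Monte Carlo accept/reject step. A generic single-variable C++ sketch follows (a toy action, not the program's code, which evolves lattice link fields):

        #include <cmath>
        #include <iostream>
        #include <random>

        // Generic HMC step for a 1-D toy action S(q) = q^2/2 (illustration only).
        double S(double q)  { return 0.5 * q * q; }
        double dS(double q) { return q; }

        int main() {
            std::mt19937 rng(12345);
            std::normal_distribution<double> gauss(0.0, 1.0);
            std::uniform_real_distribution<double> unif(0.0, 1.0);

            double q = 0.0, dt = 0.1;
            int nsteps = 20, accepted = 0, ntraj = 1000;
            for (int t = 0; t < ntraj; ++t) {
                double p = gauss(rng), q0 = q, H0 = 0.5 * p * p + S(q);
                p -= 0.5 * dt * dS(q);                        // leapfrog: half kick
                for (int s = 0; s < nsteps; ++s) {
                    q += dt * p;                              // drift
                    p -= dt * dS(q) * (s < nsteps - 1 ? 1.0 : 0.5);  // kick/half kick
                }
                double H1 = 0.5 * p * p + S(q);
                if (unif(rng) < std::exp(H0 - H1)) ++accepted;       // Metropolis test
                else q = q0;                                  // reject: restore state
            }
            std::cout << "acceptance = " << accepted / double(ntraj) << '\n';
        }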

  6. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  7. 76 FR 36231 - American Society of Mechanical Engineers (ASME) Codes and New and Revised ASME Code Cases

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-21

    ... exceed the test facility limits and reduces the number of functional tests for specific valve designs... addresses reducing the number of functional tests for specific valve designs. The NRC has identified no... the required test pressure for the new Class 1 incompressible-fluid, pressure-relief valve designs...

  8. Final Evaluation of MIPS M/500

    DTIC Science & Technology

    1987-11-01

    …recognizing common subexpressions by changing the code to read: acker(n, m) { if (n == 0) return m+1; return acker(n-1, m == 0 ? 1 : acker(n, m-1)); } …the total code… [Remainder is OCR residue of the DD report form: Software Engineering Institute, Pittsburgh, PA 15213; title: Final Evaluation of MIPS M/500; author: Daniel V…]

  9. The Prediction of Performance in Navy Signalman Class "A" School. TAEG Report No. 90.

    ERIC Educational Resources Information Center

    Mew, Dorothy V.

    A study designed to develop a selection model for the prediction of Signalman performance in sending and receiving Morse code and to evaluate training strategies was conducted with 180 Navy and Coast Guard enlisted men. Trainees were taught to send Morse code using innovative training materials (mnemonics and guided practice). High and average…

  10. Research in Computational Aeroscience Applications Implemented on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Wigton, Larry

    1996-01-01

    Improvements to the numerical linear algebra routines for use in new Navier-Stokes codes, specifically Tim Barth's unstructured-grid code, with spin-offs to TRANAIR, are reported. A fast distance-calculation routine for Navier-Stokes codes using the new one-equation turbulence models was written. The primary focus of this work was on improving matrix-iterative methods. New algorithms have been developed which exploit the full potential of classical Cray-class computers as well as distributed-memory parallel computers.

  11. CrossTalk, The Journal of Defense Software Engineering. Volume 27, Number 4. July/August 2014

    DTIC Science & Technology

    2014-07-01

    …hires him for the work." To do this with a development project, isolate parts of the system the development of which is not obvious and develop… Coupling makes it more difficult to isolate units of code for testing. The interdependence also makes code comprehension more difficult and… When code is tightly coupled, it is more difficult or impossible to isolate the module under test to ensure the test is focusing only on the desired code and producing…

  12. Analysis of quantum error correction with symmetric hypergraph states

    NASA Astrophysics Data System (ADS)

    Wagner, T.; Kampermann, H.; Bruß, D.

    2018-03-01

    Graph states have been used to construct quantum error correction codes for independent errors. Hypergraph states generalize graph states, and symmetric hypergraph states have been shown to allow for the correction of correlated errors. In this paper, it is shown that symmetric hypergraph states are not useful for the correction of independent errors, at least for up to 30 qubits. Furthermore, error correction for error models with protected qubits is explored. A class of known graph codes for this scenario is generalized to hypergraph codes.

  13. Introducing the fit-criteria assessment plot - A visualisation tool to assist class enumeration in group-based trajectory modelling.

    PubMed

    Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria

    2017-10-01

    Background and objective: Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods: An R code that accepts universal data input is presented. The programme condenses relevant group-based trajectory modelling output information on model fit indices into automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate the fit-criteria assessment plot's utility. Results: The fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. The fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions: The fit-criteria assessment plot is an exploratory visualisation tool that can be employed to assist decisions in the initial and decisive phases of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster its adequate use.

  14. OSIRIS - an object-oriented parallel 3D PIC code for modeling laser and particle beam-plasma interaction

    NASA Astrophysics Data System (ADS)

    Hemker, Roy

    1999-11-01

    The advances in computational speed make it now possible to do full 3D PIC simulations of laser plasma and beam plasma interactions, but at the same time the increased complexity of these problems makes it necessary to apply modern approaches like object oriented programming to the development of simulation codes. We report here on our progress in developing an object oriented parallel 3D PIC code using Fortran 90. In its current state the code contains algorithms for 1D, 2D, and 3D simulations in cartesian coordinates and for 2D cylindrically-symmetric geometry. For all of these algorithms the code allows for a moving simulation window and arbitrary domain decomposition for any number of dimensions. Recent 3D simulation results on the propagation of intense laser and electron beams through plasmas will be presented.

  15. How Object-Specific Are Object Files? Evidence for Integration by Location

    ERIC Educational Resources Information Center

    van Dam, Wessel O.; Hommel, Bernhard

    2010-01-01

    Given the distributed representation of visual features in the human brain, binding mechanisms are necessary to integrate visual information about the same perceptual event. It has been assumed that feature codes are bound into object files--pointers to the neural codes of the features of a given event. The present study investigated the…

  16. Object-Oriented Programming in High Schools the Turing Way.

    ERIC Educational Resources Information Center

    Holt, Richard C.

    This paper proposes an approach to introducing object-oriented concepts to high school computer science students using the Object-Oriented Turing (OOT) language. Students can learn about basic object-oriented (OO) principles such as classes and inheritance by using and expanding a collection of classes that draw pictures like circles and happy…
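
    The class-collection idea translates directly to other OO languages. A minimal sketch in C++ (the original is in Object-Oriented Turing; the class names here are illustrative, not from the OOT library) mirroring the picture-drawing classes:

        #include <iostream>

        class Picture {
        public:
            virtual void draw() const = 0;   // each picture knows how to draw itself
            virtual ~Picture() = default;
        };

        class Circle : public Picture {
        public:
            void draw() const override { std::cout << "circle\n"; }
        };

        class HappyFace : public Circle {    // inherits Circle, extends its drawing
        public:
            void draw() const override {
                Circle::draw();
                std::cout << "  ...with eyes and a smile\n";
            }
        };

        int main() {
            const Picture* gallery[] = { new Circle, new HappyFace };
            for (const Picture* p : gallery) p->draw();   // dynamic dispatch
            for (const Picture* p : gallery) delete p;
        }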

  17. Pareto-optimal multi-objective dimensionality reduction deep auto-encoder for mammography classification.

    PubMed

    Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan

    2017-07-01

    Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
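
    In symbols, the optimization described above minimizes the two objectives jointly rather than as a weighted sum (our notation, not the paper's):

        \min_{\theta}\; \bigl( \mathrm{MRE}(\theta),\; \mathrm{MCE}(\theta) \bigr), \qquad
        \mathrm{MRE}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \lVert x_i - \hat{x}_i(\theta) \rVert_2^2 ,

    where a solution \theta_1 dominates \theta_2 when it is no worse in both objectives and strictly better in at least one; the non-dominated sorting genetic algorithm then returns the set of non-dominated (Pareto-optimal) encoders rather than a single compromise.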

  18. A satellite-asteroid mystery and a possible early flux of scattered C-class asteroids

    NASA Technical Reports Server (NTRS)

    Hartmann, William K.

    1987-01-01

    The C spectral class implied by the neutral spectra and low albedo of probably capture-originated satellites orbiting Saturn, Jupiter, and Mars is noted to contradict evidence that class-C objects are native only to the outer half of the asteroid belt. It is presently suggested that Jupiter resonances may have scattered a high flux of C-type objects out of the belt as well as throughout the primordial solar system, at the close of planet accretion, when extended atmospheres could figure in their capture. The largest scattered object fluxes come from the resonance regions primarily populated by C-class objects, lending support to the Pollack et al. (1979) capture scenario invoking extended protoatmospheres.

  19. Metrics and tools for consistent cohort discovery and financial analyses post-transition to ICD-10-CM

    PubMed Central

    Boyd, Andrew D; ‘John’ Li, Jianrong; Kenost, Colleen; Joese, Binoy; Min Yang, Young; Kalagidis, Olympia A; Zenku, Ilir; Saner, Donald; Bahroos, Neil; Lussier, Yves A

    2015-01-01

    In the United States, International Classification of Disease Clinical Modification (ICD-9-CM, the ninth revision) diagnosis codes are commonly used to identify patient cohorts and to conduct financial analyses related to disease. In October 2015, the healthcare system of the United States will transition to ICD-10-CM (the tenth revision) diagnosis codes. One challenge posed to clinical researchers and other analysts is conducting diagnosis-related queries across datasets containing both coding schemes. Further, healthcare administrators will manage growth, trends, and strategic planning with these dually-coded datasets. The majority of the ICD-9-CM to ICD-10-CM translations are complex and nonreciprocal, creating convoluted representations and meanings. Similarly, mapping back from ICD-10-CM to ICD-9-CM is equally complex, yet different from mapping forward, as relationships are likewise nonreciprocal. Indeed, 10 of the 21 top clinical categories are complex as 78% of their diagnosis codes are labeled as “convoluted” by our analyses. Analysis and research related to external causes of morbidity, injury, and poisoning will face the greatest challenges due to 41 745 (90%) convolutions and a decrease in the number of codes. We created a web portal tool and translation tables to list all ICD-9-CM diagnosis codes related to the specific input of ICD-10-CM diagnosis codes and their level of complexity: “identity” (reciprocal), “class-to-subclass,” “subclass-to-class,” “convoluted,” or “no mapping.” These tools provide guidance on ambiguous and complex translations to reveal where reports or analyses may be challenging to impossible. Web portal: http://www.lussierlab.org/transition-to-ICD9CM/ Tables annotated with levels of translation complexity: http://www.lussierlab.org/publications/ICD10to9 PMID:25681260
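
    A sketch of how the complexity labels above can be assigned from forward and backward code maps. The category names come from the paper; the map representation, the sample code pair, and the reading of which direction counts as "class-to-subclass" are our assumptions:

        #include <iostream>
        #include <map>
        #include <set>
        #include <string>

        using MultiMap = std::map<std::string, std::set<std::string>>;

        // Label an ICD-9-CM code by the shape of its forward (9->10) and
        // backward (10->9) mappings, following the categories in the text.
        std::string complexity(const std::string& icd9,
                               const MultiMap& fwd, const MultiMap& bwd) {
            auto it = fwd.find(icd9);
            if (it == fwd.end() || it->second.empty()) return "no mapping";
            const auto& targets = it->second;
            if (targets.size() == 1) {
                const std::string& icd10 = *targets.begin();
                const auto& back = bwd.at(icd10);
                if (back.size() == 1 && *back.begin() == icd9) return "identity";
                return "subclass-to-class";  // one 10-code gathers several 9-codes
            }
            // One 9-code fans out; if every target maps straight back it is a
            // clean class-to-subclass split, otherwise the translation is convoluted.
            for (const auto& icd10 : targets)
                if (bwd.at(icd10).size() != 1) return "convoluted";
            return "class-to-subclass";
        }

        int main() {
            MultiMap fwd = {{"003.0", {"A02.0"}}};   // hypothetical reciprocal pair
            MultiMap bwd = {{"A02.0", {"003.0"}}};
            std::cout << complexity("003.0", fwd, bwd) << '\n';  // identity
        }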

  20. Code Verification Capabilities and Assessments in Support of ASC V&V Level 2 Milestone #6035

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William; Budzien, Joanne Louise; Ferguson, Jim Michael

    This document provides a summary of the code verification activities supporting the FY17 Level 2 V&V milestone entitled “Deliver a Capability for V&V Assessments of Code Implementations of Physics Models and Numerical Algorithms in Support of Future Predictive Capability Framework Pegposts.” The physics validation activities supporting this milestone are documented separately. The objectives of this portion of the milestone are: 1) Develop software tools to support code verification analysis; 2) Document standard definitions of code verification test problems; and 3) Perform code verification assessments (focusing on error behavior of algorithms). This report and a set of additional standalone documents serve as the compilation of results demonstrating accomplishment of these objectives.

  1. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
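
    As a flavor of the "basic random sampling" exercise mentioned at the end, here is a minimal sketch: sampling exponentially distributed free-flight distances by the inverse-CDF method, a standard technique; the cross-section value is arbitrary:

        #include <cmath>
        #include <iostream>
        #include <random>

        int main() {
            // Sample flight distances s with p(s) = Sigma_t * exp(-Sigma_t * s)
            // via the inverse CDF: s = -ln(xi)/Sigma_t, xi uniform in (0,1].
            const double sigma_t = 0.7;   // total macroscopic cross section (1/cm)
            std::mt19937_64 rng(2024);
            std::uniform_real_distribution<double> u(std::nextafter(0.0, 1.0), 1.0);

            double sum = 0.0;
            const int N = 1'000'000;
            for (int i = 0; i < N; ++i) sum += -std::log(u(rng)) / sigma_t;
            // The sample mean should approach the mean free path 1/Sigma_t.
            std::cout << "mean free path estimate: " << sum / N << " cm\n";
        }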

  2. MHC class I-associated peptides derive from selective regions of the human genome.

    PubMed

    Pearson, Hillary; Daouda, Tariq; Granados, Diana Paola; Durette, Chantal; Bonneil, Eric; Courcelles, Mathieu; Rodenbrock, Anja; Laverdure, Jean-Philippe; Côté, Caroline; Mader, Sylvie; Lemieux, Sébastien; Thibault, Pierre; Perreault, Claude

    2016-12-01

    MHC class I-associated peptides (MAPs) define the immune self for CD8+ T lymphocytes and are key targets of cancer immunosurveillance. Here, the goals of our work were to determine whether the entire set of protein-coding genes could generate MAPs and whether specific features influence the ability of discrete genes to generate MAPs. Using proteogenomics, we have identified 25,270 MAPs isolated from the B lymphocytes of 18 individuals who collectively expressed 27 high-frequency HLA-A,B allotypes. The entire MAP repertoire presented by these 27 allotypes covered only 10% of the exomic sequences expressed in B lymphocytes. Indeed, 41% of expressed protein-coding genes generated no MAPs, while 59% of genes generated up to 64 MAPs, often derived from adjacent regions and presented by different allotypes. We next identified several features of transcripts and proteins associated with efficient MAP production. From these data, we built a logistic regression model that predicts with good accuracy whether a gene generates MAPs. Our results show preferential selection of MAPs from a limited repertoire of proteins with distinctive features. The notion that the MHC class I immunopeptidome presents only a small fraction of the protein-coding genome for monitoring by the immune system has profound implications in autoimmunity and cancer immunology.

  3. MHC class I–associated peptides derive from selective regions of the human genome

    PubMed Central

    Pearson, Hillary; Granados, Diana Paola; Durette, Chantal; Bonneil, Eric; Courcelles, Mathieu; Rodenbrock, Anja; Laverdure, Jean-Philippe; Côté, Caroline; Thibault, Pierre

    2016-01-01

    MHC class I–associated peptides (MAPs) define the immune self for CD8+ T lymphocytes and are key targets of cancer immunosurveillance. Here, the goals of our work were to determine whether the entire set of protein-coding genes could generate MAPs and whether specific features influence the ability of discrete genes to generate MAPs. Using proteogenomics, we have identified 25,270 MAPs isolated from the B lymphocytes of 18 individuals who collectively expressed 27 high-frequency HLA-A,B allotypes. The entire MAP repertoire presented by these 27 allotypes covered only 10% of the exomic sequences expressed in B lymphocytes. Indeed, 41% of expressed protein-coding genes generated no MAPs, while 59% of genes generated up to 64 MAPs, often derived from adjacent regions and presented by different allotypes. We next identified several features of transcripts and proteins associated with efficient MAP production. From these data, we built a logistic regression model that predicts with good accuracy whether a gene generates MAPs. Our results show preferential selection of MAPs from a limited repertoire of proteins with distinctive features. The notion that the MHC class I immunopeptidome presents only a small fraction of the protein-coding genome for monitoring by the immune system has profound implications in autoimmunity and cancer immunology. PMID:27841757

  4. Genetic Code Optimization for Cotranslational Protein Folding: Codon Directional Asymmetry Correlates with Antiparallel Betasheets, tRNA Synthetase Classes.

    PubMed

    Seligmann, Hervé; Warthi, Ganesh

    2017-01-01

    A new codon property, codon directional asymmetry in nucleotide content (CDA), reveals a biologically meaningful genetic code dimension: palindromic codons (first and last nucleotides identical, codon structure XZX) are symmetric (CDA = 0), codons with structures ZXX/XXZ are 5'/3' asymmetric (CDA = - 1/1; CDA = - 0.5/0.5 if Z and X are both purines or both pyrimidines, assigning negative/positive (-/+) signs is an arbitrary convention). Negative/positive CDAs associate with (a) Fujimoto's tetrahedral codon stereo-table; (b) tRNA synthetase class I/II (aminoacylate the 2'/3' hydroxyl group of the tRNA's last ribose, respectively); and (c) high/low antiparallel (not parallel) betasheet conformation parameters. Preliminary results suggest CDA-whole organism associations (body temperature, developmental stability, lifespan). Presumably, CDA impacts spatial kinetics of codon-anticodon interactions, affecting cotranslational protein folding. Some synonymous codons have opposite CDA sign (alanine, leucine, serine, and valine), putatively explaining how synonymous mutations sometimes affect protein function. Correlations between CDA and tRNA synthetase classes are weaker than between CDA and antiparallel betasheet conformation parameters. This effect is stronger for mitochondrial genetic codes, and potentially drives mitochondrial codon-amino acid reassignments. CDA reveals information ruling nucleotide-protein relations embedded in reversed (not reverse-complement) sequences (5'-ZXX-3'/5'-XXZ-3').
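
    The CDA rule as stated above is mechanical enough to compute directly. A sketch for a single codon, using the abstract's sign convention; codons with three distinct bases are not covered by the abstract, so returning 0 for them is our assumption:

        #include <cctype>
        #include <iostream>
        #include <string>

        bool purine(char b) {
            b = std::toupper(static_cast<unsigned char>(b));
            return b == 'A' || b == 'G';
        }

        // Codon directional asymmetry per the quoted rule: XZX -> 0; ZXX -> -1;
        // XXZ -> +1; magnitude halved when Z and X are both purines or both
        // pyrimidines. Three distinct bases -> 0 here (an assumption).
        double cda(const std::string& c) {
            if (c.size() != 3) return 0.0;
            if (c[0] == c[2]) return 0.0;                        // palindromic XZX
            double sign;
            char z, x;
            if (c[1] == c[2])      { sign = -1.0; z = c[0]; x = c[1]; }  // ZXX
            else if (c[0] == c[1]) { sign = +1.0; z = c[2]; x = c[0]; }  // XXZ
            else return 0.0;                                     // three distinct bases
            double mag = (purine(z) == purine(x)) ? 0.5 : 1.0;
            return sign * mag;
        }

        int main() {
            std::cout << cda("GAG") << ' '    //  0   (palindromic)
                      << cda("GCC") << ' '    // -1   (G vs C: purine vs pyrimidine)
                      << cda("AAT") << '\n';  // +1   (A vs T: purine vs pyrimidine)
        }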

  5. 40 CFR 147.2551 - State-administered program-Class II wells.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., call 202-741-6030, or go to: http://www.archives.gov/federal_register/code_of_federal_regulations/ibr..., “Re: Application for Primacy in the Regulation of Class II Injection Wells,” March 8, 1982; (5) Letter from State Oil and Gas Supervisor, Wyoming Oil and Gas Conservation Commission, to EPA Region VIII, “Re...

  6. 40 CFR 147.2551 - State-administered program-Class II wells.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., call 202-741-6030, or go to: http://www.archives.gov/federal_register/code_of_federal_regulations/ibr..., “Re: Application for Primacy in the Regulation of Class II Injection Wells,” March 8, 1982; (5) Letter from State Oil and Gas Supervisor, Wyoming Oil and Gas Conservation Commission, to EPA Region VIII, “Re...

  7. 40 CFR 147.2551 - State-administered program-Class II wells.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., call 202-741-6030, or go to: http://www.archives.gov/federal_register/code_of_federal_regulations/ibr..., “Re: Application for Primacy in the Regulation of Class II Injection Wells,” March 8, 1982; (5) Letter from State Oil and Gas Supervisor, Wyoming Oil and Gas Conservation Commission, to EPA Region VIII, “Re...

  8. 40 CFR 147.2551 - State-administered program-Class II wells.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., call 202-741-6030, or go to: http://www.archives.gov/federal_register/code_of_federal_regulations/ibr..., “Re: Application for Primacy in the Regulation of Class II Injection Wells,” March 8, 1982; (5) Letter from State Oil and Gas Supervisor, Wyoming Oil and Gas Conservation Commission, to EPA Region VIII, “Re...

  9. 40 CFR 147.2551 - State-administered program-Class II wells.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., call 202-741-6030, or go to: http://www.archives.gov/federal_register/code_of_federal_regulations/ibr..., “Re: Application for Primacy in the Regulation of Class II Injection Wells,” March 8, 1982; (5) Letter from State Oil and Gas Supervisor, Wyoming Oil and Gas Conservation Commission, to EPA Region VIII, “Re...

  10. High Performance Object-Oriented Scientific Programming in Fortran 90

    NASA Technical Reports Server (NTRS)

    Norton, Charles D.; Decyk, Viktor K.; Szymanski, Boleslaw K.

    1997-01-01

    We illustrate how Fortran 90 supports object-oriented concepts by example of plasma particle computations on the IBM SP. Our experience shows that Fortran 90 and object-oriented methodology give high performance while providing a bridge from Fortran 77 legacy codes to modern programming principles. All of our object-oriented Fortran 90 codes execute more quickly than the equivalent C++ versions, yet the abstraction modelling capabilities used for scientific programming are comparably powerful.

  11. Modified-hybrid optical neural network filter for multiple object recognition within cluttered scenes

    NASA Astrophysics Data System (ADS)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.

    2009-08-01

    Motivated by the non-linear interpolation and generalization abilities of the hybrid optical neural network filter between the reference and non-reference images of the true-class object, we designed the modified-hybrid optical neural network filter. We applied an optical mask to the hybrid optical neural network filter's input. The mask was built with the constant weight connections of a randomly chosen image included in the training set. The resulting design of the modified-hybrid optical neural network filter is optimized for performing best in cluttered scenes of the true-class object. Due to the shift invariance properties inherited by its correlator unit, the filter can accommodate multiple objects of the same class to be detected within an input cluttered image. Additionally, the architecture of the neural network unit of the general hybrid optical neural network filter allows the recognition of multiple objects of different classes within the input cluttered image by modifying the output layer of the unit. We test the modified-hybrid optical neural network filter for the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The filter is shown to exhibit, with a single pass over the input data, simultaneous out-of-plane rotation and shift invariance and good clutter tolerance. It successfully detects and correctly classifies the true-class objects within background clutter for which there has been no previous training.

  12. Toward Developing a Universal Code of Ethics for Adult Educators.

    ERIC Educational Resources Information Center

    Siegel, Irwin H.

    2000-01-01

    Presents conflicting viewpoints on a universal code of ethics for adult educators. Suggests objectives of a code (guidance for practice, policymaking direction, common reference point, shared values). Outlines content and methods for implementing a code. (SK)

  13. NAS Parallel Benchmarks. 2.4

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We describe a new problem size, called Class D, for the NAS Parallel Benchmarks (NPB), whose MPI source code implementation is being released as NPB 2.4. A brief rationale is given for how the new class is derived. We also describe the modifications made to the MPI (Message Passing Interface) implementation to allow the new class to be run on systems with 32-bit integers, and with moderate amounts of memory. Finally, we give the verification values for the new problem size.

  14. The Long Non-Coding RNA Transcriptome Landscape in CHO Cells Under Batch and Fed-Batch Conditions.

    PubMed

    Vito, Davide; Smales, C Mark

    2018-05-21

    The role of non-coding RNAs in determining growth, productivity and recombinant product quality attributes in Chinese hamster ovary (CHO) cells has received much attention in recent years, exemplified by studies into microRNAs in particular. However, other classes of non-coding RNAs have received less attention. One such class is the non-coding RNAs known collectively as long non-coding RNAs (lncRNAs). We have undertaken the first landscape analysis of the lncRNA transcriptome in CHO using a mouse-based microarray that also provided for the surveillance of the coding transcriptome. We report on those lncRNAs present in a model host CHO cell line under batch and fed-batch conditions on two different days and relate the expression of different lncRNAs to each other. We demonstrate that the mouse microarray was suitable for the detection and analysis of thousands of CHO lncRNAs and validated a number of these by qRT-PCR. We then further analysed the data to identify those lncRNAs whose expression changed the most between growth and stationary phases of culture, or between batch and fed-batch culture, to identify potential lncRNA targets for further functional studies with regard to their role in controlling growth of CHO cells. We discuss the implications of publishing this rich dataset and how it may be used by the community. This article is protected by copyright. All rights reserved.

  15. Objective speech quality assessment and the RPE-LTP coding algorithm in different noise and language conditions.

    PubMed

    Hansen, J H; Nandkumar, S

    1995-01-01

    The formulation of reliable signal processing algorithms for speech coding and synthesis requires the selection of an a priori criterion of performance. Though coding efficiency (bits/second) or computational requirements can be used, a final performance measure must always include speech quality. In this paper, three objective speech quality measures are considered with respect to quality assessment for American English, noisy American English, and noise-free versions of seven languages. The purpose is to determine whether objective quality measures can be used to quantify changes in quality for a given voice coding method, with a known subjective performance level, as background noise or language conditions are changed. The speech coding algorithm chosen is regular-pulse excitation with long-term prediction (RPE-LTP), which has been chosen as the standard voice compression algorithm for the European Digital Mobile Radio system. Three areas are considered for objective quality assessment: (i) vocoder performance for American English in a noise-free environment, (ii) speech quality variation for three additive background noise sources, and (iii) noise-free performance for seven languages, namely English, Japanese, Finnish, German, Hindi, Spanish, and French. It is suggested that although existing objective quality measures will never replace subjective testing, they can be a useful means of assessing changes in performance, identifying areas for improvement in algorithm design, and augmenting subjective quality tests for voice coding/compression algorithms in noise-free, noisy, and/or non-English applications.

  16. 43 CFR 11.64 - Injury determination phase-testing and sampling methods.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...

  17. 43 CFR 11.64 - Injury determination phase-testing and sampling methods.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...

  18. 43 CFR 11.64 - Injury determination phase-testing and sampling methods.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...

  19. H-P adaptive methods for finite element analysis of aerothermal loads in high-speed flows

    NASA Technical Reports Server (NTRS)

    Chang, H. J.; Bass, J. M.; Tworzydlo, W.; Oden, J. T.

    1993-01-01

    The commitment to develop the National Aerospace Plane and Maneuvering Reentry Vehicles has generated resurgent interest in the technology required to design structures for hypersonic flight. The principal objective of this research and development effort has been to formulate and implement a new class of computational methodologies for accurately predicting fine-scale phenomena associated with this class of problems. The initial focus of this effort was to develop optimal h-refinement and p-enrichment adaptive finite element methods which utilize a posteriori estimates of the local errors to drive the adaptive methodology. Over the past year this work has specifically focused on two issues which are related to the overall performance of a flow solver. These issues include the formulation and implementation (in two dimensions) of an implicit/explicit flow solver compatible with the hp-adaptive methodology, and the design and implementation of a computational algorithm for automatically selecting optimal directions in which to enrich the mesh. These concepts and algorithms have been implemented in a two-dimensional finite element code and used to solve three hypersonic flow benchmark problems (Holden Mach 14.1, Edney shock-on-shock interaction Mach 8.03, and the viscous backstep Mach 4.08).

  20. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  1. Towards a generalized computational fluid dynamics technique for all Mach numbers

    NASA Technical Reports Server (NTRS)

    Walters, R. W.; Slack, D. C.; Godfrey, A. G.

    1993-01-01

    Currently there exists no single unified approach for efficiently and accurately solving computational fluid dynamics (CFD) problems across the Mach number regime, from truly low speed incompressible flows to hypersonic speeds. There are several CFD codes that have evolved into sophisticated prediction tools with a wide variety of features including multiblock capabilities, generalized chemistry and thermodynamics models among other features. However, as these codes evolve, the demand placed on the end user also increases simply because of the myriad of features that are incorporated into these codes. In order for a user to be able to solve a wide range of problems, several codes may be needed, requiring the user to be familiar with the intricacies of each code and their rather complicated input files. Moreover, the cost of training users and maintaining several codes becomes prohibitive. The objective of the current work is to extend the compressible, characteristic-based, thermochemical nonequilibrium Navier-Stokes code GASP to very low speed flows and simultaneously improve convergence at all speeds. Before this work began, the practical speed range of GASP was Mach numbers on the order of 0.1 and higher. In addition, a number of new techniques have been developed for more accurate physical and numerical modeling. The primary focus has been on the development of optimal preconditioning techniques for the Euler and the Navier-Stokes equations with general finite-rate chemistry models and both equilibrium and nonequilibrium thermodynamics models. We began with the work of Van Leer, Lee, and Roe for inviscid, one-dimensional perfect gases and extended their approach to include three-dimensional reacting flows. The basic steps required to accomplish this task were a transformation to stream-aligned coordinates, the formulation of the preconditioning matrix, incorporation into both explicit and implicit temporal integration schemes, and modification of the numerical flux formulae. In addition, we improved the convergence rate of the implicit time integration schemes in GASP through the use of inner iteration strategies and the use of the GMRES (Generalized Minimal Residual) method, which belongs to the class of algorithms referred to as Krylov subspace iteration. Finally, we significantly improved the practical utility of GASP through the addition of mesh sequencing, a technique in which computations begin on a coarse grid and get interpolated onto successively finer grids. The fluid dynamic problems of interest to the propulsion community involve complex flow physics spanning different velocity regimes and possibly involving chemical reactions. This class of problems results in widely disparate time scales causing numerical stiffness. Even in the absence of chemical reactions, eigenvalue stiffness manifests itself at transonic and very low speed flows, which can be quantified by the large condition number of the system and evidenced by slow convergence rates. This results in the need for thorough numerical analysis and subsequent implementation of sophisticated numerical techniques for these difficult yet practical problems. As a result of this work, we have been able to extend the range of applicability of compressible codes to very low speed inviscid flows (M = .001) and reacting flows.
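
    The role of preconditioning for stiff systems, as described above, is easy to demonstrate with a generic Krylov solver. The sketch below is not GASP: it builds a synthetic system whose diagonal spans several orders of magnitude (mimicking the disparate time scales and large condition number mentioned in the abstract), then compares SciPy's GMRES with and without a simple Jacobi preconditioner.

      # Generic illustration of preconditioned GMRES on a stiff system (not GASP).
      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import LinearOperator, gmres

      n = 200
      d = np.logspace(0, 6, n)                 # scales spanning six orders of magnitude
      A = diags([d, -0.1 * d[:-1], -0.1 * d[1:]], [0, 1, -1]).tocsr()
      b = np.ones(n)

      iters = {"plain": 0, "preconditioned": 0}

      def counter(key):
          def cb(_residual_norm):
              iters[key] += 1
          return cb

      # Unpreconditioned solve.
      x0, _ = gmres(A, b, maxiter=100, callback=counter("plain"),
                    callback_type="pr_norm")

      # Jacobi (diagonal) preconditioner: M approximates A^{-1}.
      M = LinearOperator((n, n), matvec=lambda v: v / d)
      x1, _ = gmres(A, b, M=M, maxiter=100, callback=counter("preconditioned"),
                    callback_type="pr_norm")

      print(iters)   # the preconditioned solve typically needs far fewer iterations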

  2. Association of low-frequency and rare coding-sequence variants with blood lipids and Coronary Heart Disease in 56,000 whites and blacks

    USDA-ARS?s Scientific Manuscript database

    Low-frequency coding DNA sequence variants in the proprotein convertase subtilisin/kexin type 9 gene (PCSK9) lower plasma low-density lipoprotein cholesterol (LDL-C), protect against risk of coronary heart disease (CHD), and have prompted the development of a new class of therapeutics. It is uncerta...

  3. Patterns of Revision in Online Writing: A Study of Wikipedia's Featured Articles

    ERIC Educational Resources Information Center

    Jones, John

    2008-01-01

    This study examines the revision histories of 10 Wikipedia articles nominated for the site's Featured Article Class (FAC), its highest quality rating, 5 of which achieved FAC and 5 of which did not. The revisions to each article were coded, and the coding results were combined with a descriptive analysis of two representative articles in order to…

  4. Read-Alouds in Kindergarten Classrooms: A Moment-by-Moment Approach to Analyzing Teacher-Child Interactions

    ERIC Educational Resources Information Center

    Mascareño, Mayra; Deunk, Marjolein I.; Snow, Catherine E.; Bosker, Roel J.

    2017-01-01

    The aim of the study was to explore teacher-child interaction in 24 whole-class read-aloud sessions in Chilean kindergarten classrooms serving children from low socioeconomic backgrounds. Fifteen sessions focused on story meaning, and nine focused on language coding/decoding. We coded teacher and child turns for their function (i.e., teacher…

  5. 78 FR 49512 - Clean Water Act Class II: Proposed Administrative Settlement, Penalty Assessment and Opportunity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-14

    ....gov Web site is an "anonymous access" system, which means the EPA will not know your identity or... request exemptions in accordance with Ala. Admin. Code r. 335-3-14- 01(1) and (5). Respondent operated... Title 129 of Neb. Admin. Code 17-001.01. Respondent operated an emergency generator at its facility...

  6. Insights into HLA-G Genetics Provided by Worldwide Haplotype Diversity

    PubMed Central

    Castelli, Erick C.; Ramalho, Jaqueline; Porto, Iane O. P.; Lima, Thálitta H. A.; Felício, Leandro P.; Sabbagh, Audrey; Donadi, Eduardo A.; Mendes-Junior, Celso T.

    2014-01-01

    Human leukocyte antigen G (HLA-G) belongs to the family of non-classical HLA class I genes, located within the major histocompatibility complex (MHC). HLA-G has been the target of most recent research regarding the function of class I non-classical genes. The main features that distinguish HLA-G from classical class I genes are (a) limited protein variability, (b) alternative splicing generating several membrane bound and soluble isoforms, (c) short cytoplasmic tail, (d) modulation of immune response (immune tolerance), and (e) restricted expression to certain tissues. In the present work, we describe the HLA-G gene structure and address the HLA-G variability and haplotype diversity among several populations around the world, considering each of its major segments [promoter, coding, and 3′ untranslated region (UTR)]. For this purpose, we developed a pipeline to reevaluate the 1000Genomes data and recover miscalled or missing genotypes and haplotypes. It became clear that the overall structure of the HLA-G molecule has been maintained during the evolutionary process and that most of the variation sites found in the HLA-G coding region are either coding synonymous or intronic mutations. In addition, only a few frequent and divergent extended haplotypes are found when the promoter, coding, and 3′UTRs are evaluated together. The divergence is particularly evident for the regulatory regions. The population comparisons confirmed that most of the HLA-G variability has originated before human dispersion from Africa and that the allele and haplotype frequencies have probably been shaped by strong selective pressures. PMID:25339953

  7. An object-oriented data reduction system in Fortran

    NASA Technical Reports Server (NTRS)

    Bailey, J.

    1992-01-01

    A data reduction system for the AAO two-degree field project is being developed using an object-oriented approach. Rather than use an object-oriented language (such as C++) the system is written in Fortran and makes extensive use of existing subroutine libraries provided by the UK Starlink project. Objects are created using the extensible N-dimensional Data Format (NDF) which itself is based on the Hierarchical Data System (HDS). The software consists of a class library, with each class corresponding to a Fortran subroutine with a standard calling sequence. The methods of the classes provide operations on NDF objects at a similar level of functionality to the applications of conventional data reduction systems. However, because they are provided as callable subroutines, they can be used as building blocks for more specialist applications. The class library is not dependent on a particular software environment, though it can be used effectively in ADAM applications. It can also be used from standalone Fortran programs. It is intended to develop a graphical user interface for use with the class library to form the 2dF data reduction system.

  8. Coarse-coded higher-order neural networks for PSRI object recognition. [position, scale, and rotation invariant

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Reid, Max B.

    1993-01-01

    A higher-order neural network (HONN) can be designed to be invariant to changes in scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Consequently, fewer training passes and a smaller training set are required to learn to distinguish between objects. The size of the input field is limited, however, because of the memory required for the large number of interconnections in a fully connected HONN. By coarse coding the input image, the input field size can be increased to allow the larger input scenes required for practical object recognition problems. We describe a coarse coding technique and present simulation results illustrating its usefulness and its limitations. Our simulations show that a third-order neural network can be trained to distinguish between two objects in a 4096 x 4096 pixel input field independent of transformations in translation, in-plane rotation, and scale in less than ten passes through the training set. Furthermore, we empirically determine the limits of the coarse coding technique in the object recognition domain.
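
    The coarse coding idea described above can be sketched generically: several coarse grids, each shifted by a different offset, jointly represent a large input field, so each grid needs far fewer cells than a fine-resolution representation. The toy version below (block-OR pooling over shifted copies of a binary image) is our own illustration, not the authors' exact HONN input encoding.

      # Toy coarse coding: several offset coarse grids jointly cover one image.
      import numpy as np

      def coarse_code(image, cell, offsets):
          """One coarse field per offset: OR-pool cell x cell blocks of the
          image shifted by that offset."""
          fields = []
          for dy, dx in offsets:
              shifted = np.roll(image, (dy, dx), axis=(0, 1))
              h, w = shifted.shape
              h2, w2 = h - h % cell, w - w % cell          # crop to whole blocks
              blocks = shifted[:h2, :w2].reshape(h2 // cell, cell, w2 // cell, cell)
              fields.append(blocks.max(axis=(1, 3)))       # logical OR over each block
          return fields

      img = np.zeros((64, 64), dtype=np.uint8)
      img[20:28, 30:38] = 1                                # a small "object"
      fields = coarse_code(img, cell=8, offsets=[(0, 0), (3, 3), (5, 1)])
      print([f.shape for f in fields])                     # three 8x8 coarse fields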

  9. Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding

    PubMed Central

    Li, Xin; Guo, Rui; Chen, Chao

    2014-01-01

    Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose to explore a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches toward tracking and recognition under the same framework, so that they can benefit from each other in a closed loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamical updating of template/dictionary and combining multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach. PMID:24961216

  10. The chloroplast tRNALys(UUU) gene from mustard (Sinapis alba) contains a class II intron potentially coding for a maturase-related polypeptide.

    PubMed

    Neuhaus, H; Link, G

    1987-01-01

    The trnK gene encoding the tRNALys(UUU) has been located on mustard (Sinapis alba) chloroplast DNA, 263 bp upstream of the psbA gene on the same strand. The nucleotide sequence of the trnK gene and its flanking regions, as well as the putative transcription start and termination sites, are shown. The 5' end of the transcript lies 121 bp upstream of the 5' tRNA coding region and is preceded by procaryotic-type "-10" and "-35" sequence elements, while the 3' end maps 2.77 kb downstream, to a DNA region with a possible stem-loop secondary structure. The anticodon loop of the tRNALys is interrupted by a 2,574 bp intron containing a long open reading frame, which codes for 524 amino acids. Based on conserved stem and loop structures, this intron has the characteristic features of a class II intron. A region near the carboxyl terminus of the derived polypeptide appears structurally related to maturases.

  11. Wavelet-based scalable L-infinity-oriented compression.

    PubMed

    Alecu, Alin; Munteanu, Adrian; Cornelis, Jan P H; Schelkens, Peter

    2006-09-01

    Among the different classes of coding techniques proposed in the literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L(infinity)-oriented compression or, at most, provide a very limited number of potential L(infinity) bit-stream truncation points. We propose a new multidimensional wavelet-based L(infinity)-constrained scalable coding framework that generates a fully embedded L(infinity)-oriented bit stream and that retains the coding performance and all the scalability options of state-of-the-art L2-oriented wavelet codecs. Moreover, our codec instantiation of the proposed framework clearly outperforms JPEG2000 in the L(infinity) coding sense.

  12. An object-based visual attention model for robotic applications.

    PubMed

    Yu, Yuanlong; Mann, George K I; Gosine, Raymond G

    2010-10-01

    By extending the integrated competition hypothesis, this paper presents an object-based visual attention model, which selects one object of interest using low-dimensional features, so that visual perception starts with a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between top-down and bottom-up ways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically when one object is attended. A dual-coding object representation consisting of local and global codings is proposed. Intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. The mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, proto-object-based saliency is evaluated. The most salient proto-object is selected for attention, and it is finally put into the perceptual completion processing module to yield a complete object region. This model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions are shown to validate this model.

  13. Imitation Learning Errors Are Affected by Visual Cues in Both Performance and Observation Phases.

    PubMed

    Mizuguchi, Takashi; Sugimura, Ryoko; Shimada, Hideaki; Hasegawa, Takehiro

    2017-08-01

    Mechanisms of action imitation were examined. Previous studies have suggested that success or failure of imitation is determined at the point of observing an action; in other words, cognitive processing after observation is not related to the success of imitation. Twenty university students participated in each of three experiments in which they observed a series of object manipulations consisting of four elements (hands, tools, object, and end points) and then imitated the manipulations. In Experiment 1, a specific initially observed element was color coded, and the specific manipulated object at the imitation stage was identically color coded; participants accurately imitated the color-coded element. In Experiment 2, a specific element was color coded at the observation but not at the imitation stage, and there were no effects of color coding on imitation. In Experiment 3, participants were verbally instructed to attend to a specific element at the imitation stage, but the verbal instructions had no effect. Thus, the success of imitation may not be determined at the stage of observing an action, and color coding can provide a clue for imitation at the imitation stage.

  14. Association of obesity with healthcare resource utilization and costs in a commercial population.

    PubMed

    Kamble, Pravin S; Hayden, Jennifer; Collins, Jenna; Harvey, Raymond A; Suehs, Brandon; Renda, Andrew; Hammer, Mette; Huang, Joanna; Bouchard, Jonathan

    2018-05-10

    To examine the association of obesity with healthcare resource utilization (HRU) and costs among commercially insured individuals. This retrospective observational cohort study used administrative claims from 1 January 2007 to 1 December 2013. ICD-9-CM status codes (V85 hierarchy) from 2008 to 2012 classified body mass index (BMI) into the World Health Organization's BMI categories. The date of the first observed BMI code was defined as the index date, and continuous eligibility for one year pre- and post-index date was ensured. Post-index claims determined individuals' HRU and costs. Sampling weights developed using the entropy balance method and National Health and Nutrition Examination Survey data ensured representation of the US adult commercially insured population. Baseline characteristics were described across BMI classes, and associations between BMI categories and outcomes were examined using multivariable regression. The cohort included 9651 individuals with BMI V85 codes. After weighting, the BMI distribution was: normal (31.1%), overweight (33.4%), obese class I (22.0%), obese class II (8.1%) and obese class III (5.4%). Increasing BMI was associated with greater prevalence of cardiometabolic conditions, including hypertension, type 2 diabetes and metabolic syndrome. The use of antihypertensives, antihyperlipidemics, antidiabetics, analgesics and antidepressants rose with increasing BMI. Greater BMI level was associated with increased inpatient, emergency department and outpatient utilization, and higher total healthcare, medical and pharmacy costs. Increasing BMI was associated with higher prevalence of cardiometabolic conditions and higher HRU and costs. There is an urgent need to address the epidemic of obesity and its clinical and economic impacts.
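
    For reference, the WHO BMI categories used in the study map to standard cutoffs, encoded directly in the sketch below. Note the study itself inferred the category from ICD-9-CM V85 status codes rather than from a raw BMI value, and the underweight branch is included here only for completeness (it does not appear in the reported distribution).

      # Standard WHO BMI categories (a sketch; the study used V85 codes, not raw BMI).
      def who_bmi_class(bmi: float) -> str:
          if bmi < 18.5:
              return "underweight"     # not listed in the study's distribution
          if bmi < 25:
              return "normal"
          if bmi < 30:
              return "overweight"
          if bmi < 35:
              return "obese class I"
          if bmi < 40:
              return "obese class II"
          return "obese class III"

      print(who_bmi_class(27.4))       # overweight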

  15. A dynamic code for economic object valuation in prefrontal cortex neurons

    PubMed Central

    Tsutsui, Ken-Ichiro; Grabenhorst, Fabian; Kobayashi, Shunsuke; Schultz, Wolfram

    2016-01-01

    Neuronal reward valuations provide the physiological basis for economic behaviour. Yet, how such valuations are converted to economic decisions remains unclear. Here we show that the dorsolateral prefrontal cortex (DLPFC) implements a flexible value code based on object-specific valuations by single neurons. As monkeys perform a reward-based foraging task, individual DLPFC neurons signal the value of specific choice objects derived from recent experience. These neuronal object values satisfy principles of competitive choice mechanisms, track performance fluctuations and follow predictions of a classical behavioural model (Herrnstein’s matching law). Individual neurons dynamically encode both the updating of object values from recently experienced rewards and their subsequent conversion to object choices during decision-making. Decoding from unselected populations enables a read-out of motivational and decision variables not emphasized by individual neurons. These findings suggest a dynamic single-neuron and population value code in DLPFC that advances from reward experiences to economic object values and future choices. PMID:27618960

  16. Cosmological signatures of ultralight dark matter with an axionlike potential

    NASA Astrophysics Data System (ADS)

    Cedeño, Francisco X. Linares; González-Morales, Alma X.; Ureña-López, L. Arturo

    2017-09-01

    Nonlinearities in a realistic axion field potential may play an important role in the cosmological dynamics. In this paper we use the Boltzmann code CLASS to solve the background and linear perturbation evolution of an axion field, and contrast our results with those of CDM and the free axion case. We conclude that there is a slight delay in the onset of the axion field oscillations when nonlinearities in the axion potential are taken into account. Besides, we identify a tachyonic instability of linear modes resulting in the presence of a bump in the power spectrum at small scales. We also comment on the true source of the tachyonic instability, on how the parameters of the axionlike potential can be constrained by Ly-α observations, and on the consequences for the stability of self-gravitating objects made of axions.

  17. On codes with multi-level error-correction capabilities

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1987-01-01

    In conventional coding for error control, all the information symbols of a message are regarded as equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. However, on some occasions, some information symbols in a message are more significant than the others. As a result, it is desirable to devise codes with multi-level error-correcting capabilities. Another situation where codes with multi-level error-correcting capabilities are desired is in broadcast communication systems. An m-user broadcast channel has one input and m outputs. The single input and each output form a component channel. The component channels may have different noise levels, and hence the messages transmitted over the component channels require different levels of protection against errors. Block codes with multi-level error-correcting capabilities are also known as unequal error protection (UEP) codes. Structural properties of these codes are derived. Based on these structural properties, two classes of UEP codes are constructed.

  18. FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical communication systems.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2016-09-05

    In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes, together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with an overhead from 25% to 42.9%, provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding combined with higher-order modulations has been demonstrated, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, which covers a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which results in an additional 0.5 dB gain compared to conventional LDPC-coded modulation with the same code rate of the corresponding LDPC code.
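
    The rate adaptation by shortening mentioned above follows simple bookkeeping: removing s information bits from an (n, k) mother code leaves the n - k parity bits untouched, so the rate falls and the overhead (parity bits over remaining information bits) grows. The sketch below uses a hypothetical (n, k), not the paper's actual LDPC design; with these numbers, s = 6667 happens to reproduce the 42.9% endpoint of the quoted overhead range.

      # Rate/overhead bookkeeping under shortening; (n, k) here is hypothetical.
      def shortened(n, k, s):
          """Shortening s information bits: (n, k) -> (n - s, k - s)."""
          n2, k2 = n - s, k - s
          return k2 / n2, (n2 - k2) / k2          # (rate, overhead)

      n, k = 20000, 16000                         # rate 0.8 <=> 25% overhead
      for s in (0, 2000, 4000, 6667):
          rate, oh = shortened(n, k, s)
          print(f"s={s:5d}  rate={rate:.3f}  overhead={100 * oh:.1f}%")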

  19. Testing the Formation Mechanism of Sub-Stellar Objects in Lupus (A SOLA Team Study)

    NASA Astrophysics Data System (ADS)

    De Gregorio-Monsalvo, Itziar; Lopez, C.; Takahashi, S.; Santamaria-Miranda

    2017-06-01

    The international SOLA team (Soul of Lupus with ALMA) has identified a set of pre- and proto-stellar candidates of substellar nature in Lupus 1 and 3, using 1.1 mm ASTE/AzTEC maps and our optical-to-submillimeter database. We have observed with ALMA the most promising pre- and proto-brown dwarf candidates. Our aim is to provide insights into how substellar objects form and evolve, from the equivalent of pre-stellar cores to the Class II stage, in the low-mass regime of star formation. Our sample comprises 33 pre-stellar objects, 7 Class 0 and I objects, and 22 Class II objects.

  20. Meanline Analysis of Turbines with Choked Flow in the Object-Oriented Turbomachinery Analysis Code

    NASA Technical Reports Server (NTRS)

    Hendricks, Eric S.

    2016-01-01

    The Object-Oriented Turbomachinery Analysis Code (OTAC) is a new meanline/streamline turbomachinery modeling tool being developed at NASA GRC. During the development process, a limitation of the code was discovered in relation to the analysis of choked flow in axial turbines. This paper describes the relevant physics for choked flow as well as the changes made to OTAC to enable analysis in this flow regime.
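
    As context for the choked-flow physics mentioned above: once the flow at a turbine throat reaches Mach 1, the mass flow rate saturates at the standard isentropic value. The sketch below evaluates that textbook relation; it is not OTAC's implementation, and the input numbers are purely illustrative.

      # Textbook isentropic choked-flow relation (not OTAC's implementation):
      # m_dot = A* p0 sqrt(gamma/(R T0)) (2/(gamma+1))^((gamma+1)/(2(gamma-1)))
      from math import sqrt

      def choked_mass_flow(A_star, p0, T0, gamma=1.4, R=287.0):
          """Maximum (choked) mass flow through a throat of area A_star [m^2]
          for stagnation pressure p0 [Pa] and temperature T0 [K]."""
          return (A_star * p0 * sqrt(gamma / (R * T0))
                  * (2 / (gamma + 1)) ** ((gamma + 1) / (2 * (gamma - 1))))

      print(choked_mass_flow(A_star=0.01, p0=500e3, T0=1200.0))  # kg/s, ~5.8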

  1. GCS component development cycle

    NASA Astrophysics Data System (ADS)

    Rodríguez, Jose A.; Macias, Rosa; Molgo, Jordi; Guerra, Dailos; Pi, Marti

    2012-09-01

    The GTC is an optical-infrared 10-meter segmented-mirror telescope at the ORM observatory in the Canary Islands (Spain). First light was on 13 July 2007, and the telescope has been in its operation phase since then. The GTC control system (GCS) is a distributed object- and component-oriented system based on RT-CORBA, and it is responsible for the management and operation of the telescope, including its instrumentation. GCS has used the Rational Unified Process (RUP) in its development. RUP is an iterative software development process framework. After analysing (use cases) and designing (UML) any of the GCS subsystems, an initial component description of its interface is obtained, and from that information a component specification is written. In order to improve code productivity, GCS has adopted code generation to transform this component specification into the skeleton of component classes based on a software framework called the Device Component Framework. Using the GCS development tools, based on javadoc and gcc, the component is generated, compiled and deployed in a single step, ready to be tested for the first time through our GUI inspector. The main advantages of this approach are the following: it reduces the learning curve of new developers and the development error rate, allows a systematic use of design patterns in the development and software reuse, speeds up the delivery of the software product, greatly improves design consistency and design quality, and eliminates the future refactoring process that would otherwise be required for the code.

  2. Location-coding account versus affordance-activation account in handle-to-hand correspondence effects: Evidence of Simon-like effects based on the coding of action direction.

    PubMed

    Pellicano, Antonello; Koch, Iring; Binkofski, Ferdinand

    2017-09-01

    An increasing number of studies have shown a close link between perception and action, which is supposed to be responsible for the automatic activation of actions compatible with objects' properties, such as the orientation of their graspable parts. It has been observed that left- and right-hand responses to objects (e.g., cups) are faster and more accurate if the handle orientation corresponds to the response location than when it does not. Two alternative explanations have been proposed for this handle-to-hand correspondence effect: location coding and affordance activation. The aim of the present study was to provide disambiguating evidence on the origin of this effect by employing object sets for which the visually salient portion was separated from, and opposite to, the graspable one, and vice versa. Seven experiments were conducted employing both single objects and object pairs as visual stimuli to enhance the contextual information about objects' graspability and usability. Notwithstanding these manipulations intended to favor affordance activation, results fully supported the location-coding account, displaying significant Simon-like effects that involved the orientation of the visually salient portion of the object stimulus and the location of the response. Crucially, we provided evidence of Simon-like effects based on higher-level cognitive, iconic representations of action directions rather than on lower-level spatial coding of the pure position of protruding portions of the visual stimuli. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Virtual Doors to Brick and Mortar Learning

    NASA Astrophysics Data System (ADS)

    Shaw, M. S.; Gay, P. L.; Meyer, D. T.; Zamfirescu, J. D.; Smith, J. E.; MIT Educational Studies Program Team

    2005-12-01

    The MIT Educational Studies Program (ESP) has spent the past year developing an online gateway for outreach programs. The website has a five-fold purpose: to introduce the organization to potential students, teachers, volunteers and collaborators; to allow teachers to create, design and interact with classes; to allow students to register for and dialogue with these classes; to provide an online forum for continuing dialogue; and to provide organizers a wiki for documenting program administration. What makes our site unique is the free and flexible nature of our easily edited and expanded code. In its standard installation, teachers set up classes, and administrators can approve/edit classes and make classes visible in an online catalogue. Student registration is completely customizable - students can register for self-selected classes, or they can register for a program and later get placed into teacher-selected classes. Free wiki software allows users to interactively create and edit documentation and knowledgebases. This allows administrators to track online what has been done while at the same time creating instant documentation for future programs. The online forum is a place where students can go after our programs end to learn more, interact with their classmates, and continue dialogues started in our classrooms. We also use the forum to get feedback on past and future programs. The ease with which the software handles program creation, registration, communications and more allows programs for roughly 3000 students per year to be handled by about 20 volunteering undergraduates. By combining all these elements - promotion, class creation, program registration, an organizational wiki, and student forums - we create a one-stop virtual entryway into face-to-face learning that allows students to continue their experience after they leave the classroom. The code for this site is available for free upon request to other organizations.

  4. Carbapenemases in Enterobacteriaceae: types and molecular epidemiology.

    PubMed

    Martínez-Martínez, Luis; González-López, Juan José

    2014-12-01

    The most important mechanism of carbapenem resistance in Enterobacteriaceae is the production of carbapenemases, although resistance can also result from the synergistic activity between AmpC-type or (to a lesser extent) extended-spectrum beta-lactamases combined with decreased outer membrane permeability. Three major molecular classes of carbapenemases are recognized: A, B and D. Classes A and D are serine-beta-lactamases, whereas class B are metallo-beta-lactamases (their hydrolytic activity depends on the presence of zinc). In addition to carbapenems, carbapenemases also hydrolyze other beta-lactams, but the concrete substrate profile depends on the enzyme type. In general terms, class A enzymes are to some extent inhibited by clavulanic acid, and class B enzymes do not affect monobactams and are inhibited by zinc chelators. Given that Enterobacteriaceae producing carbapenemases usually also contain genes coding for other mechanisms of resistance to beta-lactams, it is not unusual for these organisms to present complex beta-lactam resistance phenotypes. Additionally, these organisms frequently contain other genes that confer resistance to quinolones, aminoglycosides, tetracyclines, sulphonamides and other families of antimicrobial agents, which causes multiresistance or even panresistance. Currently, the most important type of class A carbapenemases are the KPC enzymes, whereas VIM, IMP and (particularly) NDM in class B and OXA-48 (and related enzymes) in class D are the more relevant ones. Whereas some enzymes are encoded by chromosomal genes, most carbapenemases are plasmid-mediated (with genes frequently located in integrons), which favors the dissemination of the enzymes. Detailed information on the genetic platforms and the context of the genes coding for the most relevant enzymes will be presented in this review. Copyright © 2014 Elsevier España, S.L.U. All rights reserved.

  5. Multi-functional acetyl-CoA carboxylase from Brassica napus is encoded by a multi-gene family: indication for plastidic localization of at least one isoform.

    PubMed

    Schulte, W; Töpfer, R; Stracke, R; Schell, J; Martini, N

    1997-04-01

    Three genes coding for different multifunctional acetyl-CoA carboxylase (ACCase; EC 6.4.1.2) isoenzymes from Brassica napus were isolated and divided into two major classes according to structural features in their 5' regions: class I comprises two genes with an additional coding exon of approximately 300 bp at the 5' end, and class II is represented by one gene carrying an intron of 586 bp in its 5' untranslated region. Fusion of the peptide sequence encoded by the additional first exon of a class I ACCase gene to the jellyfish Aequorea victoria green fluorescent protein (GFP) and transient expression in tobacco protoplasts targeted GFP to the chloroplasts. In contrast to the deduced primary structure of the biotin carboxylase domain encoded by the class I gene, the corresponding amino acid sequence of the class II ACCase shows higher identity with that of the Arabidopsis ACCase, both lacking a transit peptide. The Arabidopsis ACCase has been proposed to be a cytosolic isoenzyme. These observations indicate that the two classes of ACCase genes encode plastidic and cytosolic isoforms of multi-functional, eukaryotic type, respectively, and that B. napus contains at least one multi-functional ACCase besides the multi-subunit, prokaryotic type located in plastids. Southern blot analysis of genomic DNA from B. napus, Brassica rapa, and Brassica oleracea, the ancestors of amphidiploid rapeseed, using a fragment of a multi-functional ACCase gene as a probe revealed that ACCase is encoded by a multi-gene family of at least five members.

  6. Using an audience response system (ARS) in a face-to-face and distance education CPT/HCPCS coding course.

    PubMed

    Harris, Susie T; Zeng, Xiaoming

    2010-01-01

    We report the use of an audience response system (ARS) in an undergraduate health information management course. The ARS converts a standard PowerPoint presentation into an interactive learning system that engages students in active participation, and it allows instructors to display questions, surveys, opinion polls, and games. We used the ARS in a 2008 course, Health Services Coding, for lecture and test reviews. The class consisted of 15 students; nine were on-campus students and six were distance education students. All of the responding students agreed that the ARS software facilitated learning and made lectures and reviews more interesting and interactive, and they preferred taking a class using an ARS.

  7. EVALUATION OF ULTRASONIC PHASED-ARRAY FOR DETECTION OF PLANAR FLAWS IN HIGH-DENSITY POLYETHYLENE (HDPE) BUTT-FUSION JOINTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prowant, Matthew S.; Denslow, Kayte M.; Moran, Traci L.

    2016-09-21

    The desire to use high-density polyethylene (HDPE) piping in buried Class 3 service and cooling water systems in nuclear power plants is primarily motivated by the material’s high resistance to corrosion relative to that of steel and metal alloys. The rules for construction of Class 3 HDPE pressure piping systems were originally published in Code Case N-755 and were recently incorporated into the American Society of Mechanical Engineers Boiler and Pressure Vessel Code (ASME BPVC) Section III as Mandatory Appendix XXVI (2015 Edition). The requirements for HDPE examination are guided by criteria developed for metal pipe and are based on industry-led HDPE research or conservative calculations.

  8. Is a Genome a Codeword of an Error-Correcting Code?

    PubMed Central

    Kleinschmidt, João H.; Silva-Filho, Márcio C.; Bim, Edson; Herai, Roberto H.; Yamagishi, Michel E. B.; Palazzo, Reginaldo

    2012-01-01

    Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction. PMID:22649495
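
    Membership in a Hamming code, the test at the heart of the approach above, reduces to a parity check: a word x belongs to the code if and only if H x = 0 (mod 2), where H is the parity-check matrix. The sketch below uses the standard binary Hamming(7,4) matrix as a stand-in; the paper's actual mapping from nucleotides to code symbols is not reproduced here.

      # Codeword test for Hamming(7,4): x is a codeword iff H @ x = 0 (mod 2).
      import numpy as np

      # Columns of H are the binary representations of 1..7.
      H = np.array([[1, 0, 1, 0, 1, 0, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])

      def is_codeword(x):
          """True iff the 7-bit word x has an all-zero syndrome."""
          return not np.any(H @ x % 2)

      print(is_codeword(np.array([1, 1, 1, 0, 0, 0, 0])))  # True: valid codeword
      print(is_codeword(np.array([1, 1, 1, 0, 0, 0, 1])))  # False: single-bit error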

  9. English Vocabulary Instruction in Six Early Childhood Classrooms in Hong Kong

    ERIC Educational Resources Information Center

    Lau, Carrie; Rao, Nirmala

    2013-01-01

    Vocabulary instruction during English language learning was observed for one week in six classrooms (three K2 classes for four-year-olds and three K3 classes for five-year-olds) from three kindergartens in two districts of Hong Kong. From 23 sessions of observations and 535 minutes of data, field notes were coded to identify instances of…

  10. 77 FR 21510 - Proposed Revocation of Class D Airspace; Andalusia, AL and Proposed Amendment of Class E Airspace...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-10

    ... Andalusia, AL, as the Air Traffic Control Tower at South Alabama Regional Airport at Bill Benton Field has... Alabama Regional Airport at Bill Benton Field. This action also would update the geographic coordinates of... 1, Code of Federal Regulations, part 51, subject to the annual revision of FAA, Order 7400.9 and...

  11. Effect of separate sampling on classification accuracy.

    PubMed

    Shahrokh Esfahani, Mohammad; Dougherty, Edward R

    2014-01-15

    Measurements are commonly taken from two phenotypes to build a classifier, where the number of data points from each class is predetermined, not random. In this 'separate sampling' scenario, the data cannot be used to estimate the class prior probabilities. Moreover, predetermined class sizes can severely degrade classifier performance, even for large samples. We employ simulations using both synthetic and real data to show the detrimental effect of separate sampling on a variety of classification rules. We establish propositions related to the effect on the expected classifier error of a sampling ratio different from the population class ratio. From these we derive a sample-based minimax sampling ratio and provide an algorithm for approximating it from the data. We also extend to arbitrary distributions the classical population-based Anderson linear discriminant analysis minimax sampling ratio derived from the discriminant form of the Bayes classifier. All the codes for synthetic data and real data examples are written in MATLAB. A function called mmratio, whose output is an approximation of the minimax sampling ratio of a given dataset, is also written in MATLAB. All the codes are available at: http://gsp.tamu.edu/Publications/supplementary/shahrokh13b.
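
    The separate-sampling pitfall described above can be seen in a few lines: when class sizes are fixed by design, the apparent prior (e.g., 50/50) feeds a plug-in classifier a decision threshold tuned to the wrong population. The simulation below is our own Python toy, not the paper's MATLAB mmratio code; it compares an equal-variance Gaussian plug-in LDA threshold built from the design ratio against one built from the true population prior.

      # Toy demonstration of the separate-sampling effect on a plug-in classifier.
      import numpy as np

      rng = np.random.default_rng(0)
      true_prior = 0.9                          # population P(class 0)
      mu0, mu1, sigma = 0.0, 2.0, 1.0

      def error_rate(threshold, n=200_000):
          """Monte Carlo error of 'predict class 1 if x > threshold'."""
          y = rng.random(n) >= true_prior       # True = (rare) class 1
          x = rng.normal(np.where(y, mu1, mu0), sigma)
          return np.mean((x > threshold) != y)

      def lda_threshold(p0):
          """Equal-variance Gaussian plug-in threshold for prior p0 = P(class 0)."""
          return (mu0 + mu1) / 2 + sigma**2 * np.log(p0 / (1 - p0)) / (mu1 - mu0)

      print("50/50 separate sampling:", error_rate(lda_threshold(0.5)))       # ~0.16
      print("true population prior:  ", error_rate(lda_threshold(true_prior)))  # ~0.07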

  12. Conserved pattern of embryonic actin gene expression in several sea urchins and a sand dollar.

    PubMed

    Bushman, F D; Crain, W R

    1983-08-01

    An examination of the size and relative abundance of actin-coding RNA in embryos of four sea urchins (Strongylocentrotus purpuratus, Strongylocentrotus droebachiensis, Arbacia punctulata, Lytechinus variegatus) and one sand dollar (Echinarachnius parma) reveals a generally conserved program of expression. In each species the relative abundance of these sequences is low in early embryos and begins to rise during late cleavage or blastula stages. In the four sea urchins, actin-coding RNAs increase between approximately 9- and 35-fold by pluteus or an earlier stage, and in the sand dollar about 5.5-fold by blastula. A major actin-coding RNA class of 2.0-2.2 kilobases (kb) is found in each species. A smaller actin-coding RNA class, which accumulates during embryogenesis, is also present in S. purpuratus (1.8 kb), S. droebachiensis (1.9 kb), and A. punctulata (1.6 kb), but apparently absent in L. variegatus and E. parma. In S. droebachiensis, actin-coding RNA is relatively abundant in unfertilized eggs and drops sharply by the 16-cell stage. This is in contrast to the other sea urchins where the actin message content is relatively low in eggs and does not change substantially in the embryos throughout early cleavage. The observations in this study suggest that the pattern of embryonic expression of at least some members of this gene family is ancient and conserved.

  13. Studying Absenteeism in Principles of Macroeconomics: Do Attendance Policies Make a Difference?

    ERIC Educational Resources Information Center

    Self, Sharmistha

    2012-01-01

    The primary objective of this article is to see if and how attendance policy influences class attendance in undergraduate-level principles of macroeconomics classes. The second objective, which is related to the first, is to examine whether the nature of the attendance policy matters in terms of its impact on class attendance behavior. The results…

  14. Actualizing the Environment: A Study of First-Year Composition Student MOO Activity.

    ERIC Educational Resources Information Center

    English, Joel A.

    This paper describes the use of technology in a first year college writing class. The class utilizes a multi-user object-oriented domain (MOO) which allows participants to talk, perform actions, thoughts, and emotions, manipulate objects and furniture, and altogether control the online environment. The class holds discussions on the computer in…

  15. On the optimality of code options for a universal noiseless coder

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner

    1991-01-01

    A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable-length coding algorithms. Custom VLSI coder and decoder modules capable of processing over 20 million samples per second are currently under development. The first of the code options used in this module development is shown to be equivalent to a class of Huffman codes under the Humblet condition; other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set, at specified symbol entropy values. Simulation results are obtained on actual aerial imagery, and they confirm the optimality of the scheme. On sources having Gaussian or Poisson distributions, coder performance is also projected through analysis and simulation.
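
    The option-selection mechanism described above can be illustrated with the Golomb-Rice family, in whose spirit this class of universal coders operates: for each block of samples, the encoder simply picks the option (parameter k) whose total code length is smallest. The sketch below is a toy cost computation of our own, not the VLSI module; the block contents stand in for mapped prediction residuals.

      # Toy adaptive option selection over Golomb-Rice parameters k.
      def rice_length(sample, k):
          """Bits for one sample: unary quotient + 1 stop bit + k remainder bits."""
          return (sample >> k) + 1 + k

      def best_option(block, k_max=8):
          """Pick the k minimizing the total encoded length of the block."""
          costs = {k: sum(rice_length(s, k) for s in block) for k in range(k_max + 1)}
          k = min(costs, key=costs.get)
          return k, costs[k]

      block = [0, 3, 1, 7, 2, 0, 5, 1]          # made-up nonnegative residuals
      k, bits = best_option(block)
      print(f"option k={k}, {bits} bits for {len(block)} samples")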

  16. Spectroscopic and physical parameters of Galactic O-type stars. III. Mass discrepancy and rotational mixing

    NASA Astrophysics Data System (ADS)

    Markova, N.; Puls, J.; Langer, N.

    2018-05-01

    Context. Massive stars play a key role in the evolution of galaxies and our Universe. Aims: Our goal is to compare observed and predicted properties of single Galactic O stars to identify and constrain uncertain physical parameters and processes in stellar evolution and atmosphere models. Methods: We used a sample of 53 objects of all luminosity classes and with spectral types from O3 to O9.7. For 30 of these, we determined the main photospheric and wind parameters, including projected rotational rates accounting for macroturbulence, and He and N surface abundances, using optical spectroscopy and applying the model atmosphere code FASTWIND. For the remaining objects, similar data from the literature, based on analyses by means of the CMFGEN code, were used instead. The properties of our sample were then compared to published predictions based on two grids of single massive star evolution models that include rotationally induced mixing. Results: Both of the considered model grids face problems in simultaneously reproducing the stellar masses, equatorial gravities, surface abundances, and rotation rates of our sample stars. The spectroscopic masses derived for objects below 30 M⊙ tend to be smaller than the evolutionary ones, no matter which of the two grids is used as a reference. While this result may indicate the need to improve the model atmosphere calculations (e.g. regarding the treatment of turbulent pressure), our analysis shows that the established mass problem cannot be fully explained in terms of inaccurate parameters obtained by quantitative spectroscopy or inadequate model values of Vrot on the zero-age main sequence. Within each luminosity class, we find a close correlation of N surface abundance and luminosity, and a stronger N enrichment in more massive and evolved O stars. Additionally, we also find a correlation of the surface nitrogen and helium abundances. The large number of nitrogen-enriched stars above 30 M⊙ argues for rotationally induced mixing as the most likely explanation. However, none of the considered models can match the observed trends correctly, especially in the high-mass regime. Conclusions: We confirm the mass discrepancy for objects in the low-mass O-star regime. We conclude that the rotationally induced mixing of helium to the stellar surface is too strong in some of the models. We also suggest that present inadequacies of the models in representing the N enrichment of more massive stars with relatively slow rotation might be related (among other issues) to problematic efficiencies of rotational mixing. We are left with a picture in which invoking binarity and magnetic fields is required to achieve a more complete agreement of the observed surface properties of a population of massive main-sequence stars with corresponding evolutionary models.

  17. Tourette Association Chapters

    MedlinePlus

    A state-by-state directory of Tourette Association chapters (Alaska Chapter, Arizona Chapter, …).

  18. Object-oriented code SUR for plasma kinetic simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levchenko, V.D.; Sigov, Y.S.

    1995-12-31

    We have developed a self-consistent simulation code based on an object-oriented model of plasma (OOMP) for solving the Vlasov/Poisson (V/P), Vlasov/Maxwell (V/M), Bhatnagar-Gross-Krook (BGK), and Fokker-Planck (FP) kinetic equations. Applying an object-oriented approach (OOA) to the simulation of plasmas and plasma-like media by means of splitting methods makes it possible to describe and solve, in a uniform way, a wide range of plasma kinetics problems, including very complicated ones: multi-dimensional, relativistic, collisional, with specific boundary conditions, etc. This paper gives a brief description of the capabilities of the SUR code as a concrete realization of OOMP.
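
    The OOMP idea, a single driver written against a common interface with each kinetic equation supplied as an interchangeable object, can be sketched as follows. Class and method names are illustrative, not taken from SUR; only the BGK relaxation is given a real update rule:

        import numpy as np
        from abc import ABC, abstractmethod

        class KineticEquation(ABC):
            # Common interface: every equation advances a distribution function.
            @abstractmethod
            def advance(self, f, dt): ...

        class BGK(KineticEquation):
            def __init__(self, nu, f_eq):
                self.nu, self.f_eq = nu, f_eq
            def advance(self, f, dt):
                # Exact update of df/dt = -nu (f - f_eq) over one step.
                return self.f_eq + (f - self.f_eq) * np.exp(-self.nu * dt)

        class PlasmaSimulation:
            # The driver never names a specific equation: swapping
            # Vlasov/Poisson for BGK or Fokker-Planck changes one argument.
            def __init__(self, equation, f0):
                self.equation, self.f = equation, f0
            def run(self, dt, steps):
                for _ in range(steps):
                    self.f = self.equation.advance(self.f, dt)
                return self.f

        f0 = np.array([0.2, 0.5, 0.3])
        f_eq = np.array([0.3, 0.4, 0.3])
        print(PlasmaSimulation(BGK(nu=1.0, f_eq=f_eq), f0).run(dt=0.1, steps=50))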

  19. Coding of visual object features and feature conjunctions in the human brain.

    PubMed

    Martinovic, Jasna; Gruber, Thomas; Müller, Matthias M

    2008-01-01

    Object recognition is achieved through neural mechanisms reliant on the activity of distributed coordinated neural assemblies. In the initial steps of this process, an object's features are thought to be coded very rapidly in distinct neural assemblies. These features play different functional roles in the recognition process: while colour facilitates recognition, additional contours and edges delay it. Here, we selectively varied the amount and role of object features in an entry-level categorization paradigm and related them to the electrical activity of the human brain. We found that early synchronizations (approx. 100 ms) increased quantitatively when more image features had to be coded, without reflecting their qualitative contribution to the recognition process. Later activity (approx. 200-400 ms) was modulated by the representational role of object features. These findings demonstrate that although early synchronizations may be sufficient for relatively crude discrimination of objects in visual scenes, they cannot support entry-level categorization. This was subserved by later processes of object model selection, which utilized the representational value of object features such as colour or edges to select the appropriate model and achieve identification.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gougar, Hans

    This document outlines the development of a high fidelity, best estimate nuclear power plant severe transient simulation capability that will complement or enhance the integral system codes historically used for licensing and analysis of severe accidents. As with other tools in the Risk Informed Safety Margin Characterization (RISMC) Toolkit, the ultimate user of the Enhanced Severe Transient Analysis and Prevention (ESTAP) capability is the plant decision-maker; the deliverable to that customer is a modern, simulation-based safety analysis capability, applicable to a much broader class of safety issues than is traditional Light Water Reactor (LWR) licensing analysis. Currently, the RISMC pathway’s major emphasis is placed on developing RELAP-7, a next-generation safety analysis code, and on showing how to use RELAP-7 to analyze margin from a modern point of view: that is, by characterizing margin in terms of the probabilistic spectra of the “loads” applied to systems, structures, and components (SSCs), and the “capacity” of those SSCs to resist those loads without failing. The first objective of the ESTAP task, and the focus of one task of this effort, is to augment RELAP-7 analyses with user-selected multi-dimensional, multi-phase models of specific plant components to simulate complex phenomena that may lead to, or exacerbate, severe transients and core damage. Such phenomena include: coolant crossflow between PWR assemblies during a severe reactivity transient, stratified single- or two-phase coolant flow in primary coolant piping, inhomogeneous mixing of emergency coolant water or boric acid with hot primary coolant, and water hammer. These are well-documented phenomena associated with plant transients but are generally not captured in system codes. They are, however, generally limited to specific components, structures, and operating conditions. The second ESTAP task is to similarly augment a severe (post-core damage) accident integral analysis code with high fidelity simulations that would allow investigation of multi-dimensional, multi-phase containment phenomena that are only treated approximately in established codes.

  1. Bats' avoidance of real and virtual objects: implications for the sonar coding of object size.

    PubMed

    Goerlitz, Holger R; Genzel, Daria; Wiegrebe, Lutz

    2012-01-01

    Fast movement in complex environments requires the controlled evasion of obstacles. Sonar-based obstacle evasion involves analysing the acoustic features of object-echoes (e.g., echo amplitude) that correlate with this object's physical features (e.g., object size). Here, we investigated sonar-based obstacle evasion in bats emerging in groups from their day roost. Using video-recordings, we first show that the bats evaded a small real object (ultrasonic loudspeaker) despite the familiar flight situation. Secondly, we studied the sonar coding of object size by adding a larger virtual object. The virtual object echo was generated by real-time convolution of the bats' calls with the acoustic impulse response of a large spherical disc and played from the loudspeaker. Contrary to the real object, the virtual object did not elicit evasive flight, despite the spectro-temporal similarity of real and virtual object echoes. Yet, their spatial echo features differ: virtual object echoes lack the spread of angles of incidence from which the echoes of large objects arrive at a bat's ears (sonar aperture). We hypothesise that this mismatch of spectro-temporal and spatial echo features caused the lack of virtual object evasion and suggest that the sonar aperture of object echoscapes contributes to the sonar coding of object size. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Information Environments

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Naiman, Cynthia

    2003-01-01

    The objective of the GRC CNIS/IE work is to build a plug-and-play infrastructure that provides the Grand Challenge Applications with a suite of tools for coupling codes together, numerically zooming between codes of different fidelity, and deploying these simulations onto the Information Power Grid. The GRC CNIS/IE work will streamline and improve this process by providing tighter integration of the various tools through object-oriented design of component models and data objects, and through the use of CORBA (Common Object Request Broker Architecture).

  3. Multi-class geospatial object detection and geographic image classification based on collection of part detectors

    NASA Astrophysics Data System (ADS)

    Cheng, Gong; Han, Junwei; Zhou, Peicheng; Guo, Lei

    2014-12-01

    The rapid development of remote sensing technology has facilitated the acquisition of remote sensing images with higher and higher spatial resolution, but how to automatically understand the image contents is still a big challenge. In this paper, we develop a practical and rotation-invariant framework for multi-class geospatial object detection and geographic image classification based on a collection of part detectors (COPD). The COPD is composed of a set of representative and discriminative part detectors, where each part detector is a linear support vector machine (SVM) classifier used for the detection of objects or recurring spatial patterns within a certain range of orientation. Specifically, when performing multi-class geospatial object detection, we learn a set of seed-based part detectors where each part detector corresponds to a particular viewpoint of an object class, so the collection of them provides a solution for rotation-invariant detection of multi-class objects. When performing geographic image classification, we utilize a large number of pre-trained part detectors to discover distinctive visual parts from images and use them as attributes to represent the images. Comprehensive evaluations on two remote sensing image databases and comparisons with some state-of-the-art approaches demonstrate the effectiveness and superiority of the developed framework.
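
    The rotation-invariance trick, scoring a window with one linear detector per viewpoint and taking the maximum, reduces to a few lines. A numpy sketch with random stand-ins for the trained part detectors and window descriptors:

        import numpy as np

        rng = np.random.default_rng(0)
        n_views, dim = 8, 64                  # one part detector per orientation
        W = rng.normal(size=(n_views, dim))   # stand-in for trained SVM weights
        b = np.zeros(n_views)                 # stand-in for SVM biases

        def detect(feature_vec, threshold=2.0):
            # Max over viewpoint-specific scores makes the decision
            # independent of the object's in-plane rotation.
            scores = W @ feature_vec + b
            best = int(np.argmax(scores))
            return scores[best] > threshold, best

        window = rng.normal(size=dim)         # stand-in for a window descriptor
        hit, view = detect(window)
        print(f"detected={hit}, best viewpoint={view}")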

  4. The Implementation of Satellite Attitude Control System Software Using Object Oriented Design

    NASA Technical Reports Server (NTRS)

    Reid, W. Mark; Hansell, William; Phillips, Tom; Anderson, Mark O.; Drury, Derek

    1998-01-01

    NASA established the Small Explorer (SMEX) program in 1988 to provide frequent opportunities for highly focused and relatively inexpensive space science missions. The SMEX program has produced five satellites, three of which have been successfully launched. The remaining two spacecraft are scheduled for launch within the coming year. NASA has recently developed a prototype for the next generation Small Explorer spacecraft (SMEX-Lite). This paper describes the object-oriented design (OOD) of the SMEX-Lite Attitude Control System (ACS) software. The SMEX-Lite ACS is three-axis controlled and is capable of performing sub-arc-minute pointing. This paper first describes high-level requirements governing the SMEX-Lite ACS software architecture. Next, the context in which the software resides is explained. The paper describes the principles of encapsulation, inheritance, and polymorphism with respect to the implementation of an ACS software system. This paper will also discuss the design of several ACS software components. Specifically, object-oriented designs are presented for sensor data processing, attitude determination, attitude control, and failure detection. Finally, this paper will address the establishment of the ACS Foundation Class (AFC) Library. The AFC is a large software repository, requiring a minimal amount of code modification to produce ACS software for future projects.
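
    A small Python sketch of the encapsulation and polymorphism principles the paper applies to sensor data processing; the sensor types and conversion factors are invented for illustration:

        from abc import ABC, abstractmethod

        class Sensor(ABC):
            # Encapsulation: raw acquisition and engineering-unit conversion
            # are hidden behind one public read() method.
            def read(self):
                return self._convert(self._acquire_raw())
            @abstractmethod
            def _acquire_raw(self): ...
            @abstractmethod
            def _convert(self, raw): ...

        class SunSensor(Sensor):
            def _acquire_raw(self):
                return (512, 480)                            # stand-in counts
            def _convert(self, raw):
                return tuple((c - 512) * 0.01 for c in raw)  # counts -> degrees

        class Magnetometer(Sensor):
            def _acquire_raw(self):
                return (100, -50, 300)                       # stand-in counts
            def _convert(self, raw):
                return tuple(c * 1e-9 for c in raw)          # counts -> tesla

        def attitude_inputs(sensors):
            # Polymorphism: attitude determination iterates over the base
            # class and never names a concrete sensor type.
            return [s.read() for s in sensors]

        print(attitude_inputs([SunSensor(), Magnetometer()]))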

  5. Subjective Wellbeing, Objective Wellbeing and Inequality in Australia

    PubMed Central

    Western, Mark

    2016-01-01

    In recent years policy makers and social scientists have devoted considerable attention to wellbeing, a concept that refers to people’s capacity to live healthy, creative and fulfilling lives. Two conceptual approaches dominate wellbeing research. The objective approach examines the objective components of a good life. The subjective approach examines people’s subjective evaluations of their lives. In the objective approach how subjective wellbeing relates to objective wellbeing is not a relevant research question. The subjective approach does investigate how objective wellbeing relates to subjective wellbeing, but has focused primarily on one objective wellbeing indicator, income, rather than the comprehensive indicator set implied by the objective approach. This paper attempts to contribute by examining relationships between a comprehensive set of objective wellbeing measures and subjective wellbeing, and by linking wellbeing research to inequality research by also investigating how subjective and objective wellbeing relate to class, gender, age and ethnicity. We use three waves of a representative state-level household panel study from Queensland, Australia, undertaken from 2008 to 2010, to investigate how objective measures of wellbeing are socially distributed by gender, class, age, and ethnicity. We also examine relationships between objective wellbeing and overall life satisfaction, providing one of the first longitudinal analyses linking objective wellbeing with subjective evaluations. Objective aspects of wellbeing are unequally distributed by gender, age, class and ethnicity and are strongly associated with life satisfaction. Moreover, associations between gender, ethnicity, class and life satisfaction persist after controlling for objective wellbeing, suggesting that mechanisms in addition to objective wellbeing link structural dimensions of inequality to life satisfaction. PMID:27695042

  6. Subjective Wellbeing, Objective Wellbeing and Inequality in Australia.

    PubMed

    Western, Mark; Tomaszewski, Wojtek

    2016-01-01

    In recent years policy makers and social scientists have devoted considerable attention to wellbeing, a concept that refers to people's capacity to live healthy, creative and fulfilling lives. Two conceptual approaches dominate wellbeing research. The objective approach examines the objective components of a good life. The subjective approach examines people's subjective evaluations of their lives. In the objective approach how subjective wellbeing relates to objective wellbeing is not a relevant research question. The subjective approach does investigate how objective wellbeing relates to subjective wellbeing, but has focused primarily on one objective wellbeing indicator, income, rather than the comprehensive indicator set implied by the objective approach. This paper attempts to contribute by examining relationships between a comprehensive set of objective wellbeing measures and subjective wellbeing, and by linking wellbeing research to inequality research by also investigating how subjective and objective wellbeing relate to class, gender, age and ethnicity. We use three waves of a representative state-level household panel study from Queensland, Australia, undertaken from 2008 to 2010, to investigate how objective measures of wellbeing are socially distributed by gender, class, age, and ethnicity. We also examine relationships between objective wellbeing and overall life satisfaction, providing one of the first longitudinal analyses linking objective wellbeing with subjective evaluations. Objective aspects of wellbeing are unequally distributed by gender, age, class and ethnicity and are strongly associated with life satisfaction. Moreover, associations between gender, ethnicity, class and life satisfaction persist after controlling for objective wellbeing, suggesting that mechanisms in addition to objective wellbeing link structural dimensions of inequality to life satisfaction.

  7. A Study of Neutron Leakage in Finite Objects

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2015-01-01

    A computationally efficient 3DHZETRN code capable of simulating High charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation was recently developed for simple shielded objects. Monte Carlo (MC) benchmarks were used to verify the 3DHZETRN methodology in slab and spherical geometry, and it was shown that 3DHZETRN agrees with MC codes to the degree that various MC codes agree among themselves. One limitation in the verification process is that all of the codes (3DHZETRN and three MC codes) utilize different nuclear models/databases. In the present report, the new algorithm, with well-defined convergence criteria, is used to quantify the neutron leakage from simple geometries to provide means of verifying 3D effects and to provide guidance for further code development.

  8. Short-Block Protograph-Based LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher

    2010-01-01

    Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve a low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.
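
    For intuition about iterative LDPC decoding: the decoders above use belief propagation, but the simpler hard-decision bit-flipping variant below shows the same decode-until-the-syndrome-clears loop on a toy parity-check matrix (not a protograph design):

        import numpy as np

        H = np.array([[1, 1, 0, 1, 0, 0],      # toy parity-check matrix,
                      [0, 1, 1, 0, 1, 0],      # not a protograph design
                      [1, 0, 0, 0, 1, 1],
                      [0, 0, 1, 1, 0, 1]])

        def bit_flip_decode(r, H, max_iter=20):
            # Flip the bit involved in the most unsatisfied parity checks
            # until the syndrome is all-zero (or we give up).
            c = r.copy()
            for _ in range(max_iter):
                syndrome = H @ c % 2
                if not syndrome.any():
                    return c, True
                fails = H[syndrome == 1].sum(axis=0)
                c[np.argmax(fails)] ^= 1
            return c, False

        received = np.zeros(6, dtype=int)       # all-zero word is a codeword
        received[2] ^= 1                        # inject a single bit error
        print(bit_flip_decode(received, H))     # -> (array of zeros, True)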

  9. Joint Geophysical Inversion With Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelievre, P. G.; Bijani, R.; Farquharson, C. G.

    2015-12-01

    Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class are standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class of problems are also mesh-based but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods. This includes the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used but these can be ameliorated using parallelization and problem dimension reduction strategies.
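
    At the core of any PMOGO method is the Pareto dominance test that decides which models the suite retains. A minimal numpy sketch, minimizing in both objectives, with toy objective values:

        import numpy as np

        def pareto_front(points):
            # Keep each point not dominated by any other: dominated means
            # another point is <= in every objective and < in at least one.
            pts = np.asarray(points, dtype=float)
            keep = []
            for i, p in enumerate(pts):
                others = np.delete(pts, i, axis=0)
                dominated = np.any(np.all(others <= p, axis=1) &
                                   np.any(others < p, axis=1))
                if not dominated:
                    keep.append(i)
            return pts[keep]

        # two objectives per model: (data misfit, regularization term)
        candidates = [(1.0, 9.0), (2.0, 4.0), (3.0, 5.0), (4.0, 1.0), (2.5, 3.9)]
        print(pareto_front(candidates))   # (3.0, 5.0) is dominated and dropped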

  10. Luminous Herbig-Haro objects from a massive protostar: The unique case of HH 80/81

    NASA Astrophysics Data System (ADS)

    Reipurth, Bo

    2017-08-01

    Herbig-Haro (HH) objects are the optical manifestations of shock waves excited by outflows from young stars. They represent one of the few classes of spatially extended astronomical objects where both structural changes and proper motions can be measured on time scales of years to decades. HH 80/81 is a pair of HH objects in Sagittarius which are the intrinsically most luminous HH objects known. The driving source of HH 80/81 is the embedded star IRAS 18162-2048, which has a luminosity of 20,000 Lsun and excites a compact HII region, suggesting that it is a newborn massive star. HH objects associated with massive young stars are very rare; only a handful of cases are known. What makes the HH 80/81 source unique among massive protostars is that it produces a finely collimated bipolar radio jet with extremely high velocity, pointing straight at HH 80/81. We propose to observe the HH 80/81 complex with WFC3 and the following four filters: Halpha 6563, Hbeta 4861, [SII] 6717/31, and [OIII] 5007. First epoch HST images were obtained 22 years ago, which now allows a very precise determination of proper motions. Groundbased optical and radio proper motions are not only uncertain, but actually contradict each other, a controversy that will be resolved by HST. The fine resolution of WFC3 allows a study of both fine structural details and structural changes of the shocks. Finally we will use a sophisticated adaptive grid code to interpret the (de-reddened) line ratios across the shocks.

  11. Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream

    PubMed Central

    Douglas, Danielle; Newsome, Rachel N; Man, Louisa LY

    2018-01-01

    A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. PMID:29393853
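
    The comparison described, neural pattern similarity against behavior-based visual and conceptual models, is a representational similarity analysis. A toy sketch with random stand-ins for the fMRI patterns and the two behavioral models:

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        rng = np.random.default_rng(1)
        n_objects, n_voxels = 20, 100
        patterns = rng.normal(size=(n_objects, n_voxels))  # stand-in fMRI data

        # Neural representational dissimilarity matrix (condensed form).
        neural_rdm = pdist(patterns, metric="correlation")

        # Stand-ins for the behavior-based visual and conceptual models.
        visual_rdm = rng.random(neural_rdm.shape)
        conceptual_rdm = rng.random(neural_rdm.shape)

        # Integrative coding would appear as a reliable relation to both models.
        for name, model in (("visual", visual_rdm), ("conceptual", conceptual_rdm)):
            rho, p = spearmanr(neural_rdm, model)
            print(f"{name}: rho={rho:.3f}, p={p:.3f}")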

  12. Characterization of Non-coding DNA Satellites Associated with Sweepoviruses (Genus Begomovirus, Geminiviridae) – Definition of a Distinct Class of Begomovirus-Associated Satellites

    PubMed Central

    Lozano, Gloria; Trenado, Helena P.; Fiallo-Olivé, Elvira; Chirinos, Dorys; Geraud-Pouey, Francis; Briddon, Rob W.; Navas-Castillo, Jesús

    2016-01-01

    Begomoviruses (family Geminiviridae) are whitefly-transmitted, plant-infecting single-stranded DNA viruses that cause crop losses throughout the warmer parts of the World. Sweepoviruses are a phylogenetically distinct group of begomoviruses that infect plants of the family Convolvulaceae, including sweet potato (Ipomoea batatas). Two classes of subviral molecules are often associated with begomoviruses, particularly in the Old World; the betasatellites and the alphasatellites. An analysis of sweet potato and Ipomoea indica samples from Spain and Merremia dissecta samples from Venezuela identified small non-coding subviral molecules in association with several distinct sweepoviruses. The sequences of 18 clones were obtained and found to be structurally similar to tomato leaf curl virus-satellite (ToLCV-sat, the first DNA satellite identified in association with a begomovirus), with a region with significant sequence identity to the conserved region of betasatellites, an A-rich sequence, a predicted stem–loop structure containing the nonanucleotide TAATATTAC, and a second predicted stem–loop. These sweepovirus-associated satellites join an increasing number of ToLCV-sat-like non-coding satellites identified recently. Although sharing some features with betasatellites, evidence is provided to suggest that the ToLCV-sat-like satellites are distinct from betasatellites and should be considered a separate class of satellites, for which the collective name deltasatellites is proposed. PMID:26925037

  13. Management System for EMR Work Study Program.

    ERIC Educational Resources Information Center

    Columbia County Board of Public Instruction, Lake City, FL. Exceptional Child Education Dept.

    A computerized information management system involving the specification of objectives, the coding of teacher evaluations of students, and a variety of possible outputs has been used in a work study program for educable mentally retarded adolescents. Instructional objectives are specified and coded by number and category. Evaluation is by means of…

  14. Diagnostic Coding of Abuse Related Fractures at Two Children's Emergency Departments

    ERIC Educational Resources Information Center

    Somji, Zeeshanefatema; Plint, Amy; McGahern, Candice; Al-Saleh, Ahmed; Boutis, Kathy

    2011-01-01

    Objectives: Pediatric fractures suspicious for abuse are often evaluated in emergency departments (ED), although corresponding diagnostic coding for possible abuse may be lacking. Thus, the primary objective of this study was to determine the proportion of fracture cases investigated in the ED for abuse that had corresponding International…

  15. Prescription Drug Abuse Information in D.A.R.E.

    ERIC Educational Resources Information Center

    Morris, Melissa C.; Cline, Rebecca J. Welch; Weiler, Robert M.; Broadway, S. Camille

    2006-01-01

    This investigation was designed to examine prescription drug-related content and learning objectives in Drug Abuse Resistance Education (D.A.R.E.) for upper elementary and middle schools. Specific prescription-drug topics and context associated with content and objectives were coded. The coding system for topics included 126 topics organized…

  16. Coding Location: The View from Toddler Studies

    ERIC Educational Resources Information Center

    Huttenlocher, Janellen

    2008-01-01

    The ability to locate objects in the environment is adaptively important for mobile organisms. Research on location coding reveals that even toddlers have considerable spatial skill. Important information has been obtained using a disorientation task in which children watch a target object being hidden and are then blindfolded and rotated so they…

  17. Making the Undergraduate Classroom into a Policy Think Tank: Reflections from a Field Methods Class

    ERIC Educational Resources Information Center

    Broughton, Chad

    2011-01-01

    This article examines the opportunities and limitations presented by organizing an undergraduate field research methods class as a policy think tank working for a government client. Organized as such, the course had both the learning objectives of a traditional undergraduate methods class and the corporate objectives of a policy think tank (i.e.,…

  18. Cooperative optimization and their application in LDPC codes

    NASA Astrophysics Data System (ADS)

    Chen, Ke; Rong, Jian; Zhong, Xiaochun

    2008-10-01

    Cooperative optimization is a new way of finding global optima of complicated functions of many variables. The proposed algorithm is a class of message passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For (6561, 4096) LDPC codes, the proposed algorithm achieves 2.0 dB gains over the sum-product algorithm at a BER of 4×10⁻⁷. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it achieves a much lower error floor once Eb/N0 exceeds 1.8 dB.

  19. 26 CFR 1.922-1 - Requirements that a corporation must satisfy to be a FSC or a small FSC.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... the Caribbean Basin Economic Recovery Act (Code section 274(h)(6)(C)), or (B) a bilateral income tax... counted as a shareholder. (g) Class of stock. Q-7. What is preferred stock for purposes of determining... liquidation. Q-8. Can a corporation have outstanding more than one class of common stock? A-8. Yes. However...

  20. Object-Oriented Design for Sparse Direct Solvers

    NASA Technical Reports Server (NTRS)

    Dobrian, Florin; Kumfert, Gary; Pothen, Alex

    1999-01-01

    We discuss the object-oriented design of a software package for solving sparse, symmetric systems of equations (positive definite and indefinite) by direct methods. At the highest layers, we decouple data structure classes from algorithmic classes for flexibility. We describe the important structural and algorithmic classes in our design, and discuss the trade-offs we made for high performance. The kernels at the lower layers were optimized by hand. Our results show no performance loss from our object-oriented design, while providing flexibility, ease of use, and extensibility over solvers using procedural design.

  1. A Multi-modal, Discriminative and Spatially Invariant CNN for RGB-D Object Labeling.

    PubMed

    Asif, Umar; Bennamoun, Mohammed; Sohel, Ferdous

    2017-08-30

    While deep convolutional neural networks have shown remarkable success in image classification, the problems of inter-class similarities, intra-class variances, the effective combination of multimodal data, and the spatial variability in images of objects remain major challenges. To address these problems, this paper proposes a novel framework to learn a discriminative and spatially invariant classification model for object and indoor scene recognition using multimodal RGB-D imagery. This is achieved through three postulates: 1) spatial invariance - this is achieved by combining a spatial transformer network with a deep convolutional neural network to learn features which are invariant to spatial translations, rotations, and scale changes, 2) high discriminative capability - this is achieved by introducing Fisher encoding within the CNN architecture to learn features which have small inter-class similarities and large intra-class compactness, and 3) multimodal hierarchical fusion - this is achieved through the regularization of semantic segmentation to a multi-modal CNN architecture, where class probabilities are estimated at different hierarchical levels (i.e., image and pixel levels), and fused into a Conditional Random Field (CRF)-based inference hypothesis, the optimization of which produces consistent class labels in RGB-D images. Extensive experimental evaluations on RGB-D object and scene datasets, and live video streams (acquired from Kinect) show that our framework produces superior object and scene classification results compared to the state-of-the-art methods.

  2. 40 CFR 86.1542 - Information required.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...), fuel system (including number of carburetors, number of carburetor barrels, fuel injection type and fuel tank(s) capacity and location), engine code, gross vehicle weight rating, inertia weight class and...

  3. Euglena gracilis chloroplast DNA: analysis of a 1.6 kb intron of the psb C gene containing an open reading frame of 458 codons.

    PubMed

    Montandon, P E; Vasserot, A; Stutz, E

    1986-01-01

    We retrieved a 1.6 kbp intron separating two exons of the psb C gene which codes for the 44 kDa reaction center protein of photosystem II. This intron is 3 to 4 times the size of all previously sequenced Euglena gracilis chloroplast introns. It contains an open reading frame of 458 codons potentially coding for a basic protein of 54 kDa of yet unknown function. The intron boundaries follow consensus sequences established for chloroplast introns related to class II and nuclear pre-mRNA introns. Its 3'-terminal segment has structural features similar to class II mitochondrial introns with an invariant base A as possible branch point for lariat formation.

  4. Positive predictive value between medical-chart body-mass-index category and obesity versus codes in a claims-data warehouse.

    PubMed

    Caplan, Eleanor O; Kamble, Pravin S; Harvey, Raymond A; Smolarz, B Gabriel; Renda, Andrew; Bouchard, Jonathan R; Huang, Joanna C

    2018-01-01

    To evaluate the positive predictive value of claims-based V85 codes for identifying individuals with varying degrees of BMI relative to their measured BMI obtained from medical record abstraction. This was a retrospective validation study utilizing administrative claims and medical chart data from 1 January 2009 to 31 August 2015. Randomly selected samples of patients enrolled in a Medicare Advantage Prescription Drug (MAPD) or commercial health plan and with a V85 claim were identified. The claims-based BMI category (underweight, normal weight, overweight, obese class I-III) was determined via corresponding V85 codes and compared to the BMI category derived from chart abstracted height, weight and/or BMI. The positive predictive values (PPVs) of the claims-based BMI categories were calculated with the corresponding 95% confidence intervals (CIs). The overall PPVs (95% CIs) in the MAPD and commercial samples were 90.3% (86.3%-94.4%) and 91.1% (87.3%-94.9%), respectively. In each BMI category, the PPVs (95% CIs) for the MAPD and commercial samples, respectively, were: underweight, 71.0% (55.0%-87.0%) and 75.9% (60.3%-91.4%); normal, 93.8% (85.4%-100%) and 87.8% (77.8%-97.8%); overweight, 97.4% (92.5%-100%) and 93.5% (84.9%-100%); obese class I, 96.9 (90.9%-100%) and 97.2% (91.9%-100%); obese class II, 97.0% (91.1%-100%) and 93.0% (85.4%-100%); and obese class III, 85.0% (73.3%-96.1%) and 97.1% (91.4%-100%). BMI categories derived from administrative claims, when available, can be used successfully particularly in the context of obesity research.
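
    The reported values are simple proportions with 95% confidence intervals. A sketch of the computation, assuming (as the intervals above suggest) a normal approximation; the example counts are invented:

        from math import sqrt

        def ppv_ci(true_positives, total_flagged, z=1.96):
            # PPV = TP / (TP + FP), with a normal-approximation 95% CI.
            p = true_positives / total_flagged
            half = z * sqrt(p * (1 - p) / total_flagged)
            return p, max(0.0, p - half), min(1.0, p + half)

        # e.g. 90 of 100 sampled V85-coded records matching the chart BMI class
        ppv, lo, hi = ppv_ci(90, 100)
        print(f"PPV {ppv:.1%} (95% CI {lo:.1%}-{hi:.1%})")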

  5. The Rodin-Ohno hypothesis that two enzyme superfamilies descended from one ancestral gene: an unlikely scenario for the origins of translation that will not be dismissed

    PubMed Central

    2014-01-01

    Background Because amino acid activation is rate-limiting for uncatalyzed protein synthesis, it is a key puzzle in understanding the origin of the genetic code. Two unrelated classes (I and II) of contemporary aminoacyl-tRNA synthetases (aaRS) now translate the code. Observing that codons for the most highly conserved, Class I catalytic peptides, when read in the reverse direction, are very nearly anticodons for Class II defining catalytic peptides, Rodin and Ohno proposed that the two superfamilies descended from opposite strands of the same ancestral gene. This unusual hypothesis languished for a decade, perhaps because it appeared to be unfalsifiable. Results The proposed sense/antisense alignment makes important predictions. Fragments that align in antiparallel orientations, and contain the respective active sites, should catalyze the same two reactions catalyzed by contemporary synthetases. Recent experiments confirmed that prediction. Invariant cores from both classes, called Urzymes after Ur = primitive, authentic, plus enzyme and representing ~20% of the contemporary structures, can be expressed and exhibit high, proportionate rate accelerations for both amino-acid activation and tRNA acylation. A major fraction (60%) of the catalytic rate acceleration by contemporary synthetases resides in segments that align sense/antisense. Bioinformatic evidence for sense/antisense ancestry extends to codons specifying the invariant secondary and tertiary structures outside the active sites of the two synthetase classes. Peptides from a designed, 46-residue gene constrained by Rosetta to encode Class I and II ATP binding sites with fully complementary sequences both accelerate amino acid activation by ATP ~400 fold. Conclusions Biochemical and bioinformatic results substantially enhance the posterior probability that ancestors of the two synthetase classes arose from opposite strands of the same ancestral gene. The remarkable acceleration by short peptides of the rate-limiting step in uncatalyzed protein synthesis, together with the synergy of synthetase Urzymes and their cognate tRNAs, introduce a new paradigm for the origin of protein catalysts, emphasize the potential relevance of an operational RNA code embedded in the tRNA acceptor stems, and challenge the RNA-World hypothesis. Reviewers This article was reviewed by Dr. Paul Schimmel (nominated by Laura Landweber), Dr. Eugene Koonin and Professor David Ardell. PMID:24927791

  6. Molecular Evolution of Trehalose-6-Phosphate Synthase (TPS) Gene Family in Populus, Arabidopsis and Rice

    PubMed Central

    Yang, Hai-Ling; Liu, Yan-Jing; Wang, Cai-Ling; Zeng, Qing-Yin

    2012-01-01

    Trehalose-6-phosphate synthase (TPS) plays important roles in trehalose metabolism and signaling. Plant TPS proteins contain both a TPS and a trehalose-6-phosphate phosphatase (TPP) domain, which are coded by a multi-gene family. The plant TPS gene family has been divided into class I and class II. A previous study showed that the Populus, Arabidopsis, and rice genomes have seven class I and 27 class II TPS genes. In this study, we found that all class I TPS genes had 16 introns within the protein-coding region, whereas class II TPS genes had two introns. A significant sequence difference between the two classes of TPS proteins was observed by pairwise sequence comparisons of the 34 TPS proteins. A phylogenetic analysis revealed that at least seven TPS genes were present in the monocot–dicot common ancestor. Segmental duplications contributed significantly to the expansion of this gene family. At least five and three TPS genes were created by segmental duplication events in the Populus and rice genomes, respectively. Both the TPS and TPP domains of 34 TPS genes have evolved under purifying selection, but the selective constraint on the TPP domain was more relaxed than that on the TPS domain. Among 34 TPS genes from Populus, Arabidopsis, and rice, four class I TPS genes (AtTPS1, OsTPS1, PtTPS1, and PtTPS2) were under stronger purifying selection, whereas three Arabidopsis class I TPS genes (AtTPS2, 3, and 4) apparently evolved under relaxed selective constraint. Additionally, a reverse transcription polymerase chain reaction analysis showed the expression divergence of the TPS gene family in Populus, Arabidopsis, and rice under normal growth conditions and in response to stressors. Our findings provide new insights into the mechanisms of gene family expansion and functional evolution. PMID:22905132

  7. Molecular evolution of trehalose-6-phosphate synthase (TPS) gene family in Populus, Arabidopsis and rice.

    PubMed

    Yang, Hai-Ling; Liu, Yan-Jing; Wang, Cai-Ling; Zeng, Qing-Yin

    2012-01-01

    Trehalose-6-phosphate synthase (TPS) plays important roles in trehalose metabolism and signaling. Plant TPS proteins contain both a TPS and a trehalose-6-phosphate phosphatase (TPP) domain, which are coded by a multi-gene family. The plant TPS gene family has been divided into class I and class II. A previous study showed that the Populus, Arabidopsis, and rice genomes have seven class I and 27 class II TPS genes. In this study, we found that all class I TPS genes had 16 introns within the protein-coding region, whereas class II TPS genes had two introns. A significant sequence difference between the two classes of TPS proteins was observed by pairwise sequence comparisons of the 34 TPS proteins. A phylogenetic analysis revealed that at least seven TPS genes were present in the monocot-dicot common ancestor. Segmental duplications contributed significantly to the expansion of this gene family. At least five and three TPS genes were created by segmental duplication events in the Populus and rice genomes, respectively. Both the TPS and TPP domains of 34 TPS genes have evolved under purifying selection, but the selective constraint on the TPP domain was more relaxed than that on the TPS domain. Among 34 TPS genes from Populus, Arabidopsis, and rice, four class I TPS genes (AtTPS1, OsTPS1, PtTPS1, and PtTPS2) were under stronger purifying selection, whereas three Arabidopsis class I TPS genes (AtTPS2, 3, and 4) apparently evolved under relaxed selective constraint. Additionally, a reverse transcription polymerase chain reaction analysis showed the expression divergence of the TPS gene family in Populus, Arabidopsis, and rice under normal growth conditions and in response to stressors. Our findings provide new insights into the mechanisms of gene family expansion and functional evolution.

  8. A Computational Model for Observation in Quantum Mechanics.

    DTIC Science & Technology

    1987-03-16

    [Recovered table-of-contents fragments] Interferometer experiment; The EPR Paradox experiment (2.3); The Computational Model, an Overview (3); Implementation (4), including code for the EPR paradox experiment (4.4) and the double slit interferometer experiment (4.5); Conclusions (5). A surviving text fragment notes that the EPR paradox experiment (see section 2.3) is hard to resolve with this class of models, collectively called hidden…

  9. Unitary reconstruction of secret for stabilizer-based quantum secret sharing

    NASA Astrophysics Data System (ADS)

    Matsumoto, Ryutaroh

    2017-08-01

    We propose a unitary procedure to reconstruct the quantum secret in a quantum secret sharing scheme constructed from stabilizer quantum error-correcting codes. Erasure correcting procedures for stabilizer codes need to add missing shares in order to reconstruct the quantum secret, while unitary reconstruction procedures for a certain class of quantum secret sharing schemes are known to work without adding missing shares. The proposed procedure also works without adding missing shares.

  10. 49 CFR 173.242 - Bulk packagings for certain medium hazard liquids and solids, including solids with dual hazards.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... provisions specified in column 7 of the § 172.101 table. (a) Rail cars: Class DOT 103, 104, 105, 109, 111... IM 101, IM 102, and UN portable tanks when a T Code is specified in Column (7) of the § 172.101... authorized according to the IBC packaging code specified for the specific hazardous material in Column (7) of...

  11. 49 CFR 173.242 - Bulk packagings for certain medium hazard liquids and solids, including solids with dual hazards.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... provisions specified in column 7 of the § 172.101 table. (a) Rail cars: Class DOT 103, 104, 105, 109, 111...; Specification IM 101, IM 102, and UN portable tanks when a T Code is specified in Column (7) of the § 172.101... authorized according to the IBC packaging code specified for the specific hazardous material in Column (7) of...

  12. 49 CFR 173.242 - Bulk packagings for certain medium hazard liquids and solids, including solids with dual hazards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... provisions specified in column 7 of the § 172.101 table. (a) Rail cars: Class DOT 103, 104, 105, 109, 111... IM 101, IM 102, and UN portable tanks when a T Code is specified in Column (7) of the § 172.101... authorized according to the IBC packaging code specified for the specific hazardous material in Column (7) of...

  13. 49 CFR 173.242 - Bulk packagings for certain medium hazard liquids and solids, including solids with dual hazards.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... provisions specified in column 7 of the § 172.101 table. (a) Rail cars: Class DOT 103, 104, 105, 109, 111... IM 101, IM 102, and UN portable tanks when a T Code is specified in Column (7) of the § 172.101... authorized according to the IBC packaging code specified for the specific hazardous material in Column (7) of...

  14. 49 CFR 173.242 - Bulk packagings for certain medium hazard liquids and solids, including solids with dual hazards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... provisions specified in column 7 of the § 172.101 table. (a) Rail cars: Class DOT 103, 104, 105, 109, 111... IM 101, IM 102, and UN portable tanks when a T Code is specified in Column (7) of the § 172.101... authorized according to the IBC packaging code specified for the specific hazardous material in Column (7) of...

  15. Proposal for a new content model for the Austrian Procedure Catalogue.

    PubMed

    Neururer, Sabrina B; Pfeiffer, Karl P

    2013-01-01

    The Austrian Procedure Catalogue is used for procedure coding in Austria. Its architecture and content have some major weaknesses. The aim of this study is to present a new potential content model for this classification system, consisting of the main characteristics of health interventions. It is visualized using a UML class diagram. Based on this proposal, an implementation of an ontology for procedure coding is planned.

  16. The functional spectrum of low-frequency coding variation.

    PubMed

    Marth, Gabor T; Yu, Fuli; Indap, Amit R; Garimella, Kiran; Gravel, Simon; Leong, Wen Fung; Tyler-Smith, Chris; Bainbridge, Matthew; Blackwell, Tom; Zheng-Bradley, Xiangqun; Chen, Yuan; Challis, Danny; Clarke, Laura; Ball, Edward V; Cibulskis, Kristian; Cooper, David N; Fulton, Bob; Hartl, Chris; Koboldt, Dan; Muzny, Donna; Smith, Richard; Sougnez, Carrie; Stewart, Chip; Ward, Alistair; Yu, Jin; Xue, Yali; Altshuler, David; Bustamante, Carlos D; Clark, Andrew G; Daly, Mark; DePristo, Mark; Flicek, Paul; Gabriel, Stacey; Mardis, Elaine; Palotie, Aarno; Gibbs, Richard

    2011-09-14

    Rare coding variants constitute an important class of human genetic variation, but are underrepresented in current databases that are based on small population samples. Recent studies show that variants altering amino acid sequence and protein function are enriched at low variant allele frequency, 2 to 5%, but because of insufficient sample size it is not clear if the same trend holds for rare variants below 1% allele frequency. The 1000 Genomes Exon Pilot Project has collected deep-coverage exon-capture data in roughly 1,000 human genes, for nearly 700 samples. Although medical whole-exome projects are currently afoot, this is still the deepest reported sampling of a large number of human genes with next-generation technologies. According to the goals of the 1000 Genomes Project, we created effective informatics pipelines to process and analyze the data, and discovered 12,758 exonic SNPs, 70% of them novel, and 74% below 1% allele frequency in the seven population samples we examined. Our analysis confirms that coding variants below 1% allele frequency show increased population-specificity and are enriched for functional variants. This study represents a large step toward detecting and interpreting low frequency coding variation, clearly lays out technical steps for effective analysis of DNA capture data, and articulates functional and population properties of this important class of genetic variation.
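
    The frequency classes discussed above come straight from allele counts. A minimal sketch of the bookkeeping, with thresholds taken from the text and diploid samples assumed; the genotype vector is invented:

        def allele_frequency(genotypes):
            # genotypes: per-sample alternate-allele counts (0, 1 or 2).
            return sum(genotypes) / (2 * len(genotypes))

        def frequency_class(af):
            if af < 0.01:
                return "rare (<1%)"
            if af <= 0.05:
                return "low-frequency (1-5%)"
            return "common (>5%)"

        genotypes = [0] * 690 + [1] * 9 + [2]   # ~700 samples, 11 alt alleles
        af = allele_frequency(genotypes)
        print(f"AF={af:.4f}: {frequency_class(af)}")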

  17. Functional annotation of the vlinc class of non-coding RNAs using systems biology approach

    PubMed Central

    Laurent, Georges St.; Vyatkin, Yuri; Antonets, Denis; Ri, Maxim; Qi, Yao; Saik, Olga; Shtokalo, Dmitry; de Hoon, Michiel J.L.; Kawaji, Hideya; Itoh, Masayoshi; Lassmann, Timo; Arner, Erik; Forrest, Alistair R.R.; Nicolas, Estelle; McCaffrey, Timothy A.; Carninci, Piero; Hayashizaki, Yoshihide; Wahlestedt, Claes; Kapranov, Philipp

    2016-01-01

    Establishing the functionality of the non-coding transcripts encoded by the human genome is a coveted goal of modern genomics research. While this goal has commonly been pursued with the classical methods of forward genetics, integration of different genomics datasets in a global Systems Biology fashion presents a more productive avenue toward this very complex aim. Here we report the application of a Systems Biology-based approach to dissect the functionality of a newly identified vast class of very long intergenic non-coding (vlinc) RNAs. Using the highly quantitative FANTOM5 CAGE dataset, we show that these RNAs can be grouped into 1542 novel human genes based on analysis of insulators, which we show here indeed function as genomic barrier elements. We show that vlincRNA genes likely function in cis to activate nearby genes. This effect, while most pronounced in closely spaced vlincRNA–gene pairs, can be detected over relatively large genomic distances. Furthermore, we identified 101 vlincRNA genes likely involved in early embryogenesis based on patterns of their expression and regulation. We also found another 109 such genes potentially involved in cellular functions that also occur at early stages of development, such as proliferation, migration and apoptosis. Overall, we show that Systems Biology-based methods have great promise for the functional annotation of non-coding RNAs. PMID:27001520

  18. Object Based Numerical Zooming Between the NPSS Version 1 and a 1-Dimensional Meanline High Pressure Compressor Design Analysis Code

    NASA Technical Reports Server (NTRS)

    Follen, G.; Naiman, C.; auBuchon, M.

    2000-01-01

    Within NASA's High Performance Computing and Communication (HPCC) program, NASA Glenn Research Center is developing an environment for the analysis/design of propulsion systems for aircraft and space vehicles called the Numerical Propulsion System Simulation (NPSS). The NPSS focuses on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer, along with the concept of numerical zooming between 0-dimensional and 1-, 2-, and 3-dimensional component engine codes. The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Current "state-of-the-art" engine simulations are 0-dimensional in that there is no axial, radial or circumferential resolution within a given component (e.g. a compressor or turbine has no internal station designations). In these 0-dimensional cycle simulations the individual component performance characteristics typically come from a table look-up (map) with adjustments for off-design effects such as variable geometry, Reynolds effects, and clearances. Zooming one or more of the engine components to a higher order, physics-based analysis means a higher order code is executed and the results from this analysis are used to adjust the 0-dimensional component performance characteristics within the system simulation. By drawing on the results from more predictive, physics-based higher order analysis codes, "cycle" simulations are refined to closely model and predict the complex physical processes inherent to engines. As part of the overall development of the NPSS, NASA and industry began the process of defining and implementing an object class structure that enables numerical zooming between the NPSS Version 1 (0-dimensional) and higher order 1-, 2- and 3-dimensional analysis codes. The NPSS Version 1 preserves the historical cycle engineering practices but also extends these classical practices into the area of numerical zooming for use within a company's design system. What follows here is a description of successfully zooming 1-dimensional (row-by-row) high pressure compressor results back to an NPSS Version 1 (0-dimensional) engine simulation, and a discussion of the results illustrated using an advanced data visualization tool. This type of high fidelity system-level analysis, made possible by the zooming capability of the NPSS, will greatly improve the fidelity of the engine system simulation and enable the engine system to be "pre-validated" prior to commitment to engine hardware.
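
    Numerical zooming, as described, swaps a component's table look-up value for the result returned by the higher-order code and lets the cycle re-balance around it. A schematic Python sketch; the component, map values, and 1-D result are invented:

        class Compressor0D:
            # 0-D cycle component: performance comes from a map (table look-up).
            def __init__(self):
                self.map = {0.8: 0.84, 1.0: 0.86, 1.2: 0.85}  # speed -> efficiency
                self.adder = 0.0          # correction supplied by zooming

            def efficiency(self, speed):
                return self.map[speed] + self.adder

        def meanline_efficiency(speed):
            # Stand-in for the row-by-row 1-D meanline HPC analysis.
            return 0.851

        def zoom(component, speed):
            # Adjust the 0-D characteristics to match the higher-order result.
            component.adder = meanline_efficiency(speed) - component.map[speed]

        hpc = Compressor0D()
        print("map value: ", hpc.efficiency(1.0))     # 0.86 from the map
        zoom(hpc, 1.0)
        print("after zoom:", hpc.efficiency(1.0))     # 0.851 from the 1-D code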

  19. Developing CORBA-Based Distributed Scientific Applications From Legacy Fortran Programs

    NASA Technical Reports Server (NTRS)

    Sang, Janche; Kim, Chan; Lopez, Isaac

    2000-01-01

    An efficient methodology is presented for integrating legacy applications written in Fortran into a distributed object framework. Issues and strategies regarding the conversion and decomposition of Fortran codes into Common Object Request Broker Architecture (CORBA) objects are discussed. Fortran codes are modified as little as possible as they are decomposed into modules and wrapped as objects. A new conversion tool takes the Fortran application as input and generates the C/C++ header file and Interface Definition Language (IDL) file. In addition, the performance of the client server computing is evaluated.

  20. Metrics and tools for consistent cohort discovery and financial analyses post-transition to ICD-10-CM.

    PubMed

    Boyd, Andrew D; Li, Jianrong John; Kenost, Colleen; Joese, Binoy; Yang, Young Min; Kalagidis, Olympia A; Zenku, Ilir; Saner, Donald; Bahroos, Neil; Lussier, Yves A

    2015-05-01

    In the United States, International Classification of Disease Clinical Modification (ICD-9-CM, the ninth revision) diagnosis codes are commonly used to identify patient cohorts and to conduct financial analyses related to disease. In October 2015, the healthcare system of the United States will transition to ICD-10-CM (the tenth revision) diagnosis codes. One challenge posed to clinical researchers and other analysts is conducting diagnosis-related queries across datasets containing both coding schemes. Further, healthcare administrators will manage growth, trends, and strategic planning with these dually-coded datasets. The majority of the ICD-9-CM to ICD-10-CM translations are complex and nonreciprocal, creating convoluted representations and meanings. Similarly, mapping back from ICD-10-CM to ICD-9-CM is equally complex, yet different from mapping forward, as relationships are likewise nonreciprocal. Indeed, 10 of the 21 top clinical categories are complex, as 78% of their diagnosis codes are labeled as "convoluted" by our analyses. Analysis and research related to external causes of morbidity, injury, and poisoning will face the greatest challenges due to 41 745 (90%) convolutions and a decrease in the number of codes. We created a web portal tool and translation tables to list all ICD-9-CM diagnosis codes related to the specific input of ICD-10-CM diagnosis codes and their level of complexity: "identity" (reciprocal), "class-to-subclass," "subclass-to-class," "convoluted," or "no mapping." These tools provide guidance on ambiguous and complex translations to reveal where reports or analyses may be challenging to impossible. Web portal: http://www.lussierlab.org/transition-to-ICD9CM/. Tables annotated with levels of translation complexity: http://www.lussierlab.org/publications/ICD10to9. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
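
    The five complexity labels can be assigned mechanically from forward and backward mapping tables. A simplified sketch of that logic; the classification rules are a plain reading of the categories above, and the example codes are illustrative only:

        def translation_complexity(icd9, forward, backward):
            # forward: ICD-9 -> set of ICD-10 codes; backward: the reverse map.
            targets = forward.get(icd9, set())
            if not targets:
                return "no mapping"
            if len(targets) == 1:
                t = next(iter(targets))
                if backward.get(t) == {icd9}:
                    return "identity"           # one-to-one and reciprocal
                return "subclass-to-class"      # several ICD-9 codes collapse
            if all(backward.get(t) == {icd9} for t in targets):
                return "class-to-subclass"      # one ICD-9 code splits cleanly
            return "convoluted"                 # overlapping many-to-many web

        forward = {"250.00": {"E11.9"},
                   "V85.1": {"Z68.1"},
                   "493.90": {"J45.901", "J45.909"}}
        backward = {"E11.9": {"250.00"},
                    "Z68.1": {"V85.1", "V85.0"},
                    "J45.901": {"493.90"}, "J45.909": {"493.90"}}
        for code in ("250.00", "V85.1", "493.90"):
            print(code, "->", translation_complexity(code, forward, backward))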

  1. A data set for evaluating the performance of multi-class multi-object video tracking

    NASA Astrophysics Data System (ADS)

    Chakraborty, Avishek; Stamatescu, Victor; Wong, Sebastien C.; Wigley, Grant; Kearney, David

    2017-05-01

    One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publicly available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically have only ground truth track IDs, while classification video data sets have only ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow the evaluation of both tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contain the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. This will provide a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and a means of assessing classification performance using the Neovision2 metrics. These data have been hosted publicly.
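
    A per-frame annotation combining track identity and class label might look like the following sketch (Python; the field names are assumptions for illustration, not the published file format).

```python
from dataclasses import dataclass

@dataclass
class GroundTruthEntry:
    frame: int         # frame index within the 871-frame video
    object_id: int     # unique track ID, stable across occlusions
    class_label: str   # one of: Car, Bus, Truck, Person, Cyclist
    x: float           # bounding-box left (pixels)
    y: float           # bounding-box top (pixels)
    w: float           # bounding-box width
    h: float           # bounding-box height

entry = GroundTruthEntry(frame=0, object_id=17, class_label="Person",
                         x=412.0, y=233.0, w=18.0, h=46.0)
```

    Keeping both IDs in one record is what allows the same annotation set to feed MOT-style tracking metrics and per-frame classification metrics.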

  2. 76 FR 1059 - Publicly Available Mass Market Encryption Software and Other Specified Publicly Available...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-07

    .... 100108014-0121-01] RIN 0694-AE82 Publicly Available Mass Market Encryption Software and Other Specified Publicly Available Encryption Software in Object Code AGENCY: Bureau of Industry and Security, Commerce... encryption object code software with a symmetric key length greater than 64-bits, and ``publicly available...

  3. Representation of Gravity-Aligned Scene Structure in Ventral Pathway Visual Cortex.

    PubMed

    Vaziri, Siavash; Connor, Charles E

    2016-03-21

    The ventral visual pathway in humans and non-human primates is known to represent object information, including shape and identity [1]. Here, we show the ventral pathway also represents scene structure aligned with the gravitational reference frame in which objects move and interact. We analyzed shape tuning of recently described macaque monkey ventral pathway neurons that prefer scene-like stimuli to objects [2]. Individual neurons did not respond to a single shape class, but to a variety of scene elements that are typically aligned with gravity: large planes in the orientation range of ground surfaces under natural viewing conditions, planes in the orientation range of ceilings, and extended convex and concave edges in the orientation range of wall/floor/ceiling junctions. For a given neuron, these elements tended to share a common alignment in eye-centered coordinates. Thus, each neuron integrated information about multiple gravity-aligned structures as they would be seen from a specific eye and head orientation. This eclectic coding strategy provides only ambiguous information about individual structures but explicit information about the environmental reference frame and the orientation of gravity in egocentric coordinates. In the ventral pathway, this could support perceiving and/or predicting physical events involving objects subject to gravity, recognizing object attributes like animacy based on movement not caused by gravity, and/or stabilizing perception of the world against changes in head orientation [3-5]. Our results, like the recent discovery of object weight representation [6], imply that the ventral pathway is involved not just in recognition, but also in physical understanding of objects and scenes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Automated Patent Categorization and Guided Patent Search using IPC as Inspired by MeSH and PubMed.

    PubMed

    Eisinger, Daniel; Tsatsaronis, George; Bundschus, Markus; Wieneke, Ulrich; Schroeder, Michael

    2013-04-15

    Document search on PubMed, the pre-eminent database for biomedical literature, relies on the annotation of its documents with relevant terms from the Medical Subject Headings ontology (MeSH) for improving recall through query expansion. Patent documents are another important information source, though they are considerably less accessible. One option to expand patent search beyond pure keywords is the inclusion of classification information: Since every patent is assigned at least one class code, it should be possible for these assignments to be automatically used in a similar way as the MeSH annotations in PubMed. In order to develop a system for this task, it is necessary to have a good understanding of the properties of both classification systems. This report describes our comparative analysis of MeSH and the main patent classification system, the International Patent Classification (IPC). We investigate the hierarchical structures as well as the properties of the terms/classes respectively, and we compare the assignment of IPC codes to patents with the annotation of PubMed documents with MeSH terms. Our analysis shows a strong structural similarity of the hierarchies, but significant differences of terms and annotations. The low number of IPC class assignments and the lack of occurrences of class labels in patent texts imply that current patent search is severely limited. To overcome these limits, we evaluate a method for the automated assignment of additional classes to patent documents, and we propose a system for guided patent search based on the use of class co-occurrence information and external resources.

  5. Automated Patent Categorization and Guided Patent Search using IPC as Inspired by MeSH and PubMed

    PubMed Central

    2013-01-01

    Document search on PubMed, the pre-eminent database for biomedical literature, relies on the annotation of its documents with relevant terms from the Medical Subject Headings ontology (MeSH) for improving recall through query expansion. Patent documents are another important information source, though they are considerably less accessible. One option to expand patent search beyond pure keywords is the inclusion of classification information: Since every patent is assigned at least one class code, it should be possible for these assignments to be automatically used in a similar way as the MeSH annotations in PubMed. In order to develop a system for this task, it is necessary to have a good understanding of the properties of both classification systems. This report describes our comparative analysis of MeSH and the main patent classification system, the International Patent Classification (IPC). We investigate the hierarchical structures as well as the properties of the terms/classes respectively, and we compare the assignment of IPC codes to patents with the annotation of PubMed documents with MeSH terms. Our analysis shows a strong structural similarity of the hierarchies, but significant differences of terms and annotations. The low number of IPC class assignments and the lack of occurrences of class labels in patent texts imply that current patent search is severely limited. To overcome these limits, we evaluate a method for the automated assignment of additional classes to patent documents, and we propose a system for guided patent search based on the use of class co-occurrence information and external resources. PMID:23734562

  6. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos-Villalobos, Hector J; Gregor, Jens; Bingham, Philip R

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.

  7. SILHOUETTE - HIDDEN LINE COMPUTER CODE WITH GENERALIZED SILHOUETTE SOLUTION

    NASA Technical Reports Server (NTRS)

    Hedgley, D. R.

    1994-01-01

    Flexibility in choosing how to display computer-generated three-dimensional drawings has become increasingly important in recent years. A major consideration is the enhancement of the realism and aesthetics of the presentation. A polygonal representation of objects, even with hidden lines removed, is not always desirable. A more pleasing pictorial representation often can be achieved by removing some of the remaining visible lines, thus creating silhouettes (or outlines) of selected surfaces of the object. Additionally, it should be noted that this silhouette feature allows warped polygons. This means that any polygon can be decomposed into constituent triangles. Considering these triangles as members of the same family will present a polygon with no interior lines, and thus removes the restriction of flat polygons. SILHOUETTE is a program for calligraphic drawings that can render any subset of polygons as a silhouette with respect to itself. The program is flexible enough to be applicable to every class of object. SILHOUETTE offers all possible combinations of silhouette and nonsilhouette specifications for an arbitrary solid. Thus, it is possible to enhance the clarity of any three-dimensional scene presented in two dimensions. Input to the program can be line segments or polygons. Polygons designated with the same number will be drawn as a silhouette of those polygons. SILHOUETTE is written in FORTRAN 77 and requires a graphics package such as DI-3000. The program has been implemented on a DEC VAX series computer running VMS and used 65K of virtual memory without a graphics package linked in. The source code is intended to be machine independent. This program is available on a 5.25 inch 360K MS-DOS format diskette (standard distribution) and is also available on a 9-track 1600 BPI ASCII CARD IMAGE magnetic tape. SILHOUETTE was developed in 1986 and was last updated in 1992.
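
    For readers unfamiliar with the underlying geometry, the sketch below (Python/NumPy) shows the classic silhouette-edge test, not SILHOUETTE's calligraphic algorithm itself: an edge shared by two faces lies on the silhouette when the faces point in opposite directions relative to the view direction.

```python
import numpy as np

def face_normal(v0, v1, v2):
    """Normal of a triangle given its vertices (not normalized)."""
    return np.cross(v1 - v0, v2 - v0)

def is_silhouette_edge(na, nb, view_dir):
    """True when the faces with normals na, nb straddle the view."""
    return np.dot(na, view_dir) * np.dot(nb, view_dir) < 0

v = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, -1]], float)
na = face_normal(v[0], v[1], v[2])   # faces toward +z
nb = face_normal(v[0], v[3], v[1])   # faces toward -y
print(is_silhouette_edge(na, nb, np.array([0.3, 0.2, 1.0])))  # True
```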

  8. nRC: non-coding RNA Classifier based on structural features.

    PubMed

    Fiannaca, Antonino; La Rosa, Massimo; La Paglia, Laura; Rizzo, Riccardo; Urso, Alfonso

    2017-01-01

    Non-coding RNAs (ncRNAs) are small non-coding sequences involved in gene expression regulation of many biological processes and diseases. The recent discovery of a large set of different ncRNAs with biologically relevant roles has opened the way to developing methods able to discriminate between the different ncRNA classes. Moreover, the lack of knowledge about the complete mechanisms of regulative processes, together with the development of high-throughput technologies, has required the help of bioinformatics tools to provide biologists and clinicians with a deeper comprehension of the functional roles of ncRNAs. In this work, we introduce a new ncRNA classification tool, nRC (non-coding RNA Classifier). Our approach is based on feature extraction from the ncRNA secondary structure together with a supervised classification algorithm implementing a deep learning architecture based on convolutional neural networks. We tested our approach on the classification of 13 different ncRNA classes and obtained classification scores using the most common statistical measures. In particular, we reached an accuracy and sensitivity of about 74%. The proposed method outperforms other similar classification methods based on secondary structure features and machine learning algorithms, including the RNAcon tool that, to date, is the reference classifier. The nRC tool is freely available as a docker image at https://hub.docker.com/r/tblab/nrc/. The source code of the nRC tool is also available at https://github.com/IcarPA-TBlab/nrc.
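
    To illustrate the general shape of such a classifier (not the nRC architecture; the layer sizes and the 400-element feature length are assumptions), here is a minimal convolutional network over fixed-length secondary-structure feature vectors for 13 classes, in PyTorch.

```python
import torch
import torch.nn as nn

class SmallNcRnaCnn(nn.Module):
    """Toy 1-D CNN over structure-derived feature vectors."""
    def __init__(self, n_features=400, n_classes=13):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),
            nn.Linear(32 * 8, n_classes),
        )

    def forward(self, x):        # x: (batch, 1, n_features)
        return self.net(x)

logits = SmallNcRnaCnn()(torch.randn(4, 1, 400))
print(logits.shape)              # torch.Size([4, 13])
```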

  9. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with mesh-adaptive direct search and real-coded genetic algorithms. The aim is to estimate the real-valued parameters and the non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
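
    The brute-force enumeration baseline mentioned above is easy to sketch (Python with statsmodels; the synthetic series and order bounds are placeholders, and this uses a generic ARIMA fit rather than the paper's Kalman-filter/MINLP machinery).

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = rng.standard_normal(200).cumsum() * 0.1 + rng.standard_normal(200)

best = None
for p in range(4):                    # AR order candidates
    for q in range(4):                # MA order candidates
        try:
            fit = ARIMA(y, order=(p, 0, q)).fit()
        except Exception:
            continue                  # skip fits that fail to converge
        if best is None or fit.aic < best[0]:
            best = (fit.aic, p, q)

print("best (AIC, p, q):", best)
```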

  10. Landscape object-based analysis of wetland plant functional types: the effects of spatial scale, vegetation classes and classifier methods

    NASA Astrophysics Data System (ADS)

    Dronova, I.; Gong, P.; Wang, L.; Clinton, N.; Fu, W.; Qi, S.

    2011-12-01

    Remote sensing-based vegetation classifications representing plant function, such as photosynthesis and productivity, are challenging in wetlands with complex cover and difficult field access. Recent advances in object-based image analysis (OBIA) and machine-learning algorithms offer new classification tools; however, few comparisons of different algorithms and spatial scales have been discussed to date. We applied OBIA to delineate wetland plant functional types (PFTs) for Poyang Lake, the largest freshwater lake in China and a Ramsar wetland conservation site, from a 30-m Landsat TM scene at the peak of the spring growing season. We targeted major PFTs (C3 grasses, C3 forbs and different types of C4 grasses and aquatic vegetation) that are both key players in the system's biogeochemical cycles and critical providers of waterbird habitat. Classification results were compared among: a) several object segmentation scales (with average object sizes 900-9000 m2); b) several families of statistical classifiers (including Bayesian, Logistic, Neural Network, Decision Trees and Support Vector Machines); and c) two hierarchical levels of vegetation classification, a generalized 3-class set and a more detailed 6-class set. We found that classification benefited from the object-based approach, which allowed the inclusion of object shape, texture and context descriptors in classification. While a number of classifiers achieved high accuracy at the finest pixel-equivalent segmentation scale, the highest accuracies and best agreement among algorithms occurred at coarser object scales. No single classifier was consistently superior across all scales, although selected algorithms of the Neural Network, Logistic and K-Nearest Neighbors families frequently provided the best discrimination of classes at different scales. The choice of vegetation categories also affected classification accuracy. The 6-class set allowed for higher individual class accuracies but lower overall accuracies than the 3-class set, because individual classes differed in the scales at which they were best discriminated from others. Main classification challenges included a) the presence of C3 grasses in C4-grass areas, particularly following harvesting of C4 reeds, and b) mixtures of emergent, floating and submerged aquatic plants at sub-object and sub-pixel scales. We conclude that OBIA with advanced statistical classifiers offers useful instruments for landscape vegetation analyses, and that spatial scale considerations are critical in mapping PFTs, while multi-scale comparisons can be used to guide class selection. Future work will further apply fuzzy classification and field-collected spectral data for PFT analysis and compare results with MODIS PFT products.
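
    In the same spirit as the study's multi-classifier comparison, the following sketch (Python/scikit-learn; the synthetic features stand in for per-object spectral, shape, texture and context descriptors) cross-validates several classifier families on object-level data.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# stand-in for object descriptors at one segmentation scale, 3 PFT classes
X, y = make_classification(n_samples=600, n_features=12, n_classes=3,
                           n_informative=6, random_state=1)

classifiers = [("logistic", LogisticRegression(max_iter=1000)),
               ("k-NN", KNeighborsClassifier()),
               ("neural net", MLPClassifier(max_iter=2000, random_state=1)),
               ("decision tree", DecisionTreeClassifier(random_state=1))]

for name, clf in classifiers:
    print(name, round(cross_val_score(clf, X, y, cv=5).mean(), 2))
```

    Repeating the loop over feature sets extracted at several segmentation scales would reproduce the kind of scale-by-classifier grid the study reports.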

  11. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems. Task 2: Unsteady ducted propfan analysis computer program users manual

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Delaney, Robert A.; Bettner, James L.

    1991-01-01

    The primary objective of this study was the development of a time-dependent three-dimensional Euler/Navier-Stokes aerodynamic analysis to predict unsteady compressible transonic flows about ducted and unducted propfan propulsion systems at angle of attack. The computer codes resulting from this study are referred to as Advanced Ducted Propfan Analysis Codes (ADPAC). This report is intended to serve as a computer program user's manual for the ADPAC developed under Task 2 of NASA Contract NAS3-25270, Unsteady Ducted Propfan Analysis. Aerodynamic calculations were based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. A time-accurate implicit residual smoothing operator was utilized for unsteady flow predictions. For unducted propfans, a single H-type grid was used to discretize each blade passage of the complete propeller. For ducted propfans, a coupled system of five grid blocks utilizing an embedded C-grid about the cowl leading edge was used to discretize each blade passage. Grid systems were generated by a combined algebraic/elliptic algorithm developed specifically for ducted propfans. Numerical calculations were compared with experimental data for both ducted and unducted propfan flows. The solution scheme demonstrated efficiency and accuracy comparable with other schemes of this class.

  12. Modeling the Galaxy-Halo Connection: An open-source approach with Halotools

    NASA Astrophysics Data System (ADS)

    Hearin, Andrew

    2016-03-01

    Although the modern form of galaxy-halo modeling has been in place for over ten years, there exists no common code base for carrying out large-scale structure calculations. Considering, for example, the advances in CMB science made possible by Boltzmann-solvers such as CMBFast, CAMB and CLASS, there are clear precedents for how theorists working in a well-defined subfield can mutually benefit from such a code base. Motivated by these and other examples, I present Halotools: an open-source, object-oriented python package for building and testing models of the galaxy-halo connection. Halotools is community-driven, and already includes contributions from over a dozen scientists spread across numerous universities. Designed with high-speed performance in mind, the package generates mock observations of synthetic galaxy populations with sufficient speed to conduct expansive MCMC likelihood analyses over a diverse and highly customizable set of models. The package includes an automated test suite and extensive web-hosted documentation and tutorials (halotools.readthedocs.org). I conclude the talk by describing how Halotools can be used to analyze existing datasets to obtain robust and novel constraints on galaxy evolution models, and by outlining the Halotools program to prepare the field of cosmology for the arrival of Stage IV dark energy experiments.
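
    A short usage sketch of the package's documented workflow follows (Python; based on the Halotools pattern of pre-built models and cached catalogs — consult halotools.readthedocs.org, since names and defaults may differ by version, and the catalog must be downloaded beforehand).

```python
from halotools.empirical_models import PrebuiltHodModelFactory
from halotools.sim_manager import CachedHaloCatalog

halocat = CachedHaloCatalog(simname="bolshoi", redshift=0)  # cached N-body halos
model = PrebuiltHodModelFactory("zheng07", threshold=-20)   # pre-built HOD model
model.populate_mock(halocat)                                # synthetic galaxies
print(len(model.mock.galaxy_table))
```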

  13. Episodic accretion: the interplay of infall and disc instabilities

    NASA Astrophysics Data System (ADS)

    Kuffmeier, Michael; Frimann, Søren; Jensen, Sigurd S.; Haugbølle, Troels

    2018-04-01

    Using zoom-simulations carried out with the adaptive mesh-refinement code RAMSES with a dynamic range of up to 2^27 ≈ 1.34 × 10^8, we investigate the accretion profiles around six stars embedded in different environments inside a (40 pc)^3 giant molecular cloud, the role of mass infall and disc instabilities on the accretion profile, and thus on the luminosity of the forming protostar. Our results show that the environment in which the protostar is embedded determines the overall accretion profile of the protostar. Infall on to the circumstellar disc may trigger gravitational instabilities in the disc at distances of around ~10 to ~50 au, leading to rapid transport of angular momentum and strong accretion bursts. These bursts typically last for about ~10 to ~100 yr, consistent with typical orbital times at the location of the instability, and enhance the luminosity of the protostar. Calculations with the stellar evolution code MESA show that the accretion bursts induce significant changes in the protostellar properties, such as the stellar temperature and radius. We apply the obtained protostellar properties to produce synthetic observables with RADMC3D and predict that accretion bursts lead to observable enhancements around 20 to 200 μm in the spectral energy distribution of Class 0 young stellar objects.

  14. NPSS Space Team

    NASA Technical Reports Server (NTRS)

    Lavelle, Tom

    2003-01-01

    The objective is to increase the usability of the current NPSS code/architecture by incorporating an advanced space transportation propulsion system capability into the existing NPSS code, to begin defining advanced capabilities for NPSS, and to provide enhancements to the NPSS code/architecture.

  15. ERDC MSRC Resource. High Performance Computing for the Warfighter. Fall 2006

    DTIC Science & Technology

    2006-01-01

    ...referred to as Aggregated Combat Modeling, putting us at the campaign level).” ... Incorporating UIT within DAC: The DAC system is written in Python and uses ... API calls with two Python classes, UITConnectionFactory and UITConnection. UITConnectionFactory supports Kerberos authentication and establishes a ... API calls within these Python classes, we insulated the DAC code from the Python SOAP interface requirements and details of the ERDC MSRC Resource ...

  16. A Case Study of Engineering Ethics

    NASA Astrophysics Data System (ADS)

    Shimizu, Kazuo

    In the Engineering Ethics class at Shizuoka University, the Code of Ethics and Cases for Electrical Engineers by the IEEJ Ethics Committee is used to achieve a strong educational effect with a large class (140 students). In this paper, a case study from the class and survey results on the ethical values of the students are presented. In addition, some comments are given on role-playing exercises in which students act out the case as a virtual experience.

  17. An assessment of management history of damaged and undamaged trees 8 years after the ice storm in Rochester, New York, U.S.

    Treesearch

    Wayne C. Zipperer; Susan M. Sisinni; Jerry Bond; Chris Luley; Andrew G. Pleninger

    2004-01-01

    Maintenance records from Rochester, New York, U.S., were reviewed to evaluate the city's storm-related removal protocol and how maintenance varied by damage class. Maintenance codes assigned in 1991 were used to identify ice-storm damage classes based on percentage of crown loss. We evaluated seven species: Norway maple (Acer platanoides), silver maple (A. saccharinum), sugar maple (A....

  18. Applying Systems Engineering to Improve the Main Gas Turbine Exhaust System Maintenance Strategy for the CG-47 Ticonderoga Class Cruiser

    DTIC Science & Technology

    2015-09-01

    Report documentation excerpt (SF 298): author Sparks, Robert D.; subject terms include condition-based maintenance, condition-directed maintenance, failure finding, and fault tree analysis; 133 pages.

  19. 3D neutronic codes coupled with thermal-hydraulic system codes for PWR, BWR and VVER reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langenbuch, S.; Velkov, K.; Lizorkin, M.

    1997-07-01

    This paper describes the objectives of code development for coupling 3D neutronics codes with thermal-hydraulic system codes. The present status of coupling ATHLET with three 3D neutronics codes for VVER and LWR reactors is presented. After describing the basic features of the 3D neutronics codes BIPR-8 from the Kurchatov Institute, DYN3D from Research Center Rossendorf and QUABOX/CUBBOX from GRS, first applications of the coupled codes to different transient and accident scenarios are presented. The need for further investigations is discussed.

  20. An improved and validated RNA HLA class I SBT approach for obtaining full length coding sequences.

    PubMed

    Gerritsen, K E H; Olieslagers, T I; Groeneweg, M; Voorter, C E M; Tilanus, M G J

    2014-11-01

    The functional relevance of human leukocyte antigen (HLA) class I allele polymorphism beyond exons 2 and 3 is difficult to address because more than 70% of the HLA class I alleles are defined by exons 2 and 3 sequences only. For routine application on clinical samples we improved and validated the HLA sequence-based typing (SBT) approach based on RNA templates, using either a single locus-specific or two overlapping group-specific polymerase chain reaction (PCR) amplifications, with three forward and three reverse sequencing reactions for full length sequencing. Locus-specific HLA typing with RNA SBT of a reference panel, representing the major antigen groups, showed identical results compared to DNA SBT typing. Alleles encountered with unknown exons in the IMGT/HLA database and three samples, two with Null and one with a Low expressed allele, have been addressed by the group-specific RNA SBT approach to obtain full length coding sequences. This RNA SBT approach has proven its value in our routine full length definition of alleles. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Open source posturography.

    PubMed

    Rey-Martinez, Jorge; Pérez-Fernández, Nicolás

    2016-12-01

    Objective: To develop and validate posturography software and share its source code in open-source terms. Methods: Prospective non-randomized validation study: 20 consecutive adults underwent two balance assessment tests; six-condition posturography was performed using clinically approved software and force platform, and the same conditions were measured using the newly developed open-source software with a low-cost force platform. The intra-class correlation index of the sway area obtained from the center-of-pressure variations in both devices for the six conditions was the main variable used for validation. Results: Excellent concordance between RombergLab and the clinically approved force platform was obtained (intra-class correlation coefficient = 0.94), and a Bland and Altman graphic concordance plot was also obtained. Conclusions: The proposed validation goal of 0.9 in intra-class correlation coefficient was reached, so we consider the developed software (RombergLab) a validated balance assessment tool; its reliability depends on the technical specifications of the force platform used. The source code used to develop RombergLab has been published in open-source terms.

  2. HLA-F polymorphisms in a Euro-Brazilian population from Southern Brazil.

    PubMed

    Manvailer, L F S; Wowk, P F; Mattar, S B; da Siva, J S; da Graça Bicalho, M; Roxo, V M M S

    2014-12-01

    HLA-F is a non-classical major histocompatibility complex (MHC) gene. It encodes class Ib MHC molecules with restricted distribution and fewer nucleotide variations than MHC class Ia genes. Of the 22 alleles registered in the IMGT database, only four encode proteins that differ in their primary structure. To estimate genotype and allele frequencies, this study targeted the known protein-coding regions of the HLA-F gene. Genotyping was performed by Sequence-Based Typing (SBT). The sample was composed of 199 unrelated bone marrow donors from the Brazilian Bone Marrow Donor Registry (REDOME), Euro-Brazilians from Southern Brazil. About 1673 bp were analyzed. The most frequent allele was HLA-F*01:01 (87.19%), followed by HLA-F*01:03 (12.31%), HLA-F*01:02 (0.25%) and HLA-F*01:04 (0.25%). Significant linkage disequilibrium (LD) was verified between HLA-F and HLA class I and II alleles. This is the first study regarding HLA-F polymorphisms in a Euro-Brazilian population, contributing to the genetic characterization of Southern Brazil. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  3. Development of high-fidelity multiphysics system for light water reactor analysis

    NASA Astrophysics Data System (ADS)

    Magedanz, Jeffrey W.

    There has been a tendency in recent years toward greater heterogeneity in reactor cores, due to the use of mixed-oxide (MOX) fuel, burnable absorbers, and longer cycles with consequently higher fuel burnup. The resulting asymmetry of the neutron flux and energy spectrum between regions with different compositions causes a need to account for the directional dependence of the neutron flux, instead of the traditional diffusion approximation. Furthermore, the presence of both MOX and high-burnup fuel in the core increases the complexity of the heat conduction. The heat transfer properties of the fuel pellet change with irradiation, and the thermal and mechanical expansion of the pellet and cladding strongly affect the size of the gap between them, and its consequent thermal resistance. These operational tendencies require higher fidelity multi-physics modeling capabilities, and this need is addressed by the developments performed within this PhD research. The dissertation describes the development of a High-Fidelity Multi-Physics System for Light Water Reactor Analysis. It consists of three coupled codes -- CTF for Thermal Hydraulics, TORT-TD for Neutron Kinetics, and FRAPTRAN for Fuel Performance. It is meant to address these modeling challenges in three ways: (1) by resolving the state of the system at the level of each fuel pin, rather than homogenizing entire fuel assemblies, (2) by using the multi-group Discrete Ordinates method to account for the directional dependence of the neutron flux, and (3) by using a fuel-performance code, rather than a Thermal Hydraulics code's simplified fuel model, to account for the material behavior of the fuel and its feedback to the hydraulic and neutronic behavior of the system. While the first two are improvements, the third, the use of a fuel-performance code for feedback, constitutes an innovation in this PhD project. Also important to this work is the manner in which such coupling is written. While coupling involves combining codes into a single executable, they are usually still developed and maintained separately. It should thus be a design objective to minimize the changes to those codes, and keep the changes to each code free of dependence on the details of the other codes. This will ease the incorporation of new versions of the code into the coupling, as well as re-use of parts of the coupling to couple with different codes. In order to fulfill this objective, an interface for each code was created in the form of an object-oriented abstract data type. Object-oriented programming is an effective method for enforcing a separation between different parts of a program, and clarifying the communication between them. The interfaces enable the main program to control the codes in terms of high-level functionality. This differs from the established practice of a master/slave relationship, in which the slave code is incorporated into the master code as a set of subroutines. While this PhD research continues previous work with a coupling between CTF and TORT-TD, it makes two major original contributions: (1) using a fuel-performance code, instead of a thermal-hydraulics code's simplified built-in models, to model the feedback from the fuel rods, and (2) the design of an object-oriented interface as an innovative method to interact with a coupled code in a high-level, easily-understandable manner. 
The resulting code system will serve as a tool to study the question of under what conditions, and to what extent, these higher-fidelity methods will provide benefits to reactor core analysis. (Abstract shortened by UMI.)
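
    The abstract-interface idea described above can be sketched briefly (Python rather than the coupled Fortran/C codes; all method names are hypothetical): each physics code hides behind a small abstract data type, and the driver talks only to that interface.

```python
from abc import ABC, abstractmethod

class PhysicsCode(ABC):
    """Hypothetical high-level interface to one coupled code."""
    @abstractmethod
    def initialize(self, input_deck: str) -> None: ...
    @abstractmethod
    def advance(self, dt: float) -> None: ...
    @abstractmethod
    def get_field(self, name: str) -> list: ...
    @abstractmethod
    def set_field(self, name: str, values: list) -> None: ...

class ThermalHydraulics(PhysicsCode):
    """Stand-in wrapper for a CTF-like solver (internals elided)."""
    def initialize(self, input_deck): self.t = 0.0
    def advance(self, dt): self.t += dt
    def get_field(self, name): return [600.0]   # e.g. coolant temperature, K
    def set_field(self, name, values): pass

def coupled_step(codes, dt):
    """Driver advances every code; field exchange would follow."""
    for code in codes:
        code.advance(dt)
```

    The point of the pattern is that adding or swapping a code (a new fuel-performance model, say) changes only the wrapper, not the driver.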

  4. The CRONOS Code for Astrophysical Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Kissmann, R.; Kleimann, J.; Krebl, B.; Wiengarten, T.

    2018-06-01

    We describe the magnetohydrodynamics (MHD) code CRONOS, which has been used in astrophysics and space-physics studies in recent years. CRONOS has been designed to be easily adaptable to the problem in hand, where the user can expand or exchange core modules or add new functionality to the code. This modularity comes about through its implementation using a C++ class structure. The core components of the code include solvers for both hydrodynamical (HD) and MHD problems. These problems are solved on different rectangular grids, which currently support Cartesian, spherical, and cylindrical coordinates. CRONOS uses a finite-volume description with different approximate Riemann solvers that can be chosen at runtime. Here, we describe the implementation of the code with a view toward its ongoing development. We illustrate the code’s potential through several (M)HD test problems and some astrophysical applications.

  5. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.

  6. R classes and methods for SNP array data.

    PubMed

    Scharpf, Robert B; Ruczinski, Ingo

    2010-01-01

    The Bioconductor project is an "open source and open development software project for the analysis and comprehension of genomic data" (1), primarily based on the R programming language. Infrastructure packages, such as Biobase, are maintained by Bioconductor core developers and serve several key roles to the broader community of Bioconductor software developers and users. In particular, Biobase introduces an S4 class, the eSet, for high-dimensional assay data. Encapsulating the assay data as well as meta-data on the samples, features, and experiment in the eSet class definition ensures propagation of the relevant sample and feature meta-data throughout an analysis. Extending the eSet class promotes code reuse through inheritance as well as interoperability with other R packages and is less error-prone. Recently proposed class definitions for high-throughput SNP arrays extend the eSet class. This chapter highlights the advantages of adopting and extending Biobase class definitions through a working example of one implementation of classes for the analysis of high-throughput SNP arrays.

  7. Object-oriented productivity metrics

    NASA Technical Reports Server (NTRS)

    Connell, John L.; Eller, Nancy

    1992-01-01

    Software productivity metrics are useful for sizing and costing proposed software and for measuring development productivity. Estimating and measuring source lines of code (SLOC) has proven to be a bad idea because it encourages writing more lines of code and using lower level languages. Function Point Analysis is an improved software metric system, but it is not compatible with newer rapid prototyping and object-oriented approaches to software development. A process is presented here for counting object-oriented effort points, based on a preliminary object-oriented analysis. It is proposed that this approach is compatible with object-oriented analysis, design, programming, and rapid prototyping. Statistics gathered on actual projects are presented to validate the approach.

  8. Translating an AI application from Lisp to Ada: A case study

    NASA Technical Reports Server (NTRS)

    Davis, Gloria J.

    1991-01-01

    A set of benchmarks was developed to test the performance of a newly designed computer executing both Lisp and Ada. Among these was AutoClassII -- a large Artificial Intelligence (AI) application written in Common Lisp. The extraction of a representative subset of this complex application was aided by a Lisp Code Analyzer (LCA). The LCA enabled rapid analysis of the code, putting it in a concise and functionally readable form. An equivalent benchmark was created in Ada through manual translation of the Lisp version. A comparison of the execution results of both programs across a variety of compiler-machine combinations indicate that line-by-line translation coupled with analysis of the initial code can produce relatively efficient and reusable target code.

  9. Apparatus and method to achieve high-resolution microscopy with non-diffracting or refracting radiation

    DOEpatents

    Tobin, Jr., Kenneth W.; Bingham, Philip R.; Hawari, Ayman I.

    2012-11-06

    An imaging system employing a coded aperture mask having multiple pinholes is provided. The coded aperture mask is placed at a radiation source to pass the radiation through. The radiation impinges on, and passes through an object, which alters the radiation by absorption and/or scattering. Upon passing through the object, the radiation is detected at a detector plane to form an encoded image, which includes information on the absorption and/or scattering caused by the material and structural attributes of the object. The encoded image is decoded to provide a reconstructed image of the object. Because the coded aperture mask includes multiple pinholes, the radiation intensity is greater than a comparable system employing a single pinhole, thereby enabling a higher resolution. Further, the decoding of the encoded image can be performed to generate multiple images of the object at different distances from the detector plane. Methods and programs for operating the imaging system are also disclosed.

  10. 76 FR 5829 - Manufacturer of Controlled Substances; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-02

    ... bulk manufacturer of the following basic classes of controlled substances: Drug Schedule Marihuana.... In reference to drug code 7360 (Marihuana), the company plans to bulk manufacture cannabidiol as a...

  11. 76 FR 51401 - Manufacturer of Controlled Substances; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-18

    ... bulk manufacturer of the following basic classes of controlled substances: Drug Schedule Marihuana... development and for distribution to its customers. In reference to drug code 7360 (Marihuana), the company...

  12. Simulating cosmologies beyond ΛCDM with PINOCCHIO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rizzo, Luca A.; Villaescusa-Navarro, Francisco; Monaco, Pierluigi

    2017-01-01

    We present a method that extends the capabilities of the PINpointing Orbit-Crossing Collapsed HIerarchical Objects (PINOCCHIO) code, allowing it to generate accurate dark matter halo mock catalogues in cosmological models where the linear growth factor and the growth rate depend on scale. Such cosmologies comprise, among others, models with massive neutrinos and some classes of modified gravity theories. We validate the code by comparing the halo properties from PINOCCHIO against N-body simulations, focusing on cosmologies with massive neutrinos: νΛCDM. We analyse the halo mass function, halo two-point correlation function and halo power spectrum, showing that PINOCCHIO reproduces the results from simulations with the same level of precision as the original code (~5-10%). We demonstrate that the abundance of halos in cosmologies with massless and massive neutrinos from PINOCCHIO matches very well the outcome of simulations, and point out that PINOCCHIO can reproduce the Ω_ν-σ_8 degeneracy that affects the halo mass function. We finally show that the clustering properties of the halos from PINOCCHIO match accurately those from simulations both in real and redshift space, in the latter case up to k = 0.3 h Mpc^-1. We emphasize that the computational time required by PINOCCHIO to generate mock halo catalogues is orders of magnitude lower than that needed for N-body simulations. This makes this tool ideal for applications like covariance matrix studies within the standard ΛCDM model but also in cosmologies with massive neutrinos or some modified gravity theories.

  13. Multi-optimization Criteria-based Robot Behavioral Adaptability and Motion Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, Francois G.

    2002-06-01

    Robotic tasks are typically defined in Task Space (e.g., the 3-D world), whereas robots are controlled in Joint Space (motors). The transformation from Task Space to Joint Space must consider the task objectives (e.g., high precision, strength optimization, torque optimization), the task constraints (e.g., obstacles, joint limits, non-holonomic constraints, contact or tool task constraints), and the robot kinematics configuration (e.g., tools, type of joints, mobile platform, manipulator, modular additions, locked joints). Commercially available robots are optimized for a specific set of tasks, objectives and constraints and, therefore, their control codes are extremely specific to a particular set of conditions. Thus, there exist a multiplicity of codes, each handling a particular set of conditions, but none suitable for use on robots with widely varying tasks, objectives, constraints, or environments. On the other hand, most DOE missions and tasks are typically "batches of one". Attempting to use commercial codes for such work requires significant personnel and schedule costs for re-programming or adding code to the robots whenever a change in task objective, robot configuration, number and type of constraints, etc. occurs. The objective of our project is to develop a "generic code" to implement this Task-Space to Joint-Space transformation that would allow robot behavior adaptation, in real time (at loop rate), to changes in task objectives, number and type of constraints, modes of control, or kinematics configuration (e.g., new tools, added modules). Our specific goal is to develop a single code for the general solution of under-specified systems of algebraic equations that is suitable for solving the inverse kinematics of robots, is usable for all types of robots (mobile robots, manipulators, mobile manipulators, etc.) with no limitation on the number of joints and the number of controlled Task-Space variables, can adapt in real time to changes in the number and type of constraints and in task objectives, and can adapt to changes in kinematics configuration (change of module, change of tool, joint-failure adaptation, etc.).
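
    One generic way to resolve such an under-specified Task-Space to Joint-Space mapping is the Jacobian pseudoinverse with a null-space projection for a secondary criterion; the sketch below (Python/NumPy; toy Jacobian, not ORNL's code) illustrates only that standard construction.

```python
import numpy as np

def joint_rates(J, xdot, q, q_mid, k=0.1):
    """J: m x n task Jacobian (m < n); xdot: desired task velocity.
    Secondary objective: drive joints toward mid-range q_mid."""
    J_pinv = np.linalg.pinv(J)
    primary = J_pinv @ xdot                      # satisfies the task
    null_proj = np.eye(J.shape[1]) - J_pinv @ J  # null-space projector
    return primary + null_proj @ (k * (q_mid - q))

J = np.array([[1.0, 0.5, 0.0],                   # toy 2-task, 3-joint arm
              [0.0, 1.0, 1.0]])
print(joint_rates(J, np.array([0.1, 0.0]), np.zeros(3), np.zeros(3)))
```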

  14. Incoherent digital holograms acquired by interferenceless coded aperture correlation holography system without refractive lenses.

    PubMed

    Kumar, Manoj; Vijayakumar, A; Rosen, Joseph

    2017-09-14

    We present a lensless, interferenceless incoherent digital holography technique based on the principle of coded aperture correlation holography. The digital hologram acquired by this technique contains a three-dimensional image of the observed scene. Light diffracted by a point object (pinhole) is modulated using a random-like coded phase mask (CPM) and the intensity pattern is recorded and composed as a point spread hologram (PSH). A library of PSHs is created using the same CPM by moving the pinhole to all possible axial locations. Intensity diffracted through the same CPM from an object placed within the axial limits of the PSH library is recorded by a digital camera; the recorded intensity this time is composed as the object hologram. The image of the object at any axial plane is reconstructed by cross-correlating the object hologram with the corresponding component of the PSH library. The reconstruction noise attached to the image is suppressed by various methods. The reconstruction results of multiplane and thick objects obtained by this technique are compared with regular lens-based imaging.
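
    The correlation step can be illustrated with a toy model (Python/NumPy/SciPy; a random array stands in for the CPM-generated PSH, and the forward recording is simplified to a convolution).

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

rng = np.random.default_rng(0)
psh = rng.random((64, 64))        # stand-in PSH for one axial plane

obj = np.zeros((64, 64))
obj[20, 30] = 1.0                 # single point on the object plane

# toy recording: hologram = object convolved with the plane's PSH
hologram = convolve2d(obj, psh, mode="same")

# reconstruction: cross-correlate the hologram with the same PSH
recon = correlate2d(hologram, psh, mode="same")
print(np.unravel_index(recon.argmax(), recon.shape))  # peaks near (20, 30)
```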

  15. Developing CORBA-Based Distributed Scientific Applications from Legacy Fortran Programs

    NASA Technical Reports Server (NTRS)

    Sang, Janche; Kim, Chan; Lopez, Isaac

    2000-01-01

    Recent progress in distributed object technology has enabled software applications to be developed and deployed easily such that objects or components can work together across the boundaries of the network, different operating systems, and different languages. A distributed object is not necessarily a complete application but rather a reusable, self-contained piece of software that co-operates with other objects in a plug-and-play fashion via a well-defined interface. The Common Object Request Broker Architecture (CORBA), a middleware standard defined by the Object Management Group (OMG), uses the Interface Definition Language (IDL) to specify such an interface for transparent communication between distributed objects. Since IDL can be mapped to any programming language, such as C++, Java, Smalltalk, etc., existing applications can be integrated into a new application and hence the tasks of code re-writing and software maintenance can be reduced. Many scientific applications in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with CORBA objects can increase the codes' reusability. For example, scientists could link their scientific applications to vintage Fortran programs such as Partial Differential Equation (PDE) solvers in a plug-and-play fashion. Unfortunately, a CORBA IDL to Fortran mapping has not been proposed and there seems to be no direct method of generating CORBA objects from Fortran without resorting to manually written C/C++ wrappers. In this paper, we present an efficient methodology to integrate Fortran legacy programs into a distributed object framework. Issues and strategies regarding the conversion and decomposition of Fortran codes into CORBA objects are discussed; a diagram in the paper shows the conversion and decomposition mechanism we proposed. Our goal is to keep the Fortran codes unmodified. The conversion-aided tool takes the Fortran application program as input and helps programmers generate the C/C++ header file and IDL file for wrapping the Fortran code. Programmers need to determine by themselves how to decompose the legacy application into several reusable components based on the cohesion and coupling factors among the functions and subroutines. However, the programming effort still can be greatly reduced because function headings and types have been converted to C++ and IDL styles. Most Fortran applications use the COMMON block to facilitate the transfer of a large number of variables among several functions; the COMMON block plays a role similar to that of global variables in C. In a CORBA-compliant programming environment, global variables cannot be used to pass values between objects. One approach to dealing with this problem is to put the COMMON variables into the parameter list. We do not adopt this approach because it requires modification of the Fortran source code, which violates our design considerations. Our approach is to extract the COMMON blocks and convert them into a structure-typed attribute in C++. Through attributes, each component can initialize the variables and return the computation results back to the client. We have successfully tested the proposed conversion methodology based on the f2c converter. Since f2c only translates Fortran to C, we still needed to edit the converted code to meet C++ and IDL syntax. For example, C++/IDL requires a tag in the structure type, while C does not. In this paper, we identify the necessary changes to the f2c converter in order to directly generate the C++ header and the IDL file. Our future work is to add a GUI interface to ease the decomposition task by simply dragging and dropping icons.

  16. Bio++: a set of C++ libraries for sequence analysis, phylogenetics, molecular evolution and population genetics.

    PubMed

    Dutheil, Julien; Gaillard, Sylvain; Bazin, Eric; Glémin, Sylvain; Ranwez, Vincent; Galtier, Nicolas; Belkhir, Khalid

    2006-04-04

    A large number of bioinformatics applications in the fields of bio-sequence analysis, molecular evolution and population genetics typically share input/output methods, data storage requirements and data analysis algorithms. Such common features may be conveniently bundled into re-usable libraries, which enable the rapid development of new methods and robust applications. We present Bio++, a set of object-oriented libraries written in C++. Available components include classes for data storage and handling (nucleotide/amino-acid/codon sequences, trees, distance matrices, population genetics datasets), various input/output formats, basic sequence manipulation (concatenation, transcription, translation, etc.), phylogenetic analysis (maximum parsimony, Markov models, distance methods, likelihood computation and maximization), population genetics/genomics (diversity statistics, neutrality tests, various multi-locus analyses) and various algorithms for numerical calculus. The implementation of methods aims to be both efficient and user-friendly. Special concern was given to the library design to enable easy extension and new method development. We defined a general hierarchy of classes that allows developers to implement their own algorithms while remaining compatible with the rest of the libraries. The Bio++ source code is distributed free of charge under the CeCILL general public licence from its website http://kimura.univ-montp2.fr/BioPP.

  17. Learning to distinguish similar objects

    NASA Astrophysics Data System (ADS)

    Seibert, Michael; Waxman, Allen M.; Gove, Alan N.

    1995-04-01

    This paper describes how the similarities and differences among similar objects can be discovered during learning to facilitate recognition. The application domain is single views of flying model aircraft captured in silhouette by a CCD camera. The approach was motivated by human psychovisual and monkey neurophysiological data. The implementation uses neural net processing mechanisms to build a hierarchy that relates similar objects to superordinate classes, while simultaneously discovering the salient differences between objects within a class. Learning and recognition experiments both with and without the class similarity and difference learning show the effectiveness of the approach on this visual data. To test the approach, the hierarchical approach was compared to a non-hierarchical approach, and was found to improve the average percentage of correctly classified views from 77% to 84%.

  18. Modeling Subsurface Reactive Flows Using Leadership-Class Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Richard T; Hammond, Glenn; Lichtner, Peter

    2009-01-01

    We describe our experiences running PFLOTRAN - a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media - on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.

  19. Uncertainty in Pedotransfer Functions from Soil Survey Data

    NASA Astrophysics Data System (ADS)

    Pachepsky, Y. A.; Rawls, W. J.

    2002-05-01

    Pedotransfer functions (PTFs) are empirical relationships between hard-to-get soil parameters, i.e. hydraulic properties, and more easily obtainable basic soil properties, such as texture. Use of PTFs in large-scale projects and pilot studies relies on soil survey data that provide basic soil properties as categorical information. Unlike numerical variables, categorical data cannot be directly used in statistical regressions or neural networks to develop PTFs. The objectives of this work were (a) to find and test techniques to develop PTFs for soil water retention and saturated hydraulic conductivity with soil categorical data as inputs, and (b) to evaluate sources of uncertainty in the results of such PTFs and to explore opportunities for mitigating the uncertainty. We used a subset of about 12,000 samples from the US National Soil Characterization database to estimate water retention, and a data set of circa 1000 hydraulic conductivity measurements made in the US. Regression trees and polynomial neural networks based on dummy coding were the techniques tried for PTF development. Jackknife validation was used to prevent over-parameterization. Both techniques were equally efficient in developing PTFs, but regression trees gave much more transparent results. Textural class was the leading predictor, with RMSE values of about 6.5 and 4.1 vol.% for water retention at -33 and -1500 kPa, respectively. The RMSE values decreased 10% when the laboratory textural analysis was used to establish the textural class. Textural class in the field was determined correctly in only 41% of all cases. To mitigate this source of error, we added slopes, position-on-the-slope classes, and land surface shape classes to the list of PTF inputs. Regression trees generated topotextural groups that encompassed several textural classes. Using topographic variables and soil horizon appeared to be a way to make up for errors made in the field determination of texture. Adding field descriptors of soil structure to the field-determined textural class gave similar results. No large improvement was achieved, probably because textural class, topographic descriptors and structure descriptors were correlated predictors in many cases. Both the median values and the uncertainty of the saturated hydraulic conductivity showed a power-law decrease as clay content increased. Defining two classes of bulk density helped to estimate hydraulic conductivity within textural classes. We conclude that categorical field soil survey data can be used in PTF-based estimation of soil water retention and saturated hydraulic conductivity with quantified uncertainty.
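
    The dummy-coding route described above is straightforward to sketch (Python/scikit-learn; the soil data are synthetic stand-ins, and the water-retention values are invented for illustration).

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeRegressor

textures = np.array(["clay", "loam", "sand"] * 60)
slopes = np.array(["low", "mid", "high"] * 60)
X_cat = np.column_stack([textures, slopes])

# dummy-code the categorical survey inputs
# (use sparse=False instead on scikit-learn < 1.2)
enc = OneHotEncoder(sparse_output=False)
X = enc.fit_transform(X_cat)

rng = np.random.default_rng(0)
base = {"clay": 36.0, "loam": 28.0, "sand": 12.0}   # invented vol.% at -33 kPa
y = np.array([base[t] for t in textures]) + rng.normal(0, 2.0, len(textures))

tree = DecisionTreeRegressor(max_depth=4).fit(X, y)
print(tree.predict(enc.transform([["clay", "mid"]])))  # about 36 vol.%
```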

  20. The New Students: A Dialectic between Language and Learning

    ERIC Educational Resources Information Center

    Perl, Sondra

    1975-01-01

    Examines Basil Bernstein's principles on language as set forth in "Class, Codes and Control: Theoretical Studies towards a Sociology of Language" and applies them to the teaching of English composition. (RB)
