Science.gov

Sample records for common component architecture

  1. Parallel PDE-Based Simulations Using the Common Component Architecture

    SciTech Connect

    McInnes, Lois C.; Allan, Benjamin A.; Armstrong, Robert; Benson, Steven J.; Bernholdt, David E.; Dahlgren, Tamara L.; Diachin, Lori; Krishnan, Manoj Kumar; Kohl, James A.; Larson, J. Walter; Lefantzi, Sophia; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G.; Ray, Jaideep; Zhou, Shujia

    2006-03-05

Summary. The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component-based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general-purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations. This chapter discusses recent work on leveraging these CCA efforts in parallel PDE-based simulations involving accelerator design, climate modeling, combustion, and accidental fires and explosions. We explain how component technology helps to address the different challenges posed by each of these applications, and we highlight how component interfaces built on existing parallel toolkits facilitate the reuse of software for parallel mesh manipulation, discretization, linear algebra, integration, optimization, and parallel data redistribution. We also present performance data to demonstrate the suitability of this approach, and we discuss strategies for applying component technologies to both new and existing applications.

  2. The common component architecture for particle accelerator simulations.

    SciTech Connect

    Dechow, D. R.; Norris, B.; Amundson, J.; Mathematics and Computer Science; Tech-X Corp; FNAL

    2007-01-01

Synergia2 is a beam dynamics modeling and simulation application for high-energy accelerators such as the Tevatron at Fermilab and the International Linear Collider, which is now under planning and development. Synergia2 is a hybrid, multilanguage software package composed of two separate accelerator physics packages (Synergia and MaryLie/Impact) and one high-performance computer science package (PETSc). We describe our approach to producing a set of beam dynamics-specific software components based on the Common Component Architecture specification. Among other topics, we describe particular experiences with the following tasks: using Python steering to guide the creation of interfaces and to prototype components; working with legacy Fortran codes; and an example component-based beam dynamics simulation.

  3. How the Common Component Architecture Advances Computational Science

    SciTech Connect

    Kumfert, G; Bernholdt, D; Epperly, T; Kohl, J; McInnes, L C; Parker, S; Ray, J

    2006-06-19

    Computational chemists are using Common Component Architecture (CCA) technology to increase the parallel scalability of their application ten-fold. Combustion researchers are publishing science faster because the CCA manages software complexity for them. Both the solver and meshing communities in SciDAC are converging on community interface standards as a direct response to the novel level of interoperability that CCA presents. Yet, there is much more to do before component technology becomes mainstream computational science. This paper highlights the impact that the CCA has made on scientific applications, conveys some lessons learned from five years of the SciDAC program, and previews where applications could go with the additional capabilities that the CCA has planned for SciDAC 2.

  4. Toward a common component architecture for high-performance scientific computing

    SciTech Connect

    Armstrong, R; Gannon, D; Geist, A; Katarzyna, K; Kohn, S; McInnes, L; Parker, S; Smolinski, B

    1999-06-09

    This paper describes work in progress to develop a standard for interoperability among high-performance scientific components. This research stems from growing recognition that the scientific community must better manage the complexity of multidisciplinary simulations and better address scalable performance issues on parallel and distributed architectures. Driving forces are the need for fast connections among components that perform numerically intensive work and parallel collective interactions among components that use multiple processes or threads. This paper focuses on the areas we believe are most crucial for such interactions, namely an interface definition language that supports scientific abstractions for specifying component interfaces and a ports connection model for specifying component interactions.
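The provides/uses ports connection model described above can be sketched in a few lines. This is an illustrative toy, not the actual CCA or Babel API; all class and method names here are invented for the example.

```python
# A minimal sketch of a ports connection model: a component provides
# a port (an interface it implements), a framework registers it, and
# a client component looks the port up and calls it directly --
# an in-process connection, as the CCA model emphasizes.

class Port:
    """Abstract interface that a component can provide or use."""

class IntegratorPort(Port):
    def integrate(self, f, lo, hi, n=1000):
        raise NotImplementedError

class MidpointIntegrator(IntegratorPort):
    """A component providing IntegratorPort via the midpoint rule."""
    def integrate(self, f, lo, hi, n=1000):
        h = (hi - lo) / n
        return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

class Framework:
    """Minimal framework: wires a 'uses' port to a 'provides' port."""
    def __init__(self):
        self._provided = {}
    def add_provides(self, name, port):
        self._provided[name] = port
    def get_port(self, name):
        return self._provided[name]

fw = Framework()
fw.add_provides("integrator", MidpointIntegrator())

# Client side: ask the framework for the port by name, then call it.
quad = fw.get_port("integrator")
area = quad.integrate(lambda x: x * x, 0.0, 1.0)
```

Because the lookup returns the provider object itself, the call is an ordinary in-process method invocation with no serialization, which is the "fast connections among components" property the abstract stresses.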

  5. Cognitive Architecture of Common and Scientific Concepts

    NASA Astrophysics Data System (ADS)

    Tarábek, Paul

    2010-07-01

The cognitive architecture of a concept is a specific structure consisting of the concept core, the concept periphery, the semantic frame as the meaning and sense of the concept, and the relations among all components of this structure. The model of the cognitive architecture of scientific and common concepts is a conceptual meta-model built upon Vygotsky's concept theory, Fillmore's semantic frame, the semantic triangle, widespread ideas on the structuring of conceptual systems, and Hestenes' Modeling Theory. A method of semantic mapping of concepts, which follows from the model, is presented.

  6. Parallel, Multigrid Finite Element Simulator for Fractured/Faulted and Other Complex Reservoirs based on Common Component Architecture (CCA)

    SciTech Connect

    Milind Deo; Chung-Kan Huang; Huabing Wang

    2008-08-31

Black-oil, compositional and thermal simulators have been developed to address different physical processes in reservoir simulation. A number of different types of discretization methods have also been proposed to address issues related to representing the complex reservoir geometry. These methods are more significant for fractured reservoirs, where the geometry can be particularly challenging. In this project, a general modular framework for reservoir simulation was developed, wherein the physical models were efficiently decoupled from the discretization methods. This made it possible to couple any discretization method with different physical models. Oil characterization methods are becoming increasingly sophisticated, and it is possible to construct geologically constrained models of faulted/fractured reservoirs. Discrete Fracture Network (DFN) simulation provides the option of performing multiphase calculations on spatially explicit, geologically feasible fracture sets. Multiphase DFN simulations of, and sensitivity studies on, a wide variety of fracture networks created using fracture creation/simulation programs were undertaken in the first part of this project. This involved creating interfaces to seamlessly convert the fracture characterization information into simulator input, grid the complex geometry, perform the simulations, and analyze and visualize results. Benchmarking and comparison with conventional simulators were also a component of this work. After demonstrating that multiphase simulations can be carried out on complex fracture networks, quantitative effects of the heterogeneity of fracture properties were evaluated. Reservoirs are populated with fractures of several different scales and properties. A multiscale fracture modeling study was undertaken, and the effects of heterogeneity and storage on water displacement dynamics in fractured basements were investigated. In gravity-dominated systems, more oil could be recovered at a given pore …

  7. A reference architecture for the component factory

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Caldiera, Gianluigi; Cantone, Giovanni

    1992-01-01

Software reuse can be achieved through an organization that focuses on utilization of life cycle products from previous developments. The component factory is both an example of the more general concepts of experience and domain factory and an organizational unit worth being considered independently. The critical features of such an organization are flexibility and continuous improvement. To achieve these features, we can represent the architecture of the factory at different levels of abstraction and define a reference architecture from which specific architectures can be derived by instantiation. A reference architecture is an implementation- and organization-independent representation of the component factory and its environment. The paper outlines this reference architecture, discusses the instantiation process, and presents some examples of specific architectures by comparing them in the framework of the reference model.

  8. Perceptual-components architecture for digital video

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1990-01-01

    A perceptual-components architecture for digital video partitions the image stream into signal components in a manner analogous to that used in the human visual system. These components consist of achromatic and opponent color channels, divided into static and motion channels, further divided into bands of particular spatial frequency and orientation. Bits are allocated to an individual band in accord with visual sensitivity to that band and in accord with the properties of visual masking. This architecture is argued to have desirable features such as efficiency, error tolerance, scalability, device independence, and extensibility.
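The band-wise bit allocation the abstract describes can be illustrated with a toy calculation: each band receives a share of a fixed bit budget in proportion to visual sensitivity to that band. The sensitivity values below are invented for the example, not measurements from the paper.

```python
# Toy sketch: allocate a fixed bit budget across perceptual bands in
# proportion to their (assumed) visual sensitivities.

def allocate_bits(sensitivities, total_bits):
    """Give each band a share of total_bits proportional to its
    sensitivity; leftover bits from rounding go to the bands with
    the largest fractional remainders."""
    total = sum(sensitivities)
    raw = [s / total * total_bits for s in sensitivities]
    bits = [int(r) for r in raw]
    leftover = total_bits - sum(bits)
    order = sorted(range(len(raw)),
                   key=lambda i: raw[i] - bits[i], reverse=True)
    for i in order[:leftover]:
        bits[i] += 1
    return bits

# Four spatial-frequency bands, most visually sensitive first
# (hypothetical sensitivity weights).
bands = [0.9, 0.5, 0.3, 0.1]
bits = allocate_bits(bands, 64)
```

A real coder would also modulate the allocation by masking properties, as the abstract notes; this sketch shows only the proportional-to-sensitivity step.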

  9. Component architecture of the Tecolote framework

    SciTech Connect

Zander, M.; Hall, J.; Painter, J.; O'Rourke, S.

    1998-11-01

Los Alamos National Laboratory's Tecolote Framework is used in conjunction with other libraries by several physical simulations. This paper briefly describes the design and use of Tecolote's component architecture. A component is a C++ class that meets several requirements imposed by the framework to increase its reusability, configurability, and ease of replacement. The authors discuss both the motives for imposing these requirements upon components and the means by which a generic C++ class may be integrated into Tecolote by satisfying these requirements. They also describe the means by which these components may be combined into a physics application.

  10. A Component Architecture for High-Performance Computing

    SciTech Connect

    Bernholdt, D E; Elwasif, W R; Kohl, J A; Epperly, T G W

    2003-01-21

The Common Component Architecture (CCA) provides a means for developers to manage the complexity of large-scale scientific software systems and to move toward a "plug and play" environment for high-performance computing. The CCA model allows for a direct connection between components within the same process to maintain performance on inter-component calls. It is neutral with respect to parallelism, allowing components to use whatever means they desire to communicate within their parallel "cohort." We will discuss in detail the importance of performance in the design of the CCA and will analyze the performance costs associated with features of the CCA.

  11. Component architecture in drug discovery informatics.

    PubMed

    Smith, Peter M

    2002-05-01

    This paper reviews the characteristics of a new model of computing that has been spurred on by the Internet, known as Netcentric computing. Developments in this model led to distributed component architectures, which, although not new ideas, are now realizable with modern tools such as Enterprise Java. The application of this approach to scientific computing, particularly in pharmaceutical discovery research, is discussed and highlighted by a particular case involving the management of biological assay data. PMID:12058611

  12. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  13. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect

    Bernholdt, David E; Allan, Benjamin A; Armstrong, Robert C; Bertrand, Felipe; Chiu, Kenneth; Dahlgren, Tamara L; Damevski, Kostadin; Elwasif, Wael R; Epperly, Thomas G; Govindaraju, Madhusudhan; Katz, Daniel S; Kohl, James A; Krishnan, Manoj Kumar; Kumfert, Gary K; Larson, J Walter; Lefantzi, Sophia; Lewis, Michael J; Malony, Allen D; McInnes, Lois C; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G; Ray, Jaideep; Shende, Sameer; Windus, Theresa L; Zhou, Shujia

    2006-07-03

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  14. Lifecycle Prognostics Architecture for Selected High-Cost Active Components

    SciTech Connect

    N. Lybeck; B. Pham; M. Tawfik; J. B. Coble; R. M. Meyer; P. Ramuhalli; L. J. Bond

    2011-08-01

There is an extensive body of knowledge, and there are some commercial products available, for calculating prognostics, remaining useful life, and damage index parameters. The application of these technologies within the nuclear power community is still in its infancy. Online monitoring and condition-based maintenance are seeing increasing acceptance and deployment, and these activities provide the technological bases for expanding to add predictive/prognostics capabilities. In looking to deploy prognostics, three key aspects of systems are presented and discussed: (1) component/system/structure selection, (2) prognostic algorithms, and (3) prognostics architectures. Criteria are presented for component selection: feasibility, failure probability, consequences of failure, and benefits of the prognostics and health management (PHM) system. The basis and methods commonly used for prognostics algorithms are reviewed and summarized. Criteria for evaluating PHM architectures are presented: open, modular architecture; platform independence; graphical user interface for system development and/or results viewing; web-enabled tools; scalability; and standards compatibility. Thirteen software products were identified and discussed in the context of being potentially useful for deployment in a PHM program applied to systems in a nuclear power plant (NPP). These products were evaluated by using information available from company websites, product brochures, fact sheets, scholarly publications, and direct communication with vendors. The thirteen products were classified into four groups of software: (1) research tools, (2) PHM system development tools, (3) deployable architectures, and (4) peripheral tools. Eight software tools fell into the deployable architectures category. Of those eight, only two employ all six modules of a full PHM system. Five systems did not offer prognostic estimates, and one system employed the full health monitoring suite but lacked operations and …

  15. SIFT - A Component-Based Integration Architecture for Enterprise Analytics

    SciTech Connect

    Thurman, David A.; Almquist, Justin P.; Gorton, Ian; Wynne, Adam S.; Chatterton, Jack

    2007-02-01

    Architectures and technologies for enterprise application integration are relatively mature, resulting in a range of standards-based and proprietary middleware technologies. In the domain of complex analytical applications, integration architectures are not so well understood. Analytical applications such as those used in scientific discovery, emergency response, financial and intelligence analysis exert unique demands on their underlying architecture. These demands make existing integration middleware inappropriate for use in enterprise analytics environments. In this paper we describe SIFT (Scalable Information Fusion and Triage), a platform designed for integrating the various components that comprise enterprise analytics applications. SIFT exploits a common pattern for composing analytical components, and extends an existing messaging platform with dynamic configuration mechanisms and scaling capabilities. We demonstrate the use of SIFT to create a decision support platform for quality control based on large volumes of incoming delivery data. The strengths of the SIFT solution are discussed, and we conclude by describing where further work is required to create a complete solution applicable to a wide range of analytical application domains.

  16. Study on the standard architecture for geoinformation common services

    NASA Astrophysics Data System (ADS)

    Zha, Z.; Zhang, L.; Wang, C.; Jiang, J.; Huang, W.

    2014-04-01

The construction of platforms for geoinformation common services has been completed or is ongoing in most provinces and cities in China in recent years, and these platforms play an important role in economic and social activities. Geoinformation and geoinformation-based services are the key issues in the platform. Standards on geoinformation common services serve as bridges among the users, systems, and designers of the platform. The standard architecture for geoinformation common services is the guideline for designing and using the standard system, in which the standards are integrated with each other to promote the development, sharing, and servicing of geoinformation resources. Establishing the standard architecture for geoinformation common services is one of the tasks of "Study on important standards for geoinformation common services and management of public facilities in city". The scope of the standard architecture is defined, covering data and information models, interoperability interfaces and services, and information management. Research has been done on the status of international standards for geoinformation common services in organizations such as ISO/TC 211 and OGC, and in countries and unions such as the USA, the EU, and Japan. Principles such as availability, suitability, and extensibility are set up to evaluate the standards. The development requirements and practical situation are then analyzed, and a framework of the standard architecture for geoinformation common services is proposed. Finally, a summary and prospects of the geoinformation standards are given.

  17. Component architecture in HIS: a drug order entry case study.

    PubMed

    Schlesinger, J M; Blumenfeld, B; Broverman, C

    1997-01-01

    Historically, many healthcare information systems (HIS) have been designed around monolithic architectures that rely upon a single organization to provide most, if not all, of the system's business logic. Recent advances in distributed systems technology and healthcare standards make a component-based architecture feasible in building today's HIS. The First DataBank Drug Toolkit is used as a case study for the role of components in the design of a HIS. Several technical challenges associated with building truly plug and play components are discussed. PMID:10175372

  18. GEARS: An Enterprise Architecture Based On Common Ground Services

    NASA Astrophysics Data System (ADS)

    Petersen, S.

    2014-12-01

    Earth observation satellites collect a broad variety of data used in applications that range from weather forecasting to climate monitoring. Within NOAA the National Environmental Satellite Data and Information Service (NESDIS) supports these applications by operating satellites in both geosynchronous and polar orbits. Traditionally NESDIS has acquired and operated its satellites as stand-alone systems with their own command and control, mission management, processing, and distribution systems. As the volume, velocity, veracity, and variety of sensor data and products produced by these systems continues to increase, NESDIS is migrating to a new concept of operation in which it will operate and sustain the ground infrastructure as an integrated Enterprise. Based on a series of common ground services, the Ground Enterprise Architecture System (GEARS) approach promises greater agility, flexibility, and efficiency at reduced cost. This talk describes the new architecture and associated development activities, and presents the results of initial efforts to improve product processing and distribution.

  19. Component Architectures and Web-Based Learning Environments

    ERIC Educational Resources Information Center

    Ferdig, Richard E.; Mishra, Punya; Zhao, Yong

    2004-01-01

    The Web has caught the attention of many educators as an efficient communication medium and content delivery system. But we feel there is another aspect of the Web that has not been given the attention it deserves. We call this aspect of the Web its "component architecture." Briefly it means that on the Web one can develop very complex…

  20. Digital visual communications using a Perceptual Components Architecture

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1991-01-01

    The next era of space exploration will generate extraordinary volumes of image data, and management of this image data is beyond current technical capabilities. We propose a strategy for coding visual information that exploits the known properties of early human vision. This Perceptual Components Architecture codes images and image sequences in terms of discrete samples from limited bands of color, spatial frequency, orientation, and temporal frequency. This spatiotemporal pyramid offers efficiency (low bit rate), variable resolution, device independence, error-tolerance, and extensibility.

  1. Development of x-ray laser architectural components

    SciTech Connect

    Wan, A.S.; Da Silva, L.B.; Moreno, J.C.

    1994-06-01

    This paper describes the recent experimental and computational development of short-pulse, enhanced-coherence, and high-brilliance x-ray lasers (XRLs). The authors will describe the development of an XRL cavity by injecting laser photons back into an amplifying XRL plasma. Using a combination of LASNEX/GLF/SPECTRE-BEAM3 codes, they obtained good agreement with experimental results. They will describe the adaptive spatial filtering technique used to design small-aperture shaped XRLs with near diffraction-limited output. Finally they will discuss issues concerning the development of high-brilliance XRL architecture, with emphasis on scaling the XRL aperture. Combining these advances in XRL architectural components allows them to develop a short-pulse, high-brilliance, coherent XRL suitable for applications in areas such as biological holography, plasma interferometry, and nonlinear optics.

  2. Component-Level Electronic-Assembly Repair (CLEAR) System Architecture

    NASA Technical Reports Server (NTRS)

    Oeftering, Richard C.; Bradish, Martin A.; Juergens, Jeffrey R.; Lewis, Michael J.; Vrnak, Daniel R.

    2011-01-01

This document captures the system architecture for a Component-Level Electronic-Assembly Repair (CLEAR) capability needed for electronics maintenance and repair of the Constellation Program (CxP). CLEAR is intended to improve flight system supportability and reduce the mass of spares required to maintain the electronics of human-rated spacecraft on long-duration missions. By necessity it allows the crew to make repairs that would otherwise be performed by Earth-based repair depots. Because of the practical knowledge and skill limitations of small spaceflight crews, they must be augmented by Earth-based support crews and automated repair equipment. This system architecture covers the complete system from ground user to flight hardware and flight crew, and defines an Earth segment and a Space segment. The Earth segment involves database management, operational planning, and remote equipment programming and validation processes. The Space segment involves the automated diagnostic, test, and repair equipment required for a complete repair process. This document defines three major subsystems: tele-operations that link the flight hardware to ground support, highly reconfigurable diagnostics and test instruments, and a CLEAR Repair Apparatus that automates the physical repair process.

  3. A Plug and Play GNC Architecture Using FPGA Components

    NASA Technical Reports Server (NTRS)

    KrishnaKumar, K.; Kaneshige, J.; Waterman, R.; Pires, C.; Ippoloito, C.

    2005-01-01

    The goal of Plug and Play, or PnP, is to allow hardware and software components to work together automatically, without requiring manual setup procedures. As a result, new or replacement hardware can be plugged into a system and automatically configured with the appropriate resource assignments. However, in many cases it may not be practical or even feasible to physically replace hardware components. One method for handling these types of situations is through the incorporation of reconfigurable hardware such as Field Programmable Gate Arrays, or FPGAs. This paper describes a phased approach to developing a Guidance, Navigation, and Control (GNC) architecture that expands on the traditional concepts of PnP, in order to accommodate hardware reconfiguration without requiring detailed knowledge of the hardware. This is achieved by establishing a functional based interface that defines how the hardware will operate, and allow the hardware to reconfigure itself. The resulting system combines the flexibility of manipulating software components with the speed and efficiency of hardware.
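The "functional based interface" idea above, where hardware is addressed by what it does rather than by what it is, can be sketched with a small registry. All class and function names here are invented for illustration; this is not the paper's actual architecture.

```python
# Toy sketch of function-based plug and play: devices self-describe
# the functions they implement, and the system wires them in by
# function name rather than by device identity.

class DeviceRegistry:
    def __init__(self):
        self._by_function = {}
    def plug_in(self, device):
        # Register every function the device advertises; a newly
        # plugged-in device takes over the functions it provides.
        for fn in device.functions:
            self._by_function[fn] = device
    def call(self, fn, *args):
        return getattr(self._by_function[fn], fn)(*args)

class ImuBoard:
    functions = ("read_rates",)
    def read_rates(self):
        return (0.01, -0.02, 0.00)   # rad/s, placeholder data

class GpsBoard:
    functions = ("read_position",)
    def read_position(self):
        return (37.41, -122.06)      # deg, placeholder data

bus = DeviceRegistry()
bus.plug_in(ImuBoard())
bus.plug_in(GpsBoard())

# GNC software asks for a capability, not a specific piece of hardware:
rates = bus.call("read_rates")
```

Swapping in a replacement board that advertises `read_rates` would reroute the call automatically, which is the setup-free reconfiguration the abstract describes, here in software rather than in an FPGA.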

  4. Common and Cluster-Specific Simultaneous Component Analysis

    PubMed Central

    De Roover, Kim; Timmerman, Marieke E.; Mesquita, Batja; Ceulemans, Eva

    2013-01-01

    In many fields of research, so-called ‘multiblock’ data are collected, i.e., data containing multivariate observations that are nested within higher-level research units (e.g., inhabitants of different countries). Each higher-level unit (e.g., country) then corresponds to a ‘data block’. For such data, it may be interesting to investigate the extent to which the correlation structure of the variables differs between the data blocks. More specifically, when capturing the correlation structure by means of component analysis, one may want to explore which components are common across all data blocks and which components differ across the data blocks. This paper presents a common and cluster-specific simultaneous component method which clusters the data blocks according to their correlation structure and allows for common and cluster-specific components. Model estimation and model selection procedures are described and simulation results validate their performance. Also, the method is applied to data from cross-cultural values research to illustrate its empirical value. PMID:23667463
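One simple way to see the "common components" idea in the abstract above is to stack the centered data blocks and take leading right singular vectors of the stacked matrix. This is a minimal sketch of the simultaneous-component intuition, not the authors' actual clusterwise estimation algorithm, and the data below are synthetic.

```python
# Minimal sketch: extract components shared across several data
# blocks by stacking the column-centered blocks and taking the
# leading right singular vectors of the stacked matrix.
import numpy as np

rng = np.random.default_rng(0)

def common_components(blocks, n_comp):
    stacked = np.vstack([b - b.mean(axis=0) for b in blocks])
    _, _, vt = np.linalg.svd(stacked, full_matrices=False)
    return vt[:n_comp]          # shape: (n_comp, n_variables)

# Two synthetic "countries" (data blocks) sharing one latent
# direction across 4 variables, plus small noise.
direction = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)
blocks = [rng.normal(size=(50, 1)) @ direction[None, :]
          + 0.05 * rng.normal(size=(50, 4)) for _ in range(2)]

loadings = common_components(blocks, 1)
```

The recovered loading vector should align with the shared direction (up to sign); the paper's method goes further by also clustering blocks and fitting cluster-specific components.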

  5. Raft lipids as common components of human extracellular amyloid fibrils

    PubMed Central

    Gellermann, Gerald P.; Appel, Thomas R.; Tannert, Astrid; Radestock, Anja; Hortschansky, Peter; Schroeckh, Volker; Leisner, Christian; Lütkepohl, Tim; Shtrasburg, Shmuel; Röcken, Christoph; Pras, Mordechai; Linke, Reinhold P.; Diekmann, Stephan; Fändrich, Marcus

    2005-01-01

    Amyloid fibrils are fibrillar polypeptide aggregates from several degenerative human conditions, including Alzheimer's and Creutzfeldt-Jakob diseases. Analysis of amyloid fibrils derived from various human diseases (AA, ATTR, Aβ2M, ALλ, and ALκ amyloidosis) shows that these are associated with a common lipid component that has a conserved chemical composition and that is specifically rich in cholesterol and sphingolipids, the major components of cellular lipid rafts. This pattern is not notably affected by the purification procedure, and no tight lipid interactions can be detected when preformed fibrils are mixed with lipids. By contrast, the early and prefibrillar aggregates formed in an AA amyloid-producing cell system interact with the raft marker ganglioside-1, and amyloid formation is impaired by addition of cholesterol-reducing agents. These data suggest the existence of common cellular mechanisms in the generation of different types of clinical amyloid deposits. PMID:15851687

  6. Potential of hypocotyl diameter in family selection aiming at plant architecture improvement of common bean.

    PubMed

    Oliveira, A M C; Batista, R O; Carneiro, P C S; Carneiro, J E S; Cruz, C D

    2015-01-01

Cultivars of common bean with more erect plant architecture and greater tolerance to degree of lodging are required by producers. Thus, to evaluate the potential of hypocotyl diameter (HD) in family selection for plant architecture improvement of common bean, the HDs of 32 F2 plants were measured in 3 distinct populations, and the characteristics related to plant architecture were analyzed in their progenies. Ninety-six F2:3 families and 4 controls were evaluated in a randomized block design, with 3 replications, analyzing plant architecture grade, HD, and grain yield during the winter 2010 and drought 2011 seasons. We found that the correlations between the HD of F2 plants and traits related to plant architecture of F2:3 progenies were of low magnitude compared to the estimates for correlations considering the parents, indicating a high environmental influence on HD in bean plants. There was a predominance of additive genetic effects on the determination of hypocotyl diameter, which showed higher precision and accuracy compared to plant architecture grade. Thus, this characteristic can be used to select progenies in plant architecture improvement of common beans; however, selection must be based on the means of at least 39 plants in the plot, according to the results of repeatability analysis. PMID:26436392

  7. The design and fabrication of common optical components lithography lens

    NASA Astrophysics Data System (ADS)

    Huang, Jiun-Woei

    2015-07-01

    The design and fabrication of a common-optical-component lithography lens has been carried out for a 1:1 stepper. The lens fulfills the specification of the 3-D lithography system: 2 micron resolution over a 1 inch x 2.8 inch field. The lens was designed with a dual path in a triplet to reduce the number of components, and a single aspherical surface was applied to reduce the aberration to the diffraction limit. Suitable lens element shapes have been suggested, and fabrication of the lens is in process. Finally, the optical-axis tolerances of the optomechanical mountings for the lens system in assembly have been analyzed, providing values useful for assembly and fabrication.

  8. Common Readout Unit (CRU) - A new readout architecture for the ALICE experiment

    NASA Astrophysics Data System (ADS)

    Mitra, J.; Khan, S. A.; Mukherjee, S.; Paul, R.

    2016-03-01

    The ALICE experiment at the CERN Large Hadron Collider (LHC) is presently undergoing a major upgrade in order to fully exploit the scientific potential of the upcoming high-luminosity run, scheduled to start in the year 2021. The high interaction rate and the large event size will result in an experimental data flow of about 1 TB/s from the detectors, which needs to be processed before being sent to the online computing system and data storage. This processing is done in a dedicated Common Readout Unit (CRU), proposed for data aggregation, trigger and timing distribution, and control moderation. It acts as a common interface between the sub-detector electronic systems, the computing system, and the trigger processors. The interface links include GBT, TTC-PON and PCIe. GBT (Gigabit Transceiver) is used for detector data payload transmission and as a fixed-latency path for trigger distribution between the CRU and detector readout electronics. TTC-PON (Timing, Trigger and Control via Passive Optical Network) is employed for time-multiplexed trigger distribution between the CRU and the Central Trigger Processor (CTP). PCIe (Peripheral Component Interconnect Express) is the high-speed serial computer expansion bus standard used for bulk data transport between CRU boards and processors. In this article, we give an overview of the CRU architecture in ALICE, discuss the different interfaces, and present the firmware design and implementation of the CRU on the LHCb PCIe40 board.

  9. Joint Polar Satellite System (JPSS) Common Ground System (CGS) Architecture Overview and Technical Performance Measures

    NASA Astrophysics Data System (ADS)

    Grant, K. D.; Johnson, B. R.; Miller, S. W.; Jamilkowski, M. L.

    2014-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). The Joint Polar Satellite System will replace the afternoon orbit component and ground processing system of the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA. The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological and geophysical observations of the Earth. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS). Developed and maintained by Raytheon Intelligence, Information and Services (IIS), the CGS is a multi-mission enterprise system serving NOAA, NASA and their national and international partners. The CGS provides a wide range of support to a number of missions. Originally designed to support S-NPP and JPSS, the CGS has demonstrated its scalability and flexibility to incorporate all of these other important missions efficiently and with minimal cost, schedule and risk, while strengthening global partnerships in weather and environmental monitoring. The CGS architecture will be upgraded to Block 2.0 in 2015 to satisfy several key objectives, including: "operationalizing" S-NPP, which had originally been intended as a risk reduction mission; leveraging lessons learned to date in multi-mission support; taking advantage of newer, more reliable and efficient technologies; and satisfying new requirements and constraints due to the continually evolving budgetary environment. To ensure the CGS meets these needs, we have developed 48 Technical Performance Measures (TPMs) across 9 categories: Data Availability, Data Latency, Operational Availability, Margin, Scalability, Situational Awareness, Transition (between environments and sites), WAN Efficiency, and Data Recovery Processing.

  10. A Successful Component Architecture for Interoperable and Evolvable Ground Data Systems

    NASA Technical Reports Server (NTRS)

    Smith, Danford S.; Bristow, John O.; Wilmot, Jonathan

    2006-01-01

    The National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) has adopted an open architecture approach for satellite control centers and is now realizing benefits beyond those originally envisioned. The Goddard Mission Services Evolution Center (GMSEC) architecture utilizes standardized interfaces and a middleware software bus to allow functional components to be easily integrated. This paper presents the GMSEC architectural goals and concepts, the capabilities enabled and the benefits realized by adopting this framework approach. NASA experiences with applying the GMSEC architecture on multiple missions are discussed. The paper concludes with a summary of lessons learned, future directions for GMSEC and the possible applications beyond NASA GSFC.
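
    The middleware-bus idea above can be sketched in miniature: components never call each other directly but publish and subscribe to subjects on a shared bus, so new components can be integrated without changing existing ones. The following is an illustrative sketch only; the class, subject, and handler names are hypothetical and the real GMSEC API differs.

```python
# Minimal publish/subscribe "software bus" sketch (hypothetical API, not GMSEC's):
# components integrate by exchanging standardized messages on named subjects.
from collections import defaultdict
from typing import Callable, Dict, List

class MessageBus:
    """Routes messages by subject so components stay decoupled."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, subject: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[subject].append(handler)

    def publish(self, subject: str, message: dict) -> None:
        for handler in self._subscribers[subject]:
            handler(message)

bus = MessageBus()
alarms = []
# A hypothetical limit-checking component subscribes to telemetry:
bus.subscribe("TLM.TEMP", lambda msg: alarms.append(msg) if msg["value"] > 100 else None)
bus.publish("TLM.TEMP", {"value": 120})   # out of limits: alarm component reacts
bus.publish("TLM.TEMP", {"value": 20})    # nominal: ignored
```

Because the telemetry publisher never names its consumers, a display component or archiver could be added later by subscribing to the same subject, which is the kind of mix-and-match integration the GMSEC approach enables.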

  11. A Proposal of Client Application Architecture using Loosely Coupled Component Connection Method in Banking Branch System

    NASA Astrophysics Data System (ADS)

    Someya, Harushi; Mori, Yuichi; Abe, Masahiro; Machida, Isamu; Hasegawa, Atsushi; Yoshie, Osamu

    Due to the deregulation of the financial industry, bank branches need to shift from operation-oriented bases to sales-oriented bases. New banking branch systems are being developed to support this shift. Their main characteristic is that form operations traditionally performed at each branch are brought into a centralized operation center to rationalize and streamline form processing. Branches handle a wide variety of forms. The forms can often be described by common items, but the items carry different business logic and each form relates its items differently. There is also a need for users to develop the client application themselves. Consequently, the challenge is to provide a development environment that is highly reusable, easily customizable, and user-developable. We propose a client application architecture with a loosely coupled component connection method that allows applications to be developed by describing only the screen configurations and their transitions in XML documents. Using this architecture, we developed the client applications of the centralized operation center for the latest banking branch system. Our experiments demonstrate good performance.
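
    The idea of driving a client purely from declared screens and transitions can be illustrated with a small sketch. The XML element and attribute names below are invented for illustration, not the paper's actual schema.

```python
# Sketch: screens and their transitions are declared in XML; the client is a
# generic interpreter that walks the transition table. Schema is hypothetical.
import xml.etree.ElementTree as ET

SCREEN_XML = """
<application start="login">
  <screen id="login">
    <transition on="ok" to="menu"/>
  </screen>
  <screen id="menu">
    <transition on="deposit" to="deposit_form"/>
    <transition on="back" to="login"/>
  </screen>
  <screen id="deposit_form">
    <transition on="submit" to="menu"/>
  </screen>
</application>
"""

def build_transitions(xml_text):
    """Parse the XML into (start screen, {(screen, event): next screen})."""
    root = ET.fromstring(xml_text)
    table = {}
    for screen in root.findall("screen"):
        for t in screen.findall("transition"):
            table[(screen.get("id"), t.get("on"))] = t.get("to")
    return root.get("start"), table

start, table = build_transitions(SCREEN_XML)
state = start
for event in ["ok", "deposit", "submit"]:   # simulate a user session
    state = table[(state, event)]
# the session ends back on the "menu" screen
```

Adding or rearranging screens then requires editing only the XML document, not the interpreter, which is what makes such an architecture customizable by end users.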

  12. Using an architectural approach to integrate heterogeneous, distributed software components

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Purtilo, James M.

    1995-01-01

    Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This paper describes improvements to the concepts of software packaging and some of our experiences in constructing complex software systems from a wide variety of components in different execution environments. Software packaging is a process that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Software packaging provides a context that relates such mechanisms to software integration processes and reduces the cost of configuring applications whose components are distributed or implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX make by providing a rule-based approach to software integration that is independent of execution environments.
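
    The "automatically determines how to integrate" step can be pictured as a search over available translators and adapters. The component types and adapter names below are hypothetical, and the rule base is reduced to a lookup table; this is a toy sketch of the idea, not the authors' tool.

```python
# Toy "software packaging": given component interface types and a catalog of
# translators/adapters, find a chain that connects a component to a target
# interface. Types and adapter names are invented for illustration.
from collections import deque

ADAPTERS = {  # (from_type, to_type) -> adapter/translator name
    ("fortran_module", "c_object"): "f2c",
    ("c_object", "rpc_stub"): "rpcgen",
    ("python_module", "rpc_stub"): "py_rpc_wrap",
}

def integration_path(src, dst):
    """Breadth-first search for a shortest chain of adapters from src to dst."""
    queue = deque([(src, [])])
    seen = {src}
    while queue:
        t, path = queue.popleft()
        if t == dst:
            return path
        for (a, b), name in ADAPTERS.items():
            if a == t and b not in seen:
                seen.add(b)
                queue.append((b, path + [name]))
    return None  # no integration solution with the available mechanisms

# A Fortran component reaches a common RPC interface through two adapters:
fortran_path = integration_path("fortran_module", "rpc_stub")
```

Each distinct collection of components yields its own chain, which is the sense in which "each collection may require a unique integration solution" while the rule base itself stays environment-independent.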

  13. A common network architecture efficiently implements a variety of sparsity-based inference problems.

    PubMed

    Charles, Adam S; Garrigues, Pierre; Rozell, Christopher J

    2012-12-01

    The sparse coding hypothesis has generated significant interest in the computational and theoretical neuroscience communities, but there remain open questions about the exact quantitative form of the sparsity penalty and the implementation of such a coding rule in neurally plausible architectures. The main contribution of this work is to show that a wide variety of sparsity-based probabilistic inference problems proposed in the signal processing and statistics literatures can be implemented exactly in the common network architecture known as the locally competitive algorithm (LCA). Among the cost functions we examine are approximate lp norms (0 ≤ p ≤ 2), modified lp norms, block-l1 norms, and reweighted algorithms. Of particular interest is that we show significantly increased performance in reweighted l1 algorithms by inferring all parameters jointly in a dynamical system rather than using an iterative approach native to digital computational architectures. PMID:22970876
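
    For the plain l1 case, the LCA dynamics can be written down compactly. The following is a standard textbook-style sketch (not the authors' code): membrane potentials evolve under feedforward drive and lateral inhibition, and the sparse code is read out through a soft-threshold activation.

```python
# Minimal LCA sketch for min_a 0.5*||s - Phi a||^2 + lam*||a||_1.
import numpy as np

def soft_threshold(u, lam):
    """LCA activation for the l1 penalty."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(Phi, s, lam=0.05, dt=0.01, tau=0.1, steps=2000):
    """Integrate the LCA dynamics: du/dt = (b - u - G a)/tau, a = T_lam(u)."""
    u = np.zeros(Phi.shape[1])                 # membrane potentials
    G = Phi.T @ Phi - np.eye(Phi.shape[1])     # lateral inhibition weights
    b = Phi.T @ s                              # feedforward drive
    for _ in range(steps):
        a = soft_threshold(u, lam)             # instantaneous sparse code
        u += (dt / tau) * (b - u - G @ a)
    return soft_threshold(u, lam)

rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 50))
Phi /= np.linalg.norm(Phi, axis=0)             # unit-norm dictionary columns
a_true = np.zeros(50)
a_true[3], a_true[17] = 1.0, -0.5
s = Phi @ a_true
a_hat = lca(Phi, s)                            # code concentrates on entries 3 and 17
```

Swapping the threshold function is what lets the same network implement other penalties (e.g. reweighted l1), which is the paper's central point.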

  14. Micro guidance and control synthesis: New components, architectures, and capabilities

    NASA Technical Reports Server (NTRS)

    Mettler, Edward; Hadaegh, Fred Y.

    1993-01-01

    New GN&C (guidance, navigation and control) system capabilities are shown to arise from component innovations that involve the synergistic use of microminiature sensors and actuators, microelectronics, and fiber optics. Micro-GN&C system and component concepts are defined that include micro-actuated adaptive optics, micromachined inertial sensors, fiber-optic data nets and light-power transmission, and VLSI microcomputers. The thesis is advanced that these micro-miniaturization products are capable of having a revolutionary impact on space missions and systems, and that GN&C is the pathfinder micro-technology application that can bring that about.

  15. Joint Polar Satellite System (JPSS) Common Ground System (CGS) Overview and Architectural Tenets

    NASA Astrophysics Data System (ADS)

    Miller, S. W.; Grant, K. D.; Jamilkowski, M. L.

    2013-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). The Joint Polar Satellite System will replace the afternoon orbit component and ground processing system of the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA. The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological and geophysical observations of the Earth. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS). Developed and maintained by Raytheon Intelligence and Information Systems (IIS), the CGS is a multi-mission enterprise system serving NOAA, NASA and their national and international partners. The CGS provides a wide range of support to a number of missions: 1) Command and control and mission management for the Suomi National Polar Partnership (S-NPP) mission today, expanding this support to the JPSS-1 satellite and the Polar Free Flyer mission in 2017; 2) Data acquisition via a Polar Receptor Network (PRN) for S-NPP, the Japan Aerospace Exploration Agency's (JAXA) Global Change Observation Mission - Water (GCOM-W1), POES, and the Defense Meteorological Satellite Program (DMSP) and Coriolis/WindSat for the Department of Defense (DoD); 3) Data routing over a global fiber Wide Area Network (WAN) for S-NPP, JPSS-1, Polar Free Flyer, GCOM-W1, POES, DMSP, Coriolis/WindSat, the NASA Space Communications and Navigation (SCaN, which includes several Earth Observing System [EOS] missions), MetOp for the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), and the National Science Foundation (NSF); 4) Environmental data processing and distribution for S-NPP, GCOM-W1 and JPSS-1. The CGS architecture will receive a technology refresh in 2015 to satisfy several key objectives.

  16. Manned/Unmanned Common Architecture Program (MCAP) net centric flight tests

    NASA Astrophysics Data System (ADS)

    Johnson, Dale

    2009-04-01

    Properly architected avionics systems can reduce the costs of periodic functional improvements, maintenance, and obsolescence. With this in mind, the U.S. Army Aviation Applied Technology Directorate (AATD) initiated the Manned/Unmanned Common Architecture Program (MCAP) in 2003 to develop an affordable, high-performance embedded mission processing architecture for potential application to multiple aviation platforms. MCAP analyzed Army helicopter and unmanned air vehicle (UAV) missions, identified supporting subsystems, surveyed advanced hardware and software technologies, and defined computational infrastructure technical requirements. The project selected a set of modular open systems standards and market-driven commercial-off-the-shelf (COTS) electronics and software, and developed experimental mission processors, network architectures, and software infrastructures supporting the integration of new capabilities, interoperability, and life-cycle cost reductions. MCAP integrated the new mission processing architecture into an AH-64D Apache Longbow and participated in Future Combat Systems (FCS) network-centric operations field experiments in 2006 and 2007 at White Sands Missile Range (WSMR), New Mexico, and at the Nevada Test and Training Range (NTTR) in 2008. The MCAP Apache also participated in PM C4ISR On-the-Move (OTM) Capstone Experiments 2007 (E07) and 2008 (E08) at Ft. Dix, NJ, and conducted Mesa, Arizona local-area flight tests in December 2005, February 2006, and June 2008.

  17. A Systems Approach to Developing an Affordable Space Ground Transportation Architecture using a Commonality Approach

    NASA Technical Reports Server (NTRS)

    Garcia, Jerry L.; McCleskey, Carey M.; Bollo, Timothy R.; Rhodes, Russel E.; Robinson, John W.

    2012-01-01

    This paper presents a structured approach for achieving a compatible Ground System (GS) and Flight System (FS) architecture that is affordable, productive and sustainable. This paper is an extension of the paper titled "Approach to an Affordable and Productive Space Transportation System" by McCleskey et al. This paper integrates systems engineering concepts and operationally efficient propulsion system concepts into a structured framework for achieving GS and FS compatibility in the mid-term and long-term time frames. It also presents a functional and quantitative relationship for assessing system compatibility called the Architecture Complexity Index (ACI). This paper: (1) focuses on systems engineering fundamentals as it applies to improving GS and FS compatibility; (2) establishes mid-term and long-term spaceport goals; (3) presents an overview of transitioning a spaceport to an airport model; (4) establishes a framework for defining a ground system architecture; (5) presents the ACI concept; (6) demonstrates the approach by presenting a comparison of different GS architectures; and (7) presents a discussion on the benefits of using this approach with a focus on commonality.

  18. The architectural relationship of components controlling mast cell endocytosis

    PubMed Central

    Cleyrat, Cédric; Darehshouri, Anza; Anderson, Karen L.; Page, Christopher; Lidke, Diane S.; Volkmann, Niels; Hanein, Dorit; Wilson, Bridget S.

    2013-01-01

    Summary Eukaryotic cells use multiple routes for receptor internalization. Here, we examine the topographical relationships of clathrin-dependent and clathrin-independent endocytic structures on the plasma membranes of leukemia-derived mast cells. The high affinity IgE receptor (FcεRI) utilizes both pathways, whereas transferrin receptor serves as a marker for the classical clathrin-mediated endocytosis pathway. Both receptors were tracked by live-cell imaging in the presence or absence of inhibitors that established their differential dependence on specific endocytic adaptor proteins. The topology of antigen-bound FcεRI, clathrin, dynamin, Arf6 and Eps15-positive structures were analyzed by 2D and 3D immunoelectron microscopy techniques, revealing their remarkable spatial relationships and unique geometry. We conclude that the mast cell plasma membrane has multiple specialized domains for endocytosis. Their close proximity might reflect shared components, such as lipids and adaptor proteins, that facilitate inward membrane curvature. Intersections between these specialized domains might represent sorting stations that direct cargo to specific endocytic pathways. PMID:23986485

  19. Use of common beans as components in polymeric materials

    Technology Transfer Automated Retrieval System (TEKTRAN)

    One of the research trends in recent years is to use natural renewable materials as "green" raw materials for industrial applications. Common beans are well known, widely available and relatively cheap. They contain polysaccharides, proteins, triglyceride oils, minerals, vitamins, and phenolic antio...

  20. Common relationships among proximate composition components in fishes

    USGS Publications Warehouse

    Hartman, K.J.; Margraf, F.J.

    2008-01-01

    Relationships between the various body proximate components and dry matter content were examined for five species of fishes, representing anadromous, marine and freshwater species: chum salmon Oncorhynchus keta, Chinook salmon Oncorhynchus tshawytscha, brook trout Salvelinus fontinalis, bluefish Pomatomus saltatrix and striped bass Morone saxatilis. The dry matter content or per cent dry mass of these fishes can be used to reliably predict the per cent composition of the other components. Therefore, with validation it is possible to estimate fat, protein and ash content of fishes from per cent dry mass information, reducing the need for costly and time-consuming laboratory proximate analysis. This approach coupled with new methods of non-lethal estimation of per cent dry mass, such as from bioelectrical impedance analysis, can provide non-destructive measurements of proximate composition of fishes. © 2008 The Authors.
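
    The prediction approach amounts to calibrating a simple regression of each component on per cent dry mass, then applying it in the field. The numbers below are made-up illustrative values, not data from the study.

```python
# Sketch of the paper's approach: fit a linear relation between per cent dry
# mass and another proximate component, then predict that component from dry
# mass alone. All values here are hypothetical illustration data.
import numpy as np

dry_mass = np.array([22.0, 25.0, 28.0, 31.0, 34.0])   # % dry mass (hypothetical)
fat      = np.array([ 3.1,  5.0,  7.2,  9.1, 11.0])   # % fat (hypothetical)

slope, intercept = np.polyfit(dry_mass, fat, 1)        # least-squares calibration

def predict_fat(pct_dry_mass):
    """Estimate % fat from % dry mass using the calibrated relation."""
    return slope * pct_dry_mass + intercept

# Once calibrated and validated, only % dry mass (e.g. from bioelectrical
# impedance analysis) is needed for a non-lethal estimate:
est = predict_fat(30.0)
```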

  1. Sulfate Storage and Stability on Common Lean NOx Trap Components

    SciTech Connect

    Ottinger, Nathan A; Toops, Todd J; Pihl, Josh A; Roop, Justin T; Choi, Jae-Soon; Partridge Jr, William P

    2012-01-01

    Components found in a commercial lean NOx trap have been studied in order to determine their impact on sulfate storage and release. A micro-reactor and a diffuse reflectance infrared Fourier transform spectrometer (DRIFTS) were used to compare the components MgAl2O4, Pt/MgAl2O4, Pt/Al2O3, Pt/Ba/Al2O3, Pt/CeO2-ZrO2, and Pt/Ba/CeO2-ZrO2, as well as physical mixtures of Pt/Al2O3 + MgAl2O4 and Pt/Ba/CeO2-ZrO2 + MgAl2O4. Desulfation temperature profiles as well as DRIFTS NOx and SOx storage spectra are presented for all components. This systematic approach highlighted the ability of the underlying support to impact sulfate stability; in particular, when Ba was supported on ceria-zirconia rather than alumina, the desulfation temperature decreased by 60-120 °C. A conceptual model of sulfation progression on the ceria-zirconia support is proposed that explains the high uptake of sulfur and low-temperature release when it is employed. It was also determined that the close proximity of platinum is not necessary for much of the sulfation and desulfation chemistry that occurs, as physical mixtures with platinum dispersed on only one phase displayed similar behavior to samples with platinum dispersed on both phases.

  2. Educational Software Architecture and Systematic Impact: The Promise of Component Software.

    ERIC Educational Resources Information Center

    Roschelle, Jeremy; Kaput, James

    1996-01-01

    Examines the failure of current technology to meet education's flexible needs and points to a promising solution: component software architecture. Discusses the failure of stand-alone applications in their incompatibility, waste of funding, prevention of customization, and obstruction of integration. (AEF)

  3. Comparison of pelvic muscle architecture between humans and commonly used laboratory species

    PubMed Central

    Alperin, Marianna; Tuttle, Lori J.; Conner, Blair R.; Dixon, Danielle M.; Mathewson, Margie A.; Ward, Samuel R.

    2014-01-01

    Introduction and hypothesis Pelvic floor muscles (PFM) are deleteriously affected by vaginal birth, which contributes to the development of pelvic floor disorders. To mechanistically link these events, experiments using animal models are required, as access to human PFM tissue is challenging. In choosing an animal model, a comparative study of PFM design is necessary, since gross anatomy alone is insufficient to guide the selection. Methods Human PFM architecture was measured using micromechanical dissection and then compared with mouse (n=10), rat (n=10), and rabbit (n=10) using the Architectural Difference Index (ADI) (parameterizing a combined measure of sarcomere length-to-optimal-sarcomere ratio, fiber-to-muscle-length ratio, and fraction of total PFM mass and physiological cross-sectional area (PCSA) contributed by each muscle). Coccygeus (C), iliocaudalis (IC), and pubocaudalis (PC) were harvested and subjected to architectural measurements. Parameters within species were compared using repeated measures analysis of variance (ANOVA) with post hoc Tukey's tests. The scaling relationships of PFM across species were quantified using least-squares regression of log-10-transformed variables. Results Based on the ADI, rat was found to be the most similar to humans (ADI = 2.5), followed by mouse (ADI = 3.3). When animals' body mass was regressed against muscle mass, muscle length, fiber length, and PCSA scaling coefficients showed a negative allometric relationship or smaller increase than predicted by geometric scaling. Conclusion In terms of muscle design among commonly used laboratory animals, rat best approximates the human PFM, followed by mouse. Negative allometric scaling of PFM architectural parameters is likely due to the multifaceted function of these muscles. PMID:24915840
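
    The allometric-scaling conclusion rests on a standard log-log regression: body mass is regressed against each muscle parameter on log10 scales, and a slope below the geometric prediction (1/3 for lengths, 2/3 for areas, 1 for masses) indicates negative allometry. A sketch with rough species body masses and hypothetical fiber lengths (not the study's data):

```python
# Allometric scaling sketch: slope of log10(parameter) vs log10(body mass),
# compared against the geometric-scaling exponent. Fiber lengths are
# hypothetical illustration values.
import numpy as np

body_mass = np.array([0.025, 0.25, 3.0, 70.0])   # kg: mouse, rat, rabbit, human (approx.)
fiber_length = np.array([0.4, 0.8, 1.7, 4.0])    # cm (hypothetical)

slope, intercept = np.polyfit(np.log10(body_mass), np.log10(fiber_length), 1)
geometric_slope = 1.0 / 3.0          # expected exponent for a length if shape is preserved
negative_allometry = slope < geometric_slope     # smaller increase than geometry predicts
```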

  4. Coupling Multi-Component Models with MPH on Distributed MemoryComputer Architectures

    SciTech Connect

    He, Yun; Ding, Chris

    2005-03-24

    A growing trend in developing large and complex applications on today's Teraflop scale computers is to integrate stand-alone and/or semi-independent program components into a comprehensive simulation package. One example is the Community Climate System Model which consists of atmosphere, ocean, land-surface and sea-ice components. Each component is semi-independent and has been developed at a different institution. We study how this multi-component, multi-executable application can run effectively on distributed memory architectures. For the first time, we clearly identify five effective execution modes and develop the MPH library to support application development utilizing these modes. MPH performs component-name registration, resource allocation and initial component handshaking in a flexible way.
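
    The component-name registration and resource-allocation step that MPH performs can be sketched conceptually without MPI: each component registers under a name, receives a disjoint block of processor ranks, and looks up its peers for the initial handshaking. The class and method names below are hypothetical, not MPH's actual interface.

```python
# Conceptual, MPI-free sketch of MPH-style component-name registration:
# components claim disjoint rank ranges and can look up peers by name.
class Registry:
    def __init__(self, total_ranks):
        self.total_ranks = total_ranks
        self.components = {}       # name -> (first_rank, last_rank)
        self._next = 0

    def register(self, name, n_ranks):
        """Allocate a contiguous block of ranks to a named component."""
        if self._next + n_ranks > self.total_ranks:
            raise ValueError("not enough processors for component " + name)
        self.components[name] = (self._next, self._next + n_ranks - 1)
        self._next += n_ranks
        return self.components[name]

    def lookup(self, name):
        """Peer lookup used during initial component handshaking."""
        return self.components[name]

# A climate-style multi-component run on 64 processors (allocation is illustrative):
reg = Registry(total_ranks=64)
reg.register("atmosphere", 32)
reg.register("ocean", 16)
reg.register("land", 8)
reg.register("sea_ice", 8)
# The ocean component can now find the atmosphere's ranks to exchange fluxes:
atm_ranks = reg.lookup("atmosphere")
```

In the real library the rank blocks correspond to MPI communicators, and the same registry mechanism supports the different execution modes (single vs. multiple executables) the paper identifies.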

  5. A component-based, distributed object services architecture for a clinical workstation.

    PubMed Central

    Chueh, H. C.; Raila, W. F.; Pappas, J. J.; Ford, M.; Zatsman, P.; Tu, J.; Barnett, G. O.

    1996-01-01

    Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems as well as newly designed software. We describe one approach to an architecture for a clinical workstation application which is based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components which can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces. PMID:8947744

  6. Judicious use of custom development in an open source component architecture

    NASA Astrophysics Data System (ADS)

    Bristol, S.; Latysh, N.; Long, D.; Tekell, S.; Allen, J.

    2014-12-01

    Modern software engineering is not so much programming from scratch as innovative assembly of existing components. Seamlessly integrating disparate components into a scalable, performant architecture requires sound engineering craftsmanship, and it can increase cost efficiency and accelerate capabilities when software teams focus their creativity on the edges of the problem space. ScienceBase is part of the U.S. Geological Survey scientific cyberinfrastructure, providing data and information management, distribution services, and analysis capabilities in a way that strives to follow this pattern. ScienceBase leverages open source NoSQL and relational databases, search indexing technology, spatial service engines, numerous libraries, and one proprietary but necessary software component in its architecture. The primary engineering focus is cohesive component interaction, including construction of a seamless Application Programming Interface (API) across all elements. The API allows researchers and software developers alike to leverage the infrastructure in unique, creative ways. Scaling the ScienceBase architecture and core API with increasing data volume (more databases) and complexity (integrated science problems) is a primary challenge addressed by judicious use of custom development in the component architecture. Other data management and informatics activities in the earth sciences have independently resolved to a similar design of reusing and building upon established technology and are working through similar issues for managing and developing information (e.g., U.S. Geoscience Information Network; NASA's Earth Observing System Clearing House; GSToRE at the University of New Mexico). Recent discussions facilitated through the Earth Science Information Partners are exploring potential avenues to exploit the implicit relationships between similar projects for explicit gains in our ability to more rapidly advance global scientific cyberinfrastructure.

  7. Investigation of a Novel Common Subexpression Elimination Method for Low Power and Area Efficient DCT Architecture

    PubMed Central

    Siddiqui, M. F.; Reza, A. W.; Kanesan, J.; Ramiah, H.

    2014-01-01

    A wide interest has been observed in finding a low-power and area-efficient hardware design for the discrete cosine transform (DCT) algorithm. This research work proposes a novel Common Subexpression Elimination (CSE) based pipelined architecture for the DCT, aimed at reducing the cost metrics of power and area while maintaining high speed and accuracy in DCT applications. The proposed design combines the techniques of Canonical Signed Digit (CSD) representation and CSE to implement a multiplier-less method for fixed constant multiplication of DCT coefficients. Furthermore, symmetry in the DCT coefficient matrix is used with CSE to further decrease the number of arithmetic operations. This architecture needs a single-port memory to feed the inputs instead of multiport memory, which reduces the hardware cost and area. From the analysis of experimental results and performance comparisons, it is observed that the proposed scheme uses minimal logic, utilizing a mere 340 slices and 22 adders. Moreover, this design meets the real-time constraints of different video/image coders and peak-signal-to-noise-ratio (PSNR) requirements. Furthermore, while maintaining accuracy, the proposed technique improves on recent well-known methods in power reduction, silicon area usage, and maximum operating frequency by 41%, 15%, and 15%, respectively. PMID:25133249
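
    The CSD trick at the heart of such multiplier-less designs can be demonstrated directly: a fixed coefficient is recoded into signed digits {-1, 0, +1} with no two adjacent nonzeros, so each constant multiplication becomes a short chain of shifts and adds/subtracts. A sketch in software (the coefficient value is chosen arbitrarily, not taken from the paper):

```python
# Canonical Signed Digit (CSD) recoding and multiplier-less constant multiply.
def to_csd(n):
    """Recode a non-negative integer into CSD digits (LSB first); the
    construction guarantees no two adjacent nonzero digits."""
    digits = []
    while n:
        if n & 1:
            d = 2 - (n & 3)   # +1 if n % 4 == 1, -1 if n % 4 == 3
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

def csd_multiply(x, digits):
    """Multiply x by the recoded constant using only shifts and add/subtract,
    i.e. with no hardware multiplier."""
    acc = 0
    for i, d in enumerate(digits):
        if d == 1:
            acc += x << i
        elif d == -1:
            acc -= x << i
    return acc

# Example: a fixed-point coefficient with many 1-bits benefits most.
coeff = 237                  # binary 11101101: 6 ones -> 6 adds in plain binary
digits = to_csd(coeff)       # CSD form needs only 4 add/subtract terms
product = csd_multiply(5, digits)
```

In hardware, CSE then goes one step further by sharing identical shift-add subexpressions across the different DCT coefficients.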

  8. [A telemedicine electrocardiography system based on component-architecture software].

    PubMed

    Potapov, I V; Selishchev, S V

    2004-01-01

    The paper deals with a universal component-oriented architecture for creating telemedicine applications. The developed system supports ECG recording, blood-pressure measurement, and pulsometry. The system design comprises a central database server and a client telemedicine module. Data can be transmitted via different interfaces, from an ordinary local network to digital satellite phones. Data protection is ensured by microchip cards used to implement the 3DES authentication algorithm. PMID:15293500

  9. Workflow-enabled distributed component-based information architecture for digital medical imaging enterprises.

    PubMed

    Wong, Stephen T C; Tjandra, Donny; Wang, Huili; Shen, Weimin

    2003-09-01

    Few information systems today offer a flexible means to define and manage the automated part of radiology processes, which provide clinical imaging services for the entire healthcare organization. Even fewer of them provide a coherent architecture that can easily cope with heterogeneity and the inevitable local adaptation of applications, and that can integrate clinical and administrative information to aid better clinical, operational, and business decisions. We describe an innovative enterprise architecture of image information management systems to fill these needs. Such a system is based on the interplay of production workflow management, distributed object computing, Java and Web techniques, and in-depth domain knowledge in radiology operations. Our design adopts the "4+1" architectural-view approach. In this new architecture, PACS and RIS become one, while user interaction can be automated by customized workflow processes. Clinical service applications are implemented as active components. They can be reasonably substituted by applications of local adaptations and can be multiplied for fault tolerance and load balancing. Furthermore, the workflow-enabled digital radiology system would provide powerful query and statistical functions for managing resources and improving productivity. This work potentially leads to a new direction in image information management. We illustrate the innovative design with examples taken from an implemented system. PMID:14518730

  10. Contrasting genetic architectures of schizophrenia and other complex diseases using fast variance components analysis

    PubMed Central

    Bhatia, Gaurav; Gusev, Alexander; Finucane, Hilary K; Bulik-Sullivan, Brendan K; Pollack, Samuela J; de Candia, Teresa R; Lee, Sang Hong; Wray, Naomi R; Kendler, Kenneth S; O’Donovan, Michael C; Neale, Benjamin M; Patterson, Nick

    2015-01-01

    Heritability analyses of GWAS cohorts have yielded important insights into complex disease architecture, and increasing sample sizes hold the promise of further discoveries. Here, we analyze the genetic architecture of schizophrenia in 49,806 samples from the PGC, and nine complex diseases in 54,734 samples from the GERA cohort. For schizophrenia, we infer an overwhelmingly polygenic disease architecture in which ≥71% of 1Mb genomic regions harbor ≥1 variant influencing schizophrenia risk. We also observe significant enrichment of heritability in GC-rich regions and in higher-frequency SNPs for both schizophrenia and GERA diseases. In bivariate analyses, we observe significant genetic correlations (ranging from 0.18 to 0.85) among several pairs of GERA diseases; genetic correlations were on average 1.3x stronger than correlations of overall disease liabilities. To accomplish these analyses, we developed a fast algorithm for multi-component, multi-trait variance components analysis that overcomes prior computational barriers that made such analyses intractable at this scale. PMID:26523775

  11. Frequency multiplexed flux locked loop architecture providing an array of DC SQUIDS having both shared and unshared components

    DOEpatents

    Ganther, Jr., Kenneth R.; Snapp, Lowell D.

    2002-01-01

    Architecture for frequency multiplexing multiple flux locked loops in a system comprising an array of DC SQUID sensors. The architecture involves dividing the traditional flux locked loop into multiple unshared components and a single shared component which, in operation, form a complete flux locked loop relative to each DC SQUID sensor. Each unshared flux locked loop component operates on a different flux modulation frequency. The architecture of the present invention allows a reduction from 2N to N+1 in the number of connections between the cryogenic DC SQUID sensors and their associated room temperature flux locked loops. Furthermore, the 1×N architecture of the present invention can be paralleled to form an M×N array architecture without increasing the required number of flux modulation frequencies.
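
    The patent's wiring claim is simple to verify arithmetically: N SQUIDs conventionally need two leads each between the cryostat and the room-temperature electronics, while frequency multiplexing leaves one dedicated lead per sensor plus a single shared line. A minimal sketch of the count (illustrative only; function names are ours, not the patent's):

```python
def traditional_lead_count(n_squids: int) -> int:
    # Each DC SQUID has its own complete flux locked loop: two leads apiece.
    return 2 * n_squids

def multiplexed_lead_count(n_squids: int) -> int:
    # One unshared lead per SQUID (each on its own modulation frequency)
    # plus one line serving the single shared flux-locked-loop component.
    return n_squids + 1

for n in (4, 16, 64):
    print(n, traditional_lead_count(n), multiplexed_lead_count(n))
```

    The saving grows with array size: a 64-sensor array drops from 128 cryostat connections to 65.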

  12. Defining the role of common variation in the genomic and biological architecture of adult human height.

    PubMed

    Wood, Andrew R; Esko, Tonu; Yang, Jian; Vedantam, Sailaja; Pers, Tune H; Gustafsson, Stefan; Chu, Audrey Y; Estrada, Karol; Luan, Jian'an; Kutalik, Zoltán; Amin, Najaf; Buchkovich, Martin L; Croteau-Chonka, Damien C; Day, Felix R; Duan, Yanan; Fall, Tove; Fehrmann, Rudolf; Ferreira, Teresa; Jackson, Anne U; Karjalainen, Juha; Lo, Ken Sin; Locke, Adam E; Mägi, Reedik; Mihailov, Evelin; Porcu, Eleonora; Randall, Joshua C; Scherag, André; Vinkhuyzen, Anna A E; Westra, Harm-Jan; Winkler, Thomas W; Workalemahu, Tsegaselassie; Zhao, Jing Hua; Absher, Devin; Albrecht, Eva; Anderson, Denise; Baron, Jeffrey; Beekman, Marian; Demirkan, Ayse; Ehret, Georg B; Feenstra, Bjarke; Feitosa, Mary F; Fischer, Krista; Fraser, Ross M; Goel, Anuj; Gong, Jian; Justice, Anne E; Kanoni, Stavroula; Kleber, Marcus E; Kristiansson, Kati; Lim, Unhee; Lotay, Vaneet; Lui, Julian C; Mangino, Massimo; Mateo Leach, Irene; Medina-Gomez, Carolina; Nalls, Michael A; Nyholt, Dale R; Palmer, Cameron D; Pasko, Dorota; Pechlivanis, Sonali; Prokopenko, Inga; Ried, Janina S; Ripke, Stephan; Shungin, Dmitry; Stancáková, Alena; Strawbridge, Rona J; Sung, Yun Ju; Tanaka, Toshiko; Teumer, Alexander; Trompet, Stella; van der Laan, Sander W; van Setten, Jessica; Van Vliet-Ostaptchouk, Jana V; Wang, Zhaoming; Yengo, Loïc; Zhang, Weihua; Afzal, Uzma; Arnlöv, Johan; Arscott, Gillian M; Bandinelli, Stefania; Barrett, Amy; Bellis, Claire; Bennett, Amanda J; Berne, Christian; Blüher, Matthias; Bolton, Jennifer L; Böttcher, Yvonne; Boyd, Heather A; Bruinenberg, Marcel; Buckley, Brendan M; Buyske, Steven; Caspersen, Ida H; Chines, Peter S; Clarke, Robert; Claudi-Boehm, Simone; Cooper, Matthew; Daw, E Warwick; De Jong, Pim A; Deelen, Joris; Delgado, Graciela; Denny, Josh C; Dhonukshe-Rutten, Rosalie; Dimitriou, Maria; Doney, Alex S F; Dörr, Marcus; Eklund, Niina; Eury, Elodie; Folkersen, Lasse; Garcia, Melissa E; Geller, Frank; Giedraitis, Vilmantas; Go, Alan S; Grallert, Harald; Grammer, Tanja B; Gräßler, Jürgen; 
Grönberg, Henrik; de Groot, Lisette C P G M; Groves, Christopher J; Haessler, Jeffrey; Hall, Per; Haller, Toomas; Hallmans, Goran; Hannemann, Anke; Hartman, Catharina A; Hassinen, Maija; Hayward, Caroline; Heard-Costa, Nancy L; Helmer, Quinta; Hemani, Gibran; Henders, Anjali K; Hillege, Hans L; Hlatky, Mark A; Hoffmann, Wolfgang; Hoffmann, Per; Holmen, Oddgeir; Houwing-Duistermaat, Jeanine J; Illig, Thomas; Isaacs, Aaron; James, Alan L; Jeff, Janina; Johansen, Berit; Johansson, Åsa; Jolley, Jennifer; Juliusdottir, Thorhildur; Junttila, Juhani; Kho, Abel N; Kinnunen, Leena; Klopp, Norman; Kocher, Thomas; Kratzer, Wolfgang; Lichtner, Peter; Lind, Lars; Lindström, Jaana; Lobbens, Stéphane; Lorentzon, Mattias; Lu, Yingchang; Lyssenko, Valeriya; Magnusson, Patrik K E; Mahajan, Anubha; Maillard, Marc; McArdle, Wendy L; McKenzie, Colin A; McLachlan, Stela; McLaren, Paul J; Menni, Cristina; Merger, Sigrun; Milani, Lili; Moayyeri, Alireza; Monda, Keri L; Morken, Mario A; Müller, Gabriele; Müller-Nurasyid, Martina; Musk, Arthur W; Narisu, Narisu; Nauck, Matthias; Nolte, Ilja M; Nöthen, Markus M; Oozageer, Laticia; Pilz, Stefan; Rayner, Nigel W; Renstrom, Frida; Robertson, Neil R; Rose, Lynda M; Roussel, Ronan; Sanna, Serena; Scharnagl, Hubert; Scholtens, Salome; Schumacher, Fredrick R; Schunkert, Heribert; Scott, Robert A; Sehmi, Joban; Seufferlein, Thomas; Shi, Jianxin; Silventoinen, Karri; Smit, Johannes H; Smith, Albert Vernon; Smolonska, Joanna; Stanton, Alice V; Stirrups, Kathleen; Stott, David J; Stringham, Heather M; Sundström, Johan; Swertz, Morris A; Syvänen, Ann-Christine; Tayo, Bamidele O; Thorleifsson, Gudmar; Tyrer, Jonathan P; van Dijk, Suzanne; van Schoor, Natasja M; van der Velde, Nathalie; van Heemst, Diana; van Oort, Floor V A; Vermeulen, Sita H; Verweij, Niek; Vonk, Judith M; Waite, Lindsay L; Waldenberger, Melanie; Wennauer, Roman; Wilkens, Lynne R; Willenborg, Christina; Wilsgaard, Tom; Wojczynski, Mary K; Wong, Andrew; Wright, Alan F; Zhang, Qunyuan; 
Arveiler, Dominique; Bakker, Stephan J L; Beilby, John; Bergman, Richard N; Bergmann, Sven; Biffar, Reiner; Blangero, John; Boomsma, Dorret I; Bornstein, Stefan R; Bovet, Pascal; Brambilla, Paolo; Brown, Morris J; Campbell, Harry; Caulfield, Mark J; Chakravarti, Aravinda; Collins, Rory; Collins, Francis S; Crawford, Dana C; Cupples, L Adrienne; Danesh, John; de Faire, Ulf; den Ruijter, Hester M; Erbel, Raimund; Erdmann, Jeanette; Eriksson, Johan G; Farrall, Martin; Ferrannini, Ele; Ferrières, Jean; Ford, Ian; Forouhi, Nita G; Forrester, Terrence; Gansevoort, Ron T; Gejman, Pablo V

    2014-11-01

    Using genome-wide data from 253,288 individuals, we identified 697 variants at genome-wide significance that together explained one-fifth of the heritability for adult height. By testing different numbers of variants in independent studies, we show that the most strongly associated ∼2,000, ∼3,700 and ∼9,500 SNPs explained ∼21%, ∼24% and ∼29% of phenotypic variance. Furthermore, all common variants together captured 60% of heritability. The 697 variants clustered in 423 loci were enriched for genes, pathways and tissue types known to be involved in growth and together implicated genes and pathways not highlighted in earlier efforts, such as signaling by fibroblast growth factors, WNT/β-catenin and chondroitin sulfate-related genes. We identified several genes and pathways not previously connected with human skeletal growth, including mTOR, osteoglycin and binding of hyaluronic acid. Our results indicate a genetic architecture for human height that is characterized by a very large but finite number (thousands) of causal variants. PMID:25282103

  13. Defining the role of common variation in the genomic and biological architecture of adult human height

    PubMed Central

    Chu, Audrey Y; Estrada, Karol; Luan, Jian’an; Kutalik, Zoltán; Amin, Najaf; Buchkovich, Martin L; Croteau-Chonka, Damien C; Day, Felix R; Duan, Yanan; Fall, Tove; Fehrmann, Rudolf; Ferreira, Teresa; Jackson, Anne U; Karjalainen, Juha; Lo, Ken Sin; Locke, Adam E; Mägi, Reedik; Mihailov, Evelin; Porcu, Eleonora; Randall, Joshua C; Scherag, André; Vinkhuyzen, Anna AE; Westra, Harm-Jan; Winkler, Thomas W; Workalemahu, Tsegaselassie; Zhao, Jing Hua; Absher, Devin; Albrecht, Eva; Anderson, Denise; Baron, Jeffrey; Beekman, Marian; Demirkan, Ayse; Ehret, Georg B; Feenstra, Bjarke; Feitosa, Mary F; Fischer, Krista; Fraser, Ross M; Goel, Anuj; Gong, Jian; Justice, Anne E; Kanoni, Stavroula; Kleber, Marcus E; Kristiansson, Kati; Lim, Unhee; Lotay, Vaneet; Lui, Julian C; Mangino, Massimo; Leach, Irene Mateo; Medina-Gomez, Carolina; Nalls, Michael A; Nyholt, Dale R; Palmer, Cameron D; Pasko, Dorota; Pechlivanis, Sonali; Prokopenko, Inga; Ried, Janina S; Ripke, Stephan; Shungin, Dmitry; Stancáková, Alena; Strawbridge, Rona J; Sung, Yun Ju; Tanaka, Toshiko; Teumer, Alexander; Trompet, Stella; van der Laan, Sander W; van Setten, Jessica; Van Vliet-Ostaptchouk, Jana V; Wang, Zhaoming; Yengo, Loïc; Zhang, Weihua; Afzal, Uzma; Ärnlöv, Johan; Arscott, Gillian M; Bandinelli, Stefania; Barrett, Amy; Bellis, Claire; Bennett, Amanda J; Berne, Christian; Blüher, Matthias; Bolton, Jennifer L; Böttcher, Yvonne; Boyd, Heather A; Bruinenberg, Marcel; Buckley, Brendan M; Buyske, Steven; Caspersen, Ida H; Chines, Peter S; Clarke, Robert; Claudi-Boehm, Simone; Cooper, Matthew; Daw, E Warwick; De Jong, Pim A; Deelen, Joris; Delgado, Graciela; Denny, Josh C; Dhonukshe-Rutten, Rosalie; Dimitriou, Maria; Doney, Alex SF; Dörr, Marcus; Eklund, Niina; Eury, Elodie; Folkersen, Lasse; Garcia, Melissa E; Geller, Frank; Giedraitis, Vilmantas; Go, Alan S; Grallert, Harald; Grammer, Tanja B; Gräßler, Jürgen; Grönberg, Henrik; de Groot, Lisette C.P.G.M.; Groves, Christopher J; Haessler, Jeffrey; Hall, Per; 
Haller, Toomas; Hallmans, Goran; Hannemann, Anke; Hartman, Catharina A; Hassinen, Maija; Hayward, Caroline; Heard-Costa, Nancy L; Helmer, Quinta; Hemani, Gibran; Henders, Anjali K; Hillege, Hans L; Hlatky, Mark A; Hoffmann, Wolfgang; Hoffmann, Per; Holmen, Oddgeir; Houwing-Duistermaat, Jeanine J; Illig, Thomas; Isaacs, Aaron; James, Alan L; Jeff, Janina; Johansen, Berit; Johansson, Åsa; Jolley, Jennifer; Juliusdottir, Thorhildur; Junttila, Juhani; Kho, Abel N; Kinnunen, Leena; Klopp, Norman; Kocher, Thomas; Kratzer, Wolfgang; Lichtner, Peter; Lind, Lars; Lindström, Jaana; Lobbens, Stéphane; Lorentzon, Mattias; Lu, Yingchang; Lyssenko, Valeriya; Magnusson, Patrik KE; Mahajan, Anubha; Maillard, Marc; McArdle, Wendy L; McKenzie, Colin A; McLachlan, Stela; McLaren, Paul J; Menni, Cristina; Merger, Sigrun; Milani, Lili; Moayyeri, Alireza; Monda, Keri L; Morken, Mario A; Müller, Gabriele; Müller-Nurasyid, Martina; Musk, Arthur W; Narisu, Narisu; Nauck, Matthias; Nolte, Ilja M; Nöthen, Markus M; Oozageer, Laticia; Pilz, Stefan; Rayner, Nigel W; Renstrom, Frida; Robertson, Neil R; Rose, Lynda M; Roussel, Ronan; Sanna, Serena; Scharnagl, Hubert; Scholtens, Salome; Schumacher, Fredrick R; Schunkert, Heribert; Scott, Robert A; Sehmi, Joban; Seufferlein, Thomas; Shi, Jianxin; Silventoinen, Karri; Smit, Johannes H; Smith, Albert Vernon; Smolonska, Joanna; Stanton, Alice V; Stirrups, Kathleen; Stott, David J; Stringham, Heather M; Sundström, Johan; Swertz, Morris A; Syvänen, Ann-Christine; Tayo, Bamidele O; Thorleifsson, Gudmar; Tyrer, Jonathan P; van Dijk, Suzanne; van Schoor, Natasja M; van der Velde, Nathalie; van Heemst, Diana; van Oort, Floor VA; Vermeulen, Sita H; Verweij, Niek; Vonk, Judith M; Waite, Lindsay L; Waldenberger, Melanie; Wennauer, Roman; Wilkens, Lynne R; Willenborg, Christina; Wilsgaard, Tom; Wojczynski, Mary K; Wong, Andrew; Wright, Alan F; Zhang, Qunyuan; Arveiler, Dominique; Bakker, Stephan JL; Beilby, John; Bergman, Richard N; Bergmann, Sven; Biffar, 
Reiner; Blangero, John; Boomsma, Dorret I; Bornstein, Stefan R; Bovet, Pascal; Brambilla, Paolo; Brown, Morris J; Campbell, Harry; Caulfield, Mark J; Chakravarti, Aravinda; Collins, Rory; Collins, Francis S; Crawford, Dana C; Cupples, L Adrienne; Danesh, John; de Faire, Ulf; den Ruijter, Hester M; Erbel, Raimund; Erdmann, Jeanette; Eriksson, Johan G; Farrall, Martin; Ferrannini, Ele; Ferrières, Jean; Ford, Ian; Forouhi, Nita G; Forrester, Terrence; Gansevoort, Ron T

    2014-01-01

    Using genome-wide data from 253,288 individuals, we identified 697 variants at genome-wide significance that together explain one-fifth of heritability for adult height. By testing different numbers of variants in independent studies, we show that the most strongly associated ~2,000, ~3,700 and ~9,500 SNPs explained ~21%, ~24% and ~29% of phenotypic variance. Furthermore, all common variants together captured the majority (60%) of heritability. The 697 variants clustered in 423 loci enriched for genes, pathways, and tissue-types known to be involved in growth and together implicated genes and pathways not highlighted in earlier efforts, such as signaling by fibroblast growth factors, WNT/beta-catenin, and chondroitin sulfate-related genes. We identified several genes and pathways not previously connected with human skeletal growth, including mTOR, osteoglycin and binding of hyaluronic acid. Our results indicate a genetic architecture for human height that is characterized by a very large but finite number (thousands) of causal variants. PMID:25282103

  14. Contrasting genetic architectures of schizophrenia and other complex diseases using fast variance-components analysis.

    PubMed

    Loh, Po-Ru; Bhatia, Gaurav; Gusev, Alexander; Finucane, Hilary K; Bulik-Sullivan, Brendan K; Pollack, Samuela J; de Candia, Teresa R; Lee, Sang Hong; Wray, Naomi R; Kendler, Kenneth S; O'Donovan, Michael C; Neale, Benjamin M; Patterson, Nick; Price, Alkes L

    2015-12-01

    Heritability analyses of genome-wide association study (GWAS) cohorts have yielded important insights into complex disease architecture, and increasing sample sizes hold the promise of further discoveries. Here we analyze the genetic architectures of schizophrenia in 49,806 samples from the PGC and nine complex diseases in 54,734 samples from the GERA cohort. For schizophrenia, we infer an overwhelmingly polygenic disease architecture in which ≥71% of 1-Mb genomic regions harbor ≥1 variant influencing schizophrenia risk. We also observe significant enrichment of heritability in GC-rich regions and in higher-frequency SNPs for both schizophrenia and GERA diseases. In bivariate analyses, we observe significant genetic correlations (ranging from 0.18 to 0.85) for several pairs of GERA diseases; genetic correlations were on average 1.3 times stronger than the correlations of overall disease liabilities. To accomplish these analyses, we developed a fast algorithm for multicomponent, multi-trait variance-components analysis that overcomes prior computational barriers that made such analyses intractable at this scale. PMID:26523775
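
    The paper's contribution is a fast multi-component REML algorithm; as a conceptual toy, the quantity it estimates (SNP heritability) can also be obtained by a method-of-moments Haseman-Elston regression of phenotype cross-products on genetic relatedness. The stdlib-only sketch below simulates a small polygenic trait and approximately recovers its heritability; it is illustrative only and bears no resemblance to the scale or algorithm of the paper.

```python
import random

random.seed(0)
N, M, H2 = 200, 100, 0.5   # individuals, SNPs, simulated heritability

# Standardized genotypes: allele count ~ Binomial(2, p), scaled to mean 0, var 1.
freqs = [random.uniform(0.1, 0.9) for _ in range(M)]
geno = [[(sum(random.random() < p for _ in range(2)) - 2 * p)
         / (2 * p * (1 - p)) ** 0.5
         for p in freqs] for _ in range(N)]

# Polygenic phenotype: many small additive effects plus environmental noise.
beta = [random.gauss(0, (H2 / M) ** 0.5) for _ in range(M)]
y = [sum(g * b for g, b in zip(row, beta)) + random.gauss(0, (1 - H2) ** 0.5)
     for row in geno]
mu = sum(y) / N
sd = (sum((v - mu) ** 2 for v in y) / N) ** 0.5
y = [(v - mu) / sd for v in y]   # standardize the phenotype

# Haseman-Elston: regress y_i * y_j on GRM entries A_ij over all pairs i < j;
# the slope estimates the variance explained by the SNPs (h^2).
sxy = sxx = 0.0
for i in range(N):
    for j in range(i + 1, N):
        a = sum(gi * gj for gi, gj in zip(geno[i], geno[j])) / M  # GRM entry
        sxy += a * y[i] * y[j]
        sxx += a * a
h2_hat = sxy / sxx
print(round(h2_hat, 2))  # noisy estimate near the simulated H2 = 0.5
```

    This naive pairwise loop is O(N²M), which is exactly the kind of computational barrier the paper's fast variance-components algorithm was designed to overcome at biobank scale.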

  15. THE ACTIVITY/SPACE, A LEAST COMMON DENOMINATOR FOR ARCHITECTURAL PROGRAMMING.

    ERIC Educational Resources Information Center

    HAVILAND, DAVID S.

    Two interrelated problem areas of architectural programming are discussed: (1) "needs definition," and (2) "needs documentation and communication." Fundamental issues and work of the Center for Architectural Research are presented. Issues are the failure to recognize how, when, and in what form the need will be used. Criteria formulation must be…

  16. On Studying Common Factor Variance in Multiple-Component Measuring Instruments

    ERIC Educational Resources Information Center

    Raykov, Tenko; Pohl, Steffi

    2013-01-01

    A method for examining common factor variance in multiple-component measuring instruments is outlined. The procedure is based on an application of the latent variable modeling methodology and is concerned with evaluating observed variance explained by a global factor and by one or more additional component-specific factors. The approach furnishes…

  17. The component-based architecture of the HELIOS medical software engineering environment.

    PubMed

    Degoulet, P; Jean, F C; Engelmann, U; Meinzer, H P; Baud, R; Sandblad, B; Wigertz, O; Le Meur, R; Jagermann, C

    1994-12-01

    The constitution of highly integrated health information networks and the growth of multimedia technologies raise new challenges for the development of medical applications. We describe in this paper the general architecture of the HELIOS medical software engineering environment, devoted to the development and maintenance of multimedia distributed medical applications. HELIOS consists of a set of software components federated by a communication channel called the HELIOS Unification Bus. The HELIOS kernel includes three main components: the Analysis-Design Environment, the Object Information System and the Interface Manager. HELIOS services consist of a collection of toolkits providing the necessary facilities to medical application developers; they include Image-Related services, a Natural Language Processor, a Decision Support System and Connection services. The project gives special attention to both object-oriented approaches and software re-usability, which are considered crucial steps towards the development of more reliable, coherent and integrated applications. PMID:7882667

  18. The Emergence of Agent-Based Technology as an Architectural Component of Serious Games

    NASA Technical Reports Server (NTRS)

    Phillips, Mark; Scolaro, Jackie; Scolaro, Daniel

    2010-01-01

    The evolution of games as an alternative to traditional simulations in the military context has been gathering momentum over the past five years, even though the exploration of their use in the serious sense has been ongoing since the mid-nineties. Much of the focus has been on the aesthetics of the visuals provided by the core game engine as well as the artistry provided by talented development teams to produce not only breathtaking artwork, but highly immersive game play. Consideration of game technology is now so much a part of the modeling and simulation landscape that it is becoming difficult to distinguish traditional simulation solutions from game-based approaches. But games have yet to provide the much needed interactive free play that has been the domain of semi-autonomous forces (SAF). The component-based middleware architecture that game engines provide promises a great deal in terms of options for the integration of agent solutions to support the development of non-player characters that engage the human player without the deterministic nature of scripted behaviors. However, there are a number of hard-learned lessons on the modeling and simulation side of the equation that game developers have yet to learn, such as: correlation of heterogeneous systems, scalability of both terrain and numbers of non-player entities, and the bi-directional nature of simulation to game interaction provided by Distributed Interactive Simulation (DIS) and High Level Architecture (HLA).

  19. Security Framework for Pervasive Healthcare Architectures Utilizing MPEG-21 IPMP Components

    PubMed Central

    Fragopoulos, Anastasios; Gialelis, John; Serpanos, Dimitrios

    2009-01-01

    In modern ubiquitous computing environments, the deployment of pervasive healthcare architectures is more necessary than ever. In such architectures the patient is the central point, surrounded by different types of embedded and small computing devices that measure sensitive physical indications and interact with hospital databases, thus allowing urgent medical response in critical situations. Such environments must be developed to satisfy the basic security requirements: real-time secure data communication, protection of sensitive medical data and measurements, data integrity and confidentiality, and protection of the monitored patient's privacy. In this work, we argue that the MPEG-21 Intellectual Property Management and Protection (IPMP) components can be used to protect transmitted medical information and to enhance the patient's privacy, since they provide selective and controlled access to the medical data sent toward the hospital's servers. PMID:19132095

  20. Architecture of the major component of the type III secretion system export apparatus

    PubMed Central

    Abrusci, Patrizia; Vergara–Irigaray, Marta; Johnson, Steven; Beeby, Morgan D; Hendrixson, David; Roversi, Pietro; Friede, Miriam E; Deane, Janet E; Jensen, Grant J; Tang, Christoph M; Lea, Susan M

    2012-01-01

    Type III secretion systems (T3SSs) are bacterial membrane-embedded secretion nanomachines designed to export specifically targeted sets of proteins from the bacterial cytoplasm. Secretion through T3SS is governed by a subset of inner membrane proteins termed the ‘export apparatus’. We show that a key member of the Shigella flexneri export apparatus, MxiA, assembles into a ring essential for secretion in vivo. The ring-forming interfaces are well conserved in both non-flagellar and flagellar homologues, implying that the ring is an evolutionarily conserved feature in these systems. Electron cryo-tomography reveals a T3SS-associated cytoplasmic torus of size and shape corresponding to the MxiA ring, aligned with the secretion channel and located between the secretion pore and the ATPase complex. This defines the molecular architecture of the dominant component of the export apparatus and allows us to propose a model for the molecular mechanisms controlling secretion. PMID:23222644

  1. Abstract Interfaces for Data Analysis - Component Architecture for Data Analysis Tools

    SciTech Connect

    Barrand, Guy

    2002-08-20

    The fast turnover of software technologies, in particular in the domain of interactivity (covering user interfaces and visualization), makes it difficult for a small group of people to produce complete and polished software tools before the underlying technologies make them obsolete. At the HepVis '99 workshop, a working group was formed to improve the production of software tools for data analysis in HENP. Besides promoting a distributed development organization, one goal of the group is to systematically design a set of abstract interfaces using modern OO analysis and OO design techniques. An initial domain analysis identified several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimizing re-use and maintainability of each component individually. The interfaces have been defined in Java and C++, and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist) and Java (Java Analysis Studio). A special implementation aims at accessing the Java libraries (through their Abstract Interfaces) from C++. This paper gives an overview of the architecture and design of the various components for data analysis as discussed in AIDA.
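
    The decoupling the abstract describes — analysis code written against abstract interfaces, with interchangeable concrete implementations — can be sketched briefly. The actual AIDA interfaces are defined in Java and C++; the Python names below are simplified stand-ins for illustration, not the real AIDA API.

```python
from abc import ABC, abstractmethod

class IHistogram1D(ABC):
    """Abstract interface: analysis code depends only on this contract."""
    @abstractmethod
    def fill(self, x: float, weight: float = 1.0) -> None: ...
    @abstractmethod
    def entries(self) -> int: ...
    @abstractmethod
    def mean(self) -> float: ...

class InMemoryHistogram1D(IHistogram1D):
    """One concrete implementation; others could be swapped in freely."""
    def __init__(self) -> None:
        self._n, self._sum = 0, 0.0
    def fill(self, x: float, weight: float = 1.0) -> None:
        self._n += 1
        self._sum += weight * x
    def entries(self) -> int:
        return self._n
    def mean(self) -> float:
        return self._sum / self._n if self._n else 0.0

def analyze(h: IHistogram1D, data) -> float:
    # The analyzer sees only the interface, so any implementation plugs in,
    # which is the coupling reduction the AIDA design aims for.
    for x in data:
        h.fill(x)
    return h.mean()

print(analyze(InMemoryHistogram1D(), [1.0, 2.0, 3.0]))  # 2.0
```

    Because `analyze` names only the abstract type, a different backend (say, one writing to a file or a plotting tool) can replace `InMemoryHistogram1D` without touching the analysis code.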

  2. Component Analysis versus Common Factor Analysis: Some issues in Selecting an Appropriate Procedure.

    PubMed

    Velicer, W F; Jackson, D N

    1990-01-01

    Should one do a component analysis or a factor analysis? The choice is not obvious, because the two broad classes of procedures serve a similar purpose, and share many important mathematical characteristics. Despite many textbooks describing common factor analysis as the preferred procedure, principal component analysis has been the most widely applied. Here we summarize relevant information for the prospective factor/component analyst. First, we discuss the key algebraic similarities and differences. Next, we analyze a number of theoretical and practical issues. The more practical aspects include: the degree of numeric similarity between solutions from the two methods, some common rules for the number of factors to be retained, effects resulting from overextraction, problems with improper solutions, and comparisons in computational efficiency. Finally, we review some broader theoretical issues: the factor indeterminacy issue, the differences between exploratory and confirmatory procedures, and the issue of latent versus manifest variables. PMID:26741964
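
    One practical point in this debate can be illustrated numerically: principal component loadings tend to exceed common-factor loadings because the first component absorbs unique (error) variance that a factor model assigns to the uniquenesses. The stdlib-only toy below simulates a one-factor model and extracts the first principal component by power iteration (an illustration of the general issue, not of the authors' analyses).

```python
import random

random.seed(1)
N = 5000
LOADING = 0.8                          # true common-factor loading, 3 indicators
UNIQUE_SD = (1 - LOADING ** 2) ** 0.5  # unique standard deviation per indicator

# One latent factor drives three standardized indicators plus unique noise.
data = []
for _ in range(N):
    f = random.gauss(0, 1)
    data.append([LOADING * f + UNIQUE_SD * random.gauss(0, 1) for _ in range(3)])

cols = list(zip(*data))
def cov(a, b):
    ma, mb = sum(a) / N, sum(b) / N
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / N
S = [[cov(cols[i], cols[j]) for j in range(3)] for i in range(3)]

# First principal component via power iteration on the covariance matrix.
v = [1.0, 1.0, 1.0]
for _ in range(50):
    w = [sum(S[i][j] * v[j] for j in range(3)) for i in range(3)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]
eigval = sum(v[i] * sum(S[i][j] * v[j] for j in range(3)) for i in range(3))

# Component loadings sqrt(eigval) * v come out above the true factor loading
# (0.8) because the component also soaks up part of the unique variance.
pc_loadings = [abs(x) * eigval ** 0.5 for x in v]
print([round(x, 2) for x in pc_loadings])
```

    The inflation is the numeric face of the latent-versus-manifest distinction the review discusses: components summarize observed variance, factors model only its common part.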

  3. Statistical intercomparison of global climate models: A common principal component approach with application to GCM data

    SciTech Connect

    Sengupta, S.K.; Boyle, J.S.

    1993-05-01

    Variables describing atmospheric circulation and other climate parameters derived from various GCMs and obtained from observations can be represented on a spatio-temporal grid (lattice) structure. The primary objective of this paper is to explore existing as well as some new statistical methods to analyze such data structures for the purpose of model diagnostics and intercomparison from a statistical perspective. Among the several statistical methods considered here, a new method based on common principal components appears most promising for the purpose of intercomparison of spatio-temporal data structures arising in the task of model/model and model/data intercomparison. A complete strategy for such an intercomparison is outlined. The strategy includes two steps. First, the commonality of spatial structures in two (or more) fields is captured in the common principal vectors. Second, the corresponding principal components obtained as time series are then compared on the basis of similarities in their temporal evolution.

  4. Common object request broker architecture (CORBA)-based security services for the virtual radiology environment.

    PubMed

    Martinez, R; Cole, C; Rozenblit, J; Cook, J F; Chacko, A K

    2000-05-01

    The US Army Great Plains Regional Medical Command (GPRMC) has a requirement to conform to Department of Defense (DoD) and Army security policies for the Virtual Radiology Environment (VRE) Project. Within the DoD, security policy is defined as the set of laws, rules, and practices that regulate how an organization manages, protects, and distributes sensitive information. Security policy in the DoD is described by the Trusted Computer System Evaluation Criteria (TCSEC), Army Regulation (AR) 380-19, Defense Information Infrastructure Common Operating Environment (DII COE), Military Health Services System Automated Information Systems Security Policy Manual, and National Computer Security Center-TG-005, "Trusted Network Interpretation." These documents were used to develop a security policy that defines information protection requirements that are made with respect to those laws, rules, and practices that are required to protect the information stored and processed in the VRE Project. The goal of the security policy is to provide for a C2-level of information protection while also satisfying the functional needs of the GPRMC's user community. This report summarizes the security policy for the VRE and defines the CORBA security services that satisfy the policy. In the VRE, the information to be protected is embedded into three major information components: (1) Patient information consists of Digital Imaging and Communications in Medicine (DICOM)-formatted fields. The patient information resides in the digital imaging network picture archiving and communication system (DIN-PACS) networks in the database archive systems and includes (a) patient demographics; (b) patient images from x-ray, computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound (US); and (c) prior patient images and related patient history. (2) Meta-Manager information to be protected consists of several data objects. This information is distributed to the Meta-Manager nodes and

  5. Application of the component paradigm for analysis and design of advanced health system architectures.

    PubMed

    Blobel, B

    2000-12-01

    Based on the component paradigm for software engineering as well as on a consideration of common middleware approaches for health information systems, a generic component model has been developed supporting analysis, design, implementation and harmonisation of such complex systems. Using methods like abstract automatons and the Unified Modelling Language (UML), it could be shown that such components enable the modelling of real-world systems at different levels of abstractions and granularity, so reflecting different views on the same system in a generic and consistent way. Therefore, not only programs and technologies could be modelled, but also business processes, organisational frameworks or security issues as done successfully within the framework of several European projects. PMID:11137472

  6. Optimal Architecture for an Asteroid Mining Mission: System Components and Project Execution

    NASA Astrophysics Data System (ADS)

    Erickson, Ken R.

    2007-01-01

    Near-Earth asteroids (NEAs) offer potential profits both in the near-term (mining platinum group metals, or PGMs) and long-term (harvesting water, volatiles and ore to provide the economic backbone for lunar, Martian and other space exploration). The raw materials abundant in NEAs include: water and other volatiles for life-support and power; nickel, iron and other metals for construction and manufacturing; carbonaceous compounds for ceramics and building materials; and PGMs for fuel cells and numerous applications on Earth. An efficient, flexible and cost-effective mission utilizing adaptable and resilient robotic components is essential to successfully establish NEA mining as a commercial enterprise. This paper presents an optimized architecture, detailing necessary engineering components, task integration between them, and methods to address the more likely problems encountered. Candidate NEAs are suggested that could offer optimal PGM resources and that have already been evaluated by rendezvous mapping. Mission delta-V and propellant selection are based upon launch from and return to LEO. On-site equipment includes AI-guided robotics, with human telecontrol from Earth to minimize risk and cost. A command-control-communication (CCC) unit orbits the NEA and coordinates four small lander-miners (LMs), each of which acquires and processes regolith. Two LMs are specialized for water and volatiles, two for PGM and Ni-Fe ore. A solar-powered unit hydrolyzes water from the NEA into H2 and O2 for use as propellant, and a solar-thermal propulsion unit returns additional water, PGMs and Ni-Fe ore to LEO. The proposed architecture emphasizes flexibility, redundancy of critical units, and fail-safes to maximize the probability of mission success.
Potential problems addressed include: failure of components, varying surface conditions and mineralogic content, fluctuating solar exposure (due to asteroid rotation) and its impact on solar power units, extreme temperature changes

  7. Muscle architecture of the common chimpanzee (Pan troglodytes): perspectives for investigating chimpanzee behavior.

    PubMed

    Carlson, Kristian J

    2006-07-01

    Thorpe et al. (Am J Phys Anthropol 110:179-199, 1999) quantified chimpanzee (Pan troglodytes) muscle architecture and joint moment arms to determine whether they functionally compensated for structural differences between chimpanzees and humans. They observed enough distinction to conclude that musculoskeletal properties were not compensatory, and suggested that chimpanzees and humans do not exhibit dynamically similar movements. These investigators based their assessment on unilateral limb musculatures from three male chimpanzees, one of which they considered a non-adult representative. Factors such as age, sex, and behavioral lateralization may be responsible for variation in chimpanzee muscle architecture, but this is presently unknown. While the full extent of variation in chimpanzee muscle architecture due to such factors cannot be evaluated with data presently available, the present study expands the chimpanzee dataset and provides a preliminary glimpse of the potential relevance of these factors. Thirty-seven forelimb and 36 hind limb muscles were assessed in two chimpanzee cadavers: one unilaterally (right limbs), and one bilaterally. Mass, fiber length, and physiological cross-sectional area (PCSA) are reported for individual muscles and muscle groups. The musculature of an adult female is more similar in architectural patterns to a young male chimpanzee than to humans, particularly when comparing muscle groups. Age- and sex-related intraspecific differences do not obscure chimpanzee-human interspecific differences. Side asymmetry in one chimpanzee, despite consistent forelimb directional asymmetry, also does not exceed the magnitude of chimpanzee-human differences. Left forelimb muscles, on average, usually had higher masses and longer fiber lengths than right, while right forelimb muscles, on average, usually had greater PCSAs than left. Most muscle groups from the left forelimb exhibited greater masses than right groups, but group asymmetry was significant

  8. Miniaturized Analytical Platforms From Nanoparticle Components: Studies in the Construction, Characterization, and High-Throughput Usage of These Novel Architectures

    SciTech Connect

    Andrew David Pris

    2003-08-05

The scientific community has recently experienced an overall effort to reduce the physical size of many experimental components to the nanometer size range. This size is unique as the characteristics of this regime involve aspects of pure physics, biology, and chemistry. One extensively studied example of a nanometer-sized experimental component, which acts as a junction between these three principal scientific disciplines, is deoxyribonucleic acid (DNA) or ribonucleic acid (RNA). These biopolymers not only contain the biological genetic guide to code for the production of life-sustaining materials, but are also being probed by physicists as a means to create electrical circuits and furthermore as controllable architectural and sensor motifs in the chemical disciplines. Possibly the most common nano-sized component shared among these sciences is the nanoparticle, which can be composed of a variety of materials. The cross-discipline employment of nanoparticles is evident from the vast amount of literature that has been produced from each of the individual communities within the last decade. Along these cross-discipline lines, this dissertation examines the use of several different types of nanoparticles with a wide array of surface chemistries to understand their adsorption properties and to construct unique miniaturized analytical and immunoassay platforms. This introduction will act as a literature review to provide key information regarding the synthesis and surface chemistries of several types of nanoparticles. This material will set the stage for a discussion of assembling ordered arrays of nanoparticles into functional platforms, architectures, and sensors. The introduction will also include a short explanation of the atomic force microscope that is used throughout the thesis to characterize the nanoparticle-based structures. Following the Introduction, four research chapters are presented as separate manuscripts. Chapter 1 examines the self-assembly of polymeric nanoparticles

  9. Toxic and nontoxic components of botulinum neurotoxin complex are evolved from a common ancestral zinc protein

    SciTech Connect

    Inui, Ken; Sagane, Yoshimasa; Miyata, Keita; Miyashita, Shin-Ichiro; Suzuki, Tomonori; Shikamori, Yasuyuki; Ohyama, Tohru; Niwa, Koichi; Watanabe, Toshihiro

    2012-03-16

Highlights: • BoNT and NTNHA proteins share a similar protein architecture. • NTNHA and BoNT were both identified as zinc-binding proteins. • NTNHA does not have a classical HEXXH zinc-coordinating motif similar to that found in all serotypes of BoNT. • Homology modeling implied probable key residues involved in zinc coordination. -- Abstract: Zinc atoms play an essential role in a number of enzymes. Botulinum neurotoxin (BoNT), the most potent toxin known in nature, is a zinc-dependent endopeptidase. Here we identify the nontoxic nonhemagglutinin (NTNHA), one of the BoNT-complex constituents, as a zinc-binding protein, along with BoNT. A protein structure classification database search indicated that BoNT and NTNHA share a similar domain architecture, comprising a zinc-dependent metalloproteinase-like, BoNT coiled-coil motif and concanavalin A-like domains. Inductively coupled plasma-mass spectrometry analysis demonstrated that every single NTNHA molecule contains a single zinc atom. This is the first demonstration of a zinc atom in this protein, as far as we know. However, the NTNHA molecule does not possess any known zinc-coordinating motif, whereas all BoNT serotypes possess the classical HEXXH motif. Homology modeling of the NTNHA structure implied that a consensus K-C-L-I-K-X₃₅-D sequence common among all NTNHA serotype molecules appears to coordinate a single zinc atom. These findings lead us to propose that NTNHA and BoNT may have evolved distinct functional specializations following their branching out from a common ancestral zinc protein.

  10. Identifying common components across biological network graphs using a bipartite data model

    PubMed Central

    2014-01-01

    The GeneWeaver bipartite data model provides an efficient means to evaluate shared molecular components from sets derived across diverse species, disease states and biological processes. In order to adapt this model for examining related molecular components and biological networks, such as pathway or gene network data, we have developed a means to leverage the bipartite data structure to extract and analyze shared edges. Using the Pathway Commons database we demonstrate the ability to rapidly identify shared connected components among a diverse set of pathways. In addition, we illustrate how results from maximal bipartite discovery can be decomposed into hierarchical relationships, allowing shared pathway components to be mapped through various parent-child relationships to help visualization and discovery of emergent kernel driven relationships. Interrogating common relationships among biological networks and conventional GeneWeaver gene lists will increase functional specificity and reliability of the shared biological components. This approach enables self-organization of biological processes through shared biological networks. PMID:25374613
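    The shared-edge extraction described above can be sketched in a few lines of plain Python. This is a toy illustration under assumed data, not the GeneWeaver or Pathway Commons API: pathways form one partition of the bipartite graph, gene-gene edges the other, and shared components fall out of inverting the edge-to-pathway mapping.

```python
# Toy bipartite model: pathway -> set of undirected gene-gene edges.
# Pathway names and gene symbols here are hypothetical examples.
pathways = {
    "P1": {("A", "B"), ("B", "C"), ("C", "D")},
    "P2": {("A", "B"), ("B", "C"), ("D", "E")},
    "P3": {("B", "C"), ("C", "D")},
}

# Invert to the bipartite view: edge -> set of pathways containing it.
edge_to_pathways = {}
for name, edges in pathways.items():
    for edge in edges:
        edge_to_pathways.setdefault(edge, set()).add(name)

# Keep only edges shared by at least two pathways; these are the
# candidate shared components for downstream hierarchical grouping.
shared = {e: ps for e, ps in edge_to_pathways.items() if len(ps) >= 2}
```

    In the real model the same inversion is done at database scale, but the principle is identical: shared structure is read directly off the bipartite incidence rather than by pairwise pathway comparison.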

  11. Raw Data Maximum Likelihood Estimation for Common Principal Component Models: A State Space Approach.

    PubMed

    Gu, Fei; Wu, Hao

    2016-09-01

    The specifications of state space model for some principal component-related models are described, including the independent-group common principal component (CPC) model, the dependent-group CPC model, and principal component-based multivariate analysis of variance. Some derivations are provided to show the equivalence of the state space approach and the existing Wishart-likelihood approach. For each model, a numeric example is used to illustrate the state space approach. In addition, a simulation study is conducted to evaluate the standard error estimates under the normality and nonnormality conditions. In order to cope with the nonnormality conditions, the robust standard errors are also computed. Finally, other possible applications of the state space approach are discussed at the end. PMID:27364333
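    For readers unfamiliar with the model, the common principal component hypothesis referenced above constrains the covariance matrices of all groups to share eigenvectors while letting eigenvalues vary by group. A standard textbook statement (not the authors' state space specification) is:

```latex
\Sigma_g = B \, \Lambda_g \, B^{\top}, \qquad g = 1, \dots, G,
```

    where \(B\) is an orthogonal matrix of eigenvectors common to all \(G\) groups and \(\Lambda_g\) is a group-specific diagonal matrix of eigenvalues.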

  12. Whole-genome sequencing to understand the genetic architecture of common gene expression and biomarker phenotypes

    PubMed Central

    Wood, Andrew R.; Tuke, Marcus A.; Nalls, Mike; Hernandez, Dena; Gibbs, J. Raphael; Lin, Haoxiang; Xu, Christopher S.; Li, Qibin; Shen, Juan; Jun, Goo; Almeida, Marcio; Tanaka, Toshiko; Perry, John R. B.; Gaulton, Kyle; Rivas, Manny; Pearson, Richard; Curran, Joanne E.; Johnson, Matthew P.; Göring, Harald H. H.; Duggirala, Ravindranath; Blangero, John; Mccarthy, Mark I.; Bandinelli, Stefania; Murray, Anna; Weedon, Michael N.; Singleton, Andrew; Melzer, David; Ferrucci, Luigi; Frayling, Timothy M

    2015-01-01

    Initial results from sequencing studies suggest that there are relatively few low-frequency (<5%) variants associated with large effects on common phenotypes. We performed low-pass whole-genome sequencing in 680 individuals from the InCHIANTI study to test two primary hypotheses: (i) that sequencing would detect single low-frequency–large effect variants that explained similar amounts of phenotypic variance as single common variants, and (ii) that some common variant associations could be explained by low-frequency variants. We tested two sets of disease-related common phenotypes for which we had statistical power to detect large numbers of common variant–common phenotype associations—11 132 cis-gene expression traits in 450 individuals and 93 circulating biomarkers in all 680 individuals. From a total of 11 657 229 high-quality variants of which 6 129 221 and 5 528 008 were common and low frequency (<5%), respectively, low frequency–large effect associations comprised 7% of detectable cis-gene expression traits [89 of 1314 cis-eQTLs at P < 1 × 10−06 (false discovery rate ∼5%)] and one of eight biomarker associations at P < 8 × 10−10. Very few (30 of 1232; 2%) common variant associations were fully explained by low-frequency variants. Our data show that whole-genome sequencing can identify low-frequency variants undetected by genotyping based approaches when sample sizes are sufficiently large to detect substantial numbers of common variant associations, and that common variant associations are rarely explained by single low-frequency variants of large effect. PMID:25378555

  13. A Flexible Component based Access Control Architecture for OPeNDAP Services

    NASA Astrophysics Data System (ADS)

    Kershaw, Philip; Ananthakrishnan, Rachana; Cinquini, Luca; Lawrence, Bryan; Pascoe, Stephen; Siebenlist, Frank

    2010-05-01

Network data access services such as OPeNDAP enable widespread access to data across user communities. However, without ready means to restrict access to data for such services, data providers and data owners are constrained from making their data more widely available. Even with such capability, the range of different security technologies available can make interoperability between services and user client tools a challenge. OPeNDAP is a key data access service in the infrastructure under development to support CMIP5 (the Coupled Model Intercomparison Project Phase 5). The work is being carried out as part of an international collaboration including the US Earth System Grid and Curator projects and the EU funded IS-ENES and Metafor projects. This infrastructure will bring together Petabytes of climate model data and associated metadata from over twenty modelling centres around the world in a federation with a core archive mirrored at three data centres. A security system is needed to meet the requirements of organisations responsible for model data including the ability to restrict data access to registered users, keep them up to date with changes to data and services, audit access and protect finite computing resources. Individual organisations have existing tools and services such as OPeNDAP with which users in the climate research community are already familiar. The security system should overlay access control in a way which maintains the usability and ease of access to these services. The BADC (British Atmospheric Data Centre) has been working in collaboration with the Earth System Grid development team and partner organisations to develop the security architecture. OpenID and MyProxy were selected at an early stage in the ESG project to provide single sign-on capability across the federation of participating organisations.
Building on the existing OPeNDAP specification, an architecture based on pluggable server-side components has been developed at the BADC.
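    As a rough illustration of the pluggable-component pattern described above (class names and interfaces here are hypothetical, not the BADC or ESG implementation), an access-control overlay can be modelled as a chain of independent filters evaluated ahead of the data service:

```python
# Hypothetical sketch of pluggable server-side access control: each
# filter inspects the request and votes to allow or deny; the chain
# can be reconfigured without touching the underlying data service.

class AuthenticationFilter:
    """Allows requests presenting a recognised session token."""
    def __init__(self, valid_tokens):
        self.valid_tokens = valid_tokens

    def __call__(self, request):
        return request.get("token") in self.valid_tokens

class AuthorizationFilter:
    """Allows requests for datasets the deployment has opened up."""
    def __init__(self, allowed_datasets):
        self.allowed_datasets = allowed_datasets

    def __call__(self, request):
        return request.get("dataset") in self.allowed_datasets

def handle(request, filters):
    """Run the request through the filter chain before serving data."""
    if all(f(request) for f in filters):
        return "200 OK"
    return "403 Forbidden"

# A deployment assembles its own chain; dataset/token values are invented.
chain = [AuthenticationFilter({"tok-1"}), AuthorizationFilter({"cmip5"})]
```

    The design point is that authentication and authorisation are separate, replaceable components, so a site can swap in OpenID- or MyProxy-backed filters without modifying the service itself.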

  14. The pherophorins: common, versatile building blocks in the evolution of extracellular matrix architecture in Volvocales.

    PubMed

    Hallmann, Armin

    2006-01-01

    a result of the highly diverse extensions of the HR domains. Pherophorins have therefore been a versatile element during the evolution of ECM architecture in these green algae. PMID:16367971

  15. Common Components of Industrial Metal-Working Fluids as Sources of Carbon for Bacterial Growth

    PubMed Central

    Foxall-VanAken, S.; Brown, J. A.; Young, W.; Salmeen, I.; McClure, T.; Napier, S.; Olsen, R. H.

    1986-01-01

Water-based metal-working fluids used in large-scale industrial operations consist of many components, but in the most commonly used formulations only three classes of components are present in high enough concentrations that they could, in principle, provide enough carbon to support the high bacterial densities (10⁹ CFU/ml) often observed in contaminated factory fluids. These components are petroleum oil (1 to 5%), petroleum sulfonates (0.1 to 0.5%), and fatty acids (less than 0.1%, mainly linoleic and oleic acids supplied as tall oils). We isolated pure strains of predominating bacteria from contaminated reservoirs of two metal-working systems and randomly selected 12 strains which we tested in liquid culture for growth with each of the metal-working fluid components as the sole source of carbon. Of the 12 strains, 7 reached high density (10⁹ CFU/ml from an initial inoculum of less than 2 × 10³) in 24 h, and 1 strain did the same in 48 h with 0.05% oleic or linoleic acid as the carbon source. These same strains also grew on 1% naphthenic petroleum oil but required up to 72 h to reach densities near 10⁸ CFU/ml. One strain grew slightly and the others not at all on the petroleum sulfonates. The four remaining strains did not grow on any of the components, even though they were among the predominating bacteria in the contaminated system. Of the seven strains that grew best on the fatty acids and on the naphthenic petroleum oil, five were tentatively identified as Acinetobacter species and two were identified as Pseudomonas species. Four of the bacteria that did not grow were tentatively identified as species of Pseudomonas, and one could not be identified. PMID:16347072

  16. Disease and Polygenic Architecture: Avoid Trio Design and Appropriately Account for Unscreened Control Subjects for Common Disease

    PubMed Central

    Peyrot, Wouter J.; Boomsma, Dorret I.; Penninx, Brenda W.J.H.; Wray, Naomi R.

    2016-01-01

Genome-wide association studies (GWASs) are an optimal design for discovery of disease risk loci for diseases whose underlying genetic architecture includes many common causal loci of small effect (a polygenic architecture). We consider two designs that deserve careful consideration if the true underlying genetic architecture of the trait is polygenic: parent-offspring trios and unscreened control subjects. We assess these designs in terms of quantification of the total contribution of genome-wide genetic markers to disease risk (SNP heritability) and power to detect an associated risk allele. First, we show that trio designs should be avoided when: (1) the disease has a lifetime risk > 1%; (2) trio probands are ascertained from families with more than one affected sibling under which scenario the SNP heritability can drop by more than 50% and power can drop as much as from 0.9 to 0.15 for a sample of 20,000 subjects; or (3) assortative mating occurs (spouse correlation of the underlying liability to the disorder), which decreases the SNP heritability but not the power to detect a single locus in the trio design. Some studies use unscreened rather than screened control subjects because these can be easier to collect; we show that the estimated SNP heritability should then be scaled by dividing by (1 − K × u)² for disorders with population prevalence K and proportion of unscreened control subjects u. When omitting to scale appropriately, the SNP heritability of, for example, major depressive disorder (K = 0.15) would be underestimated by 28% when none of the control subjects are screened. PMID:26849113
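    The scaling rule quoted above is easy to check numerically. The sketch below (plain Python, using only quantities given in the abstract) reproduces the 28% figure for major depressive disorder with prevalence K = 0.15 and all controls unscreened (u = 1):

```python
# SNP-heritability scaling for unscreened controls, as described above:
# h2_true = h2_observed / (1 - K*u)**2, where K is population prevalence
# and u is the proportion of unscreened control subjects.

def scale_snp_heritability(h2_observed, K, u):
    """Rescale an observed SNP heritability for unscreened controls."""
    return h2_observed / (1.0 - K * u) ** 2

# Major depressive disorder example from the abstract: K = 0.15, u = 1.
# The unscaled estimate understates the truth by 1 - (1 - 0.15)**2,
# i.e. about 28%.
K, u = 0.15, 1.0
shrinkage = 1.0 - (1.0 - K * u) ** 2
```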

  17. Systems-level quantification of division timing reveals a common genetic architecture controlling asynchrony and fate asymmetry

    PubMed Central

    Ho, Vincy Wing Sze; Wong, Ming-Kin; An, Xiaomeng; Guan, Daogang; Shao, Jiaofang; Ng, Hon Chun Kaoru; Ren, Xiaoliang; He, Kan; Liao, Jinyue; Ang, Yingjin; Chen, Long; Huang, Xiaotai; Yan, Bin; Xia, Yiji; Chan, Leanne Lai Hang; Chow, King Lau; Yan, Hong; Zhao, Zhongying

    2015-01-01

    Coordination of cell division timing is crucial for proper cell fate specification and tissue growth. However, the differential regulation of cell division timing across or within cell types during metazoan development remains poorly understood. To elucidate the systems-level genetic architecture coordinating division timing, we performed a high-content screening for genes whose depletion produced a significant reduction in the asynchrony of division between sister cells (ADS) compared to that of wild-type during Caenorhabditis elegans embryogenesis. We quantified division timing using 3D time-lapse imaging followed by computer-aided lineage analysis. A total of 822 genes were selected for perturbation based on their conservation and known roles in development. Surprisingly, we find that cell fate determinants are not only essential for establishing fate asymmetry, but also are imperative for setting the ADS regardless of cellular context, indicating a common genetic architecture used by both cellular processes. The fate determinants demonstrate either coupled or separate regulation between the two processes. The temporal coordination appears to facilitate cell migration during fate specification or tissue growth. Our quantitative dataset with cellular resolution provides a resource for future analyses of the genetic control of spatial and temporal coordination during metazoan development. PMID:26063786

  18. Moving Towards a Common Ground and Flight Data Systems Architecture for NASA's Exploration Missions

    NASA Technical Reports Server (NTRS)

Rader, Steve; Kearney, Mike; McVittie, Thom; Smith, Dan

    2006-01-01

The National Aeronautics and Space Administration has embarked on an ambitious effort to return man to the moon and then on to Mars. The Exploration Vision requires development of major new space and ground assets and poses challenges well beyond those faced by many of NASA's recent programs. New crewed vehicles must be developed. Compatible supply vehicles, surface mobility modules and robotic exploration capabilities will supplement the manned exploration vehicle. New launch systems will be developed as well as a new ground communications and control infrastructure. The development must take place in a cost-constrained environment and must advance along an aggressive schedule. Common solutions and system interoperability will be critical to the successful development of the Exploration data systems for this wide variety of flight and ground elements. To this end, NASA has assembled a team of engineers from across the agency to identify the key challenges for Exploration data systems and to establish the most beneficial strategic approach to be followed. Key challenges and the planned NASA approach for flight and ground systems will be discussed in the paper. The described approaches will capitalize on new technologies, and will result in cross-program interoperability between spacecraft and ground systems, from multiple suppliers and agencies.

  19. Genetic Architecture of Atherosclerosis in Mice: A Systems Genetics Analysis of Common Inbred Strains.

    PubMed

    Bennett, Brian J; Davis, Richard C; Civelek, Mete; Orozco, Luz; Wu, Judy; Qi, Hannah; Pan, Calvin; Packard, René R Sevag; Eskin, Eleazar; Yan, Mujing; Kirchgessner, Todd; Wang, Zeneng; Li, Xinmin; Gregory, Jill C; Hazen, Stanley L; Gargalovic, Peter S; Lusis, Aldons J

    2015-12-01

    Common forms of atherosclerosis involve multiple genetic and environmental factors. While human genome-wide association studies have identified numerous loci contributing to coronary artery disease and its risk factors, these studies are unable to control environmental factors or examine detailed molecular traits in relevant tissues. We now report a study of natural variations contributing to atherosclerosis and related traits in over 100 inbred strains of mice from the Hybrid Mouse Diversity Panel (HMDP). The mice were made hyperlipidemic by transgenic expression of human apolipoprotein E-Leiden (APOE-Leiden) and human cholesteryl ester transfer protein (CETP). The mice were examined for lesion size and morphology as well as plasma lipid, insulin and glucose levels, and blood cell profiles. A subset of mice was studied for plasma levels of metabolites and cytokines. We also measured global transcript levels in aorta and liver. Finally, the uptake of acetylated LDL by macrophages from HMDP mice was quantitatively examined. Loci contributing to the traits were mapped using association analysis, and relationships among traits were examined using correlation and statistical modeling. A number of conclusions emerged. First, relationships among atherosclerosis and the risk factors in mice resemble those found in humans. Second, a number of trait-loci were identified, including some overlapping with previous human and mouse studies. Third, gene expression data enabled enrichment analysis of pathways contributing to atherosclerosis and prioritization of candidate genes at associated loci in both mice and humans. Fourth, the data provided a number of mechanistic inferences; for example, we detected no association between macrophage uptake of acetylated LDL and atherosclerosis. Fifth, broad sense heritability for atherosclerosis was much larger than narrow sense heritability, indicating an important role for gene-by-gene interactions. Sixth, stepwise linear regression

  20. Genetic Architecture of Atherosclerosis in Mice: A Systems Genetics Analysis of Common Inbred Strains

    PubMed Central

    Bennett, Brian J.; Davis, Richard C.; Civelek, Mete; Orozco, Luz; Wu, Judy; Qi, Hannah; Pan, Calvin; Packard, René R. Sevag; Eskin, Eleazar; Yan, Mujing; Kirchgessner, Todd; Wang, Zeneng; Li, Xinmin; Gregory, Jill C.; Hazen, Stanley L.; Gargalovic, Peter S.; Lusis, Aldons J.

    2015-01-01

    Common forms of atherosclerosis involve multiple genetic and environmental factors. While human genome-wide association studies have identified numerous loci contributing to coronary artery disease and its risk factors, these studies are unable to control environmental factors or examine detailed molecular traits in relevant tissues. We now report a study of natural variations contributing to atherosclerosis and related traits in over 100 inbred strains of mice from the Hybrid Mouse Diversity Panel (HMDP). The mice were made hyperlipidemic by transgenic expression of human apolipoprotein E-Leiden (APOE-Leiden) and human cholesteryl ester transfer protein (CETP). The mice were examined for lesion size and morphology as well as plasma lipid, insulin and glucose levels, and blood cell profiles. A subset of mice was studied for plasma levels of metabolites and cytokines. We also measured global transcript levels in aorta and liver. Finally, the uptake of acetylated LDL by macrophages from HMDP mice was quantitatively examined. Loci contributing to the traits were mapped using association analysis, and relationships among traits were examined using correlation and statistical modeling. A number of conclusions emerged. First, relationships among atherosclerosis and the risk factors in mice resemble those found in humans. Second, a number of trait-loci were identified, including some overlapping with previous human and mouse studies. Third, gene expression data enabled enrichment analysis of pathways contributing to atherosclerosis and prioritization of candidate genes at associated loci in both mice and humans. Fourth, the data provided a number of mechanistic inferences; for example, we detected no association between macrophage uptake of acetylated LDL and atherosclerosis. Fifth, broad sense heritability for atherosclerosis was much larger than narrow sense heritability, indicating an important role for gene-by-gene interactions. Sixth, stepwise linear regression

  1. Studies on the thermal breakdown of common Li-ion battery electrolyte components

DOE PAGES Beta

    Lamb, Joshua; Orendorff, Christopher J.; Roth, Emanuel Peter; Langendorf, Jill Louise

    2015-08-06

While much attention is paid to the impact of the active materials on the catastrophic failure of lithium ion batteries, much of the severity of a battery failure is also governed by the electrolytes used, which are typically flammable themselves and can decompose during battery failure. The use of LiPF6 salt can be problematic as well, not only catalyzing electrolyte decomposition, but also providing a mechanism for HF production. This work evaluates the safety performance of the common components ethylene carbonate (EC), diethyl carbonate (DEC), dimethyl carbonate (DMC), and ethyl methyl carbonate (EMC) in the context of the gasses produced during thermal decomposition, looking at both the quantity and composition of the vapor produced. EC and DEC were found to be the largest contributors to gas production, both producing upwards of 1.5 moles of gas/mole of electrolyte. DMC was found to be relatively stable, producing very little gas regardless of the presence of LiPF6. EMC was stable on its own, but the addition of LiPF6 catalyzed decomposition of the solvent. As a result, while gas analysis did not show evidence of significant quantities of any acutely toxic materials, the gasses themselves all contained enough flammable components to potentially ignite in air.

  2. Studies on the thermal breakdown of common Li-ion battery electrolyte components

    SciTech Connect

    Lamb, Joshua; Orendorff, Christopher J.; Roth, Emanuel Peter; Langendorf, Jill Louise

    2015-08-06

    While much attention is paid to the impact of the active materials on the catastrophic failure of lithium ion batteries, much of the severity of a battery failure is also governed by the electrolytes used, which are typically flammable themselves and can decompose during battery failure. The use of LiPF6 salt can be problematic as well, not only catalyzing electrolyte decomposition, but also providing a mechanism for HF production. This work evaluates the safety performance of the common components ethylene carbonate (EC), diethyl carbonate (DEC), dimethyl carbonate (DMC), and ethyl methyl carbonate (EMC) in the context of the gasses produced during thermal decomposition, looking at both the quantity and composition of the vapor produced. EC and DEC were found to be the largest contributors to gas production, both producing upwards of 1.5 moles of gas/mole of electrolyte. DMC was found to be relatively stable, producing very little gas regardless of the presence of LiPF6. EMC was stable on its own, but the addition of LiPF6 catalyzed decomposition of the solvent. As a result, while gas analysis did not show evidence of significant quantities of any acutely toxic materials, the gasses themselves all contained enough flammable components to potentially ignite in air.

  3. Unbiased analysis of senescence associated secretory phenotype (SASP) to identify common components following different genotoxic stresses.

    PubMed

    Özcan, Servet; Alessio, Nicola; Acar, Mustafa B; Mert, Eda; Omerli, Fatih; Peluso, Gianfranco; Galderisi, Umberto

    2016-07-01

    Senescent cells secrete senescence-associated secretory phenotype (SASP) proteins to carry out several functions, such as sensitizing surrounding cells to senesce; immunomodulation; impairing or fostering cancer growth; and promoting tissue development. Identifying secreted factors that achieve such tasks is a challenging issue since the profile of secreted proteins depends on genotoxic stress and cell type. Currently, researchers are trying to identify common markers for SASP. The present investigation compared the secretome composition of five different senescent phenotypes in two different cell types: bone marrow and adipose mesenchymal stromal cells (MSC). We induced MSC senescence by oxidative stress, doxorubicin treatment, X-ray irradiation, and replicative exhaustion. We took advantage of LC-MS/MS proteome identification and subsequent gene ontology (GO) evaluation to perform an unbiased analysis (hypothesis free manner) of senescent secretomes. GO analysis allowed us to distribute SASP components into four classes: extracellular matrix/cytoskeleton/cell junctions; metabolic processes; ox-redox factors; and regulators of gene expression. We used Ingenuity Pathway Analysis (IPA) to determine common pathways among the different senescent phenotypes. This investigation, along with identification of eleven proteins that were exclusively expressed in all the analyzed senescent phenotypes, permitted the identification of three key signaling paths: MMP2 - TIMP2; IGFBP3 - PAI-1; and Peroxiredoxin 6 - ERP46 - PARK7 - Cathepsin D - Major vault protein. We suggest that these paths could be involved in the paracrine circuit that induces senescence in neighboring cells and may confer apoptosis resistance to senescent cells. PMID:27288264

  4. Unbiased analysis of senescence associated secretory phenotype (SASP) to identify common components following different genotoxic stresses

    PubMed Central

    Özcan, Servet; Alessio, Nicola; Acar, Mustafa B.; Mert, Eda; Omerli, Fatih; Peluso, Gianfranco; Galderisi, Umberto

    2016-01-01

    Senescent cells secrete senescence-associated secretory phenotype (SASP) proteins to carry out several functions, such as sensitizing surrounding cells to senesce; immunomodulation; impairing or fostering cancer growth; and promoting tissue development. Identifying secreted factors that achieve such tasks is a challenging issue since the profile of secreted proteins depends on genotoxic stress and cell type. Currently, researchers are trying to identify common markers for SASP. The present investigation compared the secretome composition of five different senescent phenotypes in two different cell types: bone marrow and adipose mesenchymal stromal cells (MSC). We induced MSC senescence by oxidative stress, doxorubicin treatment, X-ray irradiation, and replicative exhaustion. We took advantage of LC-MS/MS proteome identification and subsequent gene ontology (GO) evaluation to perform an unbiased analysis (hypothesis free manner) of senescent secretomes. GO analysis allowed us to distribute SASP components into four classes: extracellular matrix/cytoskeleton/cell junctions; metabolic processes; ox-redox factors; and regulators of gene expression. We used Ingenuity Pathway Analysis (IPA) to determine common pathways among the different senescent phenotypes. This investigation, along with identification of eleven proteins that were exclusively expressed in all the analyzed senescent phenotypes, permitted the identification of three key signaling paths: MMP2 - TIMP2; IGFBP3 - PAI-1; and Peroxiredoxin 6 - ERP46 - PARK7 - Cathepsin D - Major vault protein. We suggest that these paths could be involved in the paracrine circuit that induces senescence in neighboring cells and may confer apoptosis resistance to senescent cells. PMID:27288264

  5. Antibacterial interactions of monolaurin with commonly used antimicrobials and food components.

    PubMed

    Zhang, Hui; Wei, Hewen; Cui, Yinan; Zhao, Guoqun; Feng, Fengqin

    2009-09-01

Monolaurin is a nontraditional antimicrobial agent that possesses strong antimicrobial activity and poses no known health risks to consumers; nevertheless, its use in the food industry as a preservative is still limited. Using a microtiter plate assay, the minimum inhibitory concentrations for monolaurin were 25 microg/mL against Escherichia coli, 12.5 microg/mL against Staphylococcus aureus, and 30 microg/mL against Bacillus subtilis. The interaction with commonly used antimicrobials revealed that monolaurin and nisin acted synergistically against the test microorganisms, monolaurin in combination with sodium dehydroacetate or ethylenediaminetetraacetic acid was synergistic against E. coli and B. subtilis but not S. aureus, and monolaurin combined with calcium propionate or sodium lactate showed no synergistic effects against any test microorganism. The interaction with food components revealed that the antibacterial effectiveness of monolaurin was reduced by fat or starch while the monolaurin activity remained unchanged in the presence of protein. This study contributes to a better understanding of the use of monolaurin as a nontraditional preservative in food products. Results from this study suggest the potential use of monolaurin as a nontraditional preservative in combination with commonly used antimicrobials, such as nisin, sodium dehydroacetate, or ethylenediaminetetraacetic acid, and suggest that the antibacterial effectiveness of monolaurin may be reduced significantly in high-fat or low-starch food products. PMID:19895490

  6. A Common Variant in the Telomerase RNA Component Is Associated with Short Telomere Length

    PubMed Central

    Njajou, Omer T.; Blackburn, Elizabeth H.; Pawlikowska, Ludmila; Mangino, Massimo; Damcott, Coleen M.; Kwok, Pui-Yan; Spector, Timothy D.; Newman, Anne B.; Harris, Tamara B.; Cummings, Steven R.; Cawthon, Richard M.; Shuldiner, Alan R.; Valdes, Ana M.; Hsueh, Wen-Chi

    2010-01-01

    Background Telomeres shorten as cells divide. This shortening is compensated by the enzyme telomerase. We evaluated the effect of common variants in the telomerase RNA component (TERC) gene on telomere length (TL) in the population-based Health Aging and Body Composition (Health ABC) Study and in two replication samples (the TwinsUK Study and the Amish Family Osteoporosis Study, AFOS). Methodology Five variants were identified in the TERC region by sequence analysis and only one SNP was common (rs2293607, G/A). The frequency of the G allele was 0.26 and 0.07 in white and black, respectively. Testing for association between TL and rs2293607 was performed using linear regression models or variance component analysis conditioning on relatedness among subjects. Results The adjusted mean TL was significantly shorter in 665 white carriers of the G allele compared to 887 non-carriers from the Health ABC Study (4.69±0.05 kbp vs. 4.86±0.04 kbp, measured by quantitative PCR, p = 0.005). This association was replicated in another white sample from the TwinsUK Study (6.90±0.03 kbp in 301 carriers compared to 7.06±0.03 kbp in 395 non-carriers, measured by Southern blots, p = 0.009). A similar pattern of association was observed in whites from the family-based AFOS and blacks from the Health ABC cohort, although not statistically significant, possibly due to the lower allele frequency in these populations. Combined analysis using 2,953 white subjects from 3 studies showed a significant association between TL and rs2293607 (β = −0.19±0.04 kbp, p = 0.001). Conclusion Our study shows a significant association between a common variant in TERC and TL in humans, suggesting that TERC may play a role in telomere homeostasis. PMID:20885959
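
    In its simplest unadjusted form, the association test described above is a linear regression of telomere length on carrier status. The sketch below simulates data using the reported non-carrier mean (4.86 kbp) and carrier effect (about -0.19 kbp); the carrier frequency and noise level are invented, and the actual analyses additionally adjust for covariates and family relatedness:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1500
carrier = (rng.random(n) < 0.4).astype(float)  # 1 = carries the G allele (simulated)
# Telomere length in kbp: non-carrier mean 4.86, carrier effect -0.19 (from the abstract)
tl = 4.86 - 0.19 * carrier + rng.normal(0.0, 0.5, n)

# Ordinary least squares with an intercept and the carrier indicator
X = np.column_stack([np.ones(n), carrier])
beta, *_ = np.linalg.lstsq(X, tl, rcond=None)
print(f"estimated carrier effect: {beta[1]:.3f} kbp")  # close to -0.19
```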

  7. Helplets: A Common Sense-Based Collaborative Help Collection and Retrieval Architecture for Web-Enabled Systems

    NASA Astrophysics Data System (ADS)

    Nauman, Mohammad; Khan, Shahbaz; Khan, Sanaullah

    All computer software systems, whether online or offline, require a help system. Help texts are traditionally written by software development companies and answer targeted questions in the form of how-tos and troubleshooting procedures. However, when the popularity of an application grows, users of the application themselves start adding to the corpus of help for the system in the form of online tutorials. There is, however, one problem with such tutorials. They have no direct link with the software for which they are written. Users have to search the Internet for different tutorials that are usually hosted on dispersed locations, and there is no ideal way of finding the relevant information without ending up with lots of noise in the search results. In this chapter, we describe a model for a help system which enhances this concept using collaborative tagging for categorization of "helplets." For the knowledge retrieval part of the system, we utilize a previously developed technique based on common sense and user personalization. We use a freely available common sense reasoning toolkit for knowledge retrieval. Our architecture can be implemented in Web-based systems as well as in stand-alone desktop applications.

  8. Virtual management of radiology examinations in the virtual radiology environment using common object request broker architecture services.

    PubMed

    Martinez, R; Rozenblit, J; Cook, J F; Chacko, A K; Timboe, H L

    1999-05-01

    In the Department of Defense (DoD), the US Army Medical Command is now embarking on an extremely exciting new project--creating a virtual radiology environment (VRE) for the management of radiology examinations. The business of radiology in the military is therefore being reengineered on several fronts by the VRE Project. In the VRE Project, a set of intelligent agent algorithms determines where examinations are to be routed for reading, based on a knowledge base of the entire VRE. The set of algorithms, called the Meta-Manager, is hierarchical and uses object-based communications between medical treatment facilities (MTFs) and medical centers that have digital imaging network picture archiving and communications systems (DIN-PACS) networks. The communication is based on the use of common object request broker architecture (CORBA) objects and services to send patient demographics and examination images from DIN-PACS networks in the MTFs to the DIN-PACS networks at the medical centers for diagnosis. The Meta-Manager is also responsible for updating the diagnosis at the originating MTF. CORBA services are used to perform secure message communications between DIN-PACS nodes in the VRE network. The Meta-Manager has a fail-safe architecture that allows the master Meta-Manager function to float to regional Meta-Manager sites in case of server failure. A prototype of the CORBA-based Meta-Manager is being developed by the University of Arizona's Computer Engineering Research Laboratory using the unified modeling language (UML) as a design tool. The prototype will implement the main functions described in the Meta-Manager design specification. The results of this project are expected to reengineer the process of radiology in the military and have extensions to commercial radiology environments. PMID:10342205
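
    The abstract does not spell out the Meta-Manager's routing rules; purely as an illustrative sketch, a knowledge base of site capabilities and workload could drive examination routing along these lines (the site names, fields, and scoring rule are all invented, not taken from the VRE design specification):

```python
# Hypothetical knowledge base: per-site readable modalities, queue depth, status.
SITES = {
    "center_a": {"modalities": {"CT", "MR"}, "queue": 12, "online": True},
    "center_b": {"modalities": {"CT", "CR"}, "queue": 3,  "online": True},
    "center_c": {"modalities": {"MR"},       "queue": 1,  "online": False},
}

def route_exam(modality, sites=SITES):
    """Route to the online site that reads this modality and has the shortest queue."""
    candidates = [(info["queue"], name)
                  for name, info in sites.items()
                  if info["online"] and modality in info["modalities"]]
    if not candidates:
        raise ValueError(f"no available site can read {modality}")
    return min(candidates)[1]

print(route_exam("CT"))  # center_b: shortest queue among online CT readers
```

    A fail-safe master, as described above, would additionally replicate such a knowledge base to regional sites so that routing can continue after a server failure.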

  9. Towards Zero Retraining for Myoelectric Control Based on Common Model Component Analysis.

    PubMed

    Liu, Jianwei; Sheng, Xinjun; Zhang, Dingguo; Jiang, Ning; Zhu, Xiangyang

    2016-04-01

    In spite of several decades of intense research and development, the existing algorithms of myoelectric pattern recognition (MPR) are yet to satisfy the criteria that a practical upper extremity prosthesis should fulfill. This study focuses on the criterion of short, or even zero, subject training. Due to the inherent nonstationarity of surface electromyography (sEMG) signals, current myoelectric control algorithms usually need to be retrained daily over multiple days of usage. This study was conducted based on the hypothesis that there exist some invariant characteristics in the sEMG signals when a subject performs the same motion on different days. Therefore, given a set of classifiers (models) trained on several days, it is possible to find common characteristics among them. To this end, we proposed the common model component analysis (CMCA) framework, in which an optimized projection is found to minimize the dissimilarity among multiple models of linear discriminant analysis (LDA) trained using data from different days. Five intact-limbed subjects and two transradial amputee subjects participated in an experiment including six sessions of sEMG data recording, performed on six different days, to simulate the application of MPR over multiple days. The results demonstrate that CMCA has significantly better generalization ability on unseen data (not included in the training data), leading to improved classification accuracy and an increased completion rate in a motion test simulation, compared with the baseline reference method. The results indicate that CMCA holds great potential in the effort to develop zero retraining of MPR. PMID:25879963
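
    The CMCA idea of extracting what multiple days' models share can be caricatured with projection matrices. The sketch below is a crude stand-in, not the paper's optimization: it averages the per-day projection matrices W_d W_d^T and takes the leading eigenvectors of the average as the "common" subspace:

```python
import numpy as np

def common_subspace(bases, k):
    """Return a k-dim subspace close on average to the per-day subspaces.

    bases: list of (d x k_i) matrices with orthonormal columns, one per day.
    This eigen-averaging is only a sketch of the common-model idea; the
    CMCA optimization in the paper differs in its details.
    """
    mean_proj = sum(W @ W.T for W in bases) / len(bases)
    vals, vecs = np.linalg.eigh(mean_proj)       # ascending eigenvalues
    return vecs[:, np.argsort(vals)[::-1][:k]]   # top-k eigenvectors

# Two "days" whose 1-D discriminant directions differ slightly;
# the common direction lies between them.
w1 = np.array([[1.0], [0.1]]); w1 /= np.linalg.norm(w1)
w2 = np.array([[1.0], [-0.1]]); w2 /= np.linalg.norm(w2)
W = common_subspace([w1, w2], k=1)
print(W.ravel())  # close to [1, 0] (up to sign)
```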

  10. DELLA proteins are common components of symbiotic rhizobial and mycorrhizal signalling pathways.

    PubMed

    Jin, Yue; Liu, Huan; Luo, Dexian; Yu, Nan; Dong, Wentao; Wang, Chao; Zhang, Xiaowei; Dai, Huiling; Yang, Jun; Wang, Ertao

    2016-01-01

    Legumes form symbiotic associations with either nitrogen-fixing bacteria or arbuscular mycorrhizal fungi. Formation of these two symbioses is regulated by a common set of signalling components that act downstream of recognition of rhizobia or mycorrhizae by host plants. Central to these pathways is the calcium and calmodulin-dependent protein kinase (CCaMK)-IPD3 complex which initiates nodule organogenesis following calcium oscillations in the host nucleus. However, downstream signalling events are not fully understood. Here we show that Medicago truncatula DELLA proteins, which are the central regulators of gibberellic acid signalling, positively regulate rhizobial symbiosis. Rhizobia colonization is impaired in della mutants and we provide evidence that DELLAs can promote CCaMK-IPD3 complex formation and increase the phosphorylation state of IPD3. DELLAs can also interact with NSP2-NSP1 and enhance the expression of Nod-factor-inducible genes in protoplasts. We show that DELLA is able to bridge a protein complex containing IPD3 and NSP2. Our results suggest a transcriptional framework for regulation of root nodule symbiosis. PMID:27514472

  12. Architectural integration of the components necessary for electrical energy storage on the nanoscale and in 3D.

    PubMed

    Rhodes, Christopher P; Long, Jeffrey W; Pettigrew, Katherine A; Stroud, Rhonda M; Rolison, Debra R

    2011-04-01

    We describe fabrication of three-dimensional (3D) multifunctional nanoarchitectures in which the three critical components of a battery--cathode, separator/electrolyte, and anode--are internally assembled as tricontinuous nanoscopic phases. The architecture is initiated using sol-gel chemistry and processing to erect a 3D self-wired nanoparticulate scaffold of manganese oxide (>200 m(2) g(-1)) with a continuous, open, and mesoporous void volume. The integrated 3D system is generated by exhaustive coverage of the oxide network by an ultrathin, conformal layer of insulating polymer that forms via self-limiting electrodeposition of poly(phenylene oxide). The remaining interconnected void volume is then wired with RuO(2) nanowebs using subambient thermal decomposition of RuO(4). Transmission electron microscopy demonstrates that the three nanoscopic charge-transfer functional components--manganese oxide, polymer separator/cation conductor, and RuO(2)--exhibit the stratified, tricontinuous design of the phase-by-phase construction. This architecture contains all three components required for a solid-state energy storage device within a void volume sized at tens of nanometres such that nanometre-thick distances are established between the opposing electrodes. We have now demonstrated the ability to assemble multifunctional energy-storage nanoarchitectures on the nanoscale and in three dimensions. PMID:21327256

  13. Comparing the Magnitude of Two Fractions with Common Components: Which Representations Are Used by 10- and 12-Year-Olds?

    ERIC Educational Resources Information Center

    Meert, Gaelle; Gregoire, Jacques; Noel, Marie-Pascale

    2010-01-01

    This study tested whether 10- and 12-year-olds who can correctly compare the magnitudes of fractions with common components access the magnitudes of the whole fractions rather than only compare the magnitudes of their components. Time for comparing two fractions was predicted by the numerical distance between the whole fractions, suggesting an…

  14. A Parallel Algorithm for Connected Component Labelling of Gray-scale Images on Homogeneous Multicore Architectures

    NASA Astrophysics Data System (ADS)

    Niknam, Mehdi; Thulasiraman, Parimala; Camorlinga, Sergio

    2010-11-01

    Connected component labelling is an essential step in image processing. We provide a parallel version of Suzuki's sequential connected component algorithm in order to speed up the labelling process. We also modify the algorithm to enable labelling of gray-scale images. Due to the data dependencies in the algorithm, we used a pipeline-like method to exploit parallelism. The parallel algorithm achieved a speedup of 2.5 for images of 256 × 256 pixels using 4 processing threads.
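
    Suzuki's algorithm is a raster-scan, label-equivalence method. A minimal serial two-pass sketch for binary images with 4-connectivity (the gray-scale extension and the pipelined parallelization described above are not shown) looks like this:

```python
def label_components(img):
    """Two-pass connected-component labelling, 4-connectivity.

    img: list of rows of 0/1 values. Returns a same-shaped grid of
    labels, with 0 for background pixels.
    """
    h, w = len(img), len(img[0])
    parent = {}

    def find(x):                         # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    labels = [[0] * w for _ in range(h)]
    nxt = 1
    for y in range(h):                   # pass 1: provisional labels
        for x in range(w):
            if not img[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                labels[y][x] = min(up, left)
                union(up, left)          # record the equivalence
            elif up or left:
                labels[y][x] = up or left
            else:
                parent[nxt] = nxt        # new provisional label
                labels[y][x] = nxt
                nxt += 1
    for y in range(h):                   # pass 2: resolve equivalences
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels

grid = [[1, 1, 0, 1],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
print(label_components(grid))  # [[1, 1, 0, 2], [0, 1, 0, 2], [0, 0, 0, 2]]
```

    The data dependency that motivates the pipelined parallelization is visible here: each pixel's provisional label depends on its already-scanned neighbours.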

  15. Architecture, Voltage, and Components for a Turboelectric Distributed Propulsion Electric Grid (AVC-TeDP)

    NASA Technical Reports Server (NTRS)

    Gemin, Paul; Kupiszewski, Tom; Radun, Arthur; Pan, Yan; Lai, Rixin; Zhang, Di; Wang, Ruxi; Wu, Xinhui; Jiang, Yan; Galioto, Steve; Haran, Kiruba; Premerlani, William; Bray, Jim; Caiafa, Antonio

    2015-01-01

    The purpose of this effort was to advance the selection, characterization, and modeling of a propulsion electric grid for a Turboelectric Distributed Propulsion (TeDP) system for transport aircraft. The TeDP aircraft would constitute a miniature electric grid with 50 MW or more of total power, two or more generators, redundant transmission lines, and multiple electric motors driving propulsion fans. The study proposed power system architectures, investigated electromechanical and solid-state circuit breakers, estimated the impact of the system voltage on system mass, and recommended a DC bus voltage range. The study assumed an all-cryogenic power system. Detailed assumptions within the study include hybrid circuit breakers, a two-cryogen system, and supercritical cryogens. A dynamic model was developed to investigate control and parameter selection.

  16. Sequence System Building Blocks: Using a Component Architecture for Sequencing Software

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara A.; O'Reilly, Taifun

    2005-01-01

    Over the last few years software engineering has made significant strides in making more flexible architectures and designs possible. However, at the same time, spacecraft have become more complex and flight software has become more sophisticated. Spacecraft are typically one-of-a-kind entities that have different hardware designs, different capabilities, different instruments, etc. Ground software has become more complex, and operations teams have had to learn a myriad of tools that all have different user interfaces and represent data in different ways. At Jet Propulsion Laboratory (JPL) these themes have collided to require a new approach to producing ground system software. Two different groups have been looking at tackling this particular problem. One group is working for the JPL Mars Technology Program in the Mars Science Laboratory (MSL) Focused Technology area. The other group is the JPL Multi-Mission Planning and Sequencing Group. The major concept driving these two approaches on a similar path is to provide software that can be a more cohesive, flexible system that provides a set of planning and sequencing services. This paper describes the efforts that have been made to date to create a unified approach from these disparate groups.

  17. Sequencing System Building Blocks: Using a Component Architecture for Sequencing Software

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara A.; O'Reilly, Taifun

    2006-01-01

    Over the last few years software engineering has made significant strides in making more flexible architectures and designs possible. However, at the same time, spacecraft have become more complex and flight software has become more sophisticated. Spacecraft are typically one-of-a-kind entities that have different hardware designs, different capabilities, different instruments, etc. Ground software has become more complex, and operations teams have had to learn a myriad of tools that all have different user interfaces and represent data in different ways. At Jet Propulsion Laboratory (JPL) these themes have collided to require a new approach to producing ground system software. Two different groups have been looking at tackling this particular problem. One group is working for the JPL Mars Technology Program in the Mars Science Laboratory (MSL) Focused Technology area. The other group is the JPL Multi-Mission Planning and Sequencing Group. The major concept driving these two approaches on a similar path is to provide software that can be a more cohesive, flexible system that provides a set of planning and sequencing services. This paper describes the efforts that have been made to date to create a unified approach from these disparate groups.

  18. Architecture, Voltage, and Components for a Turboelectric Distributed Propulsion Electric Grid

    NASA Technical Reports Server (NTRS)

    Armstrong, Michael J.; Blackwelder, Mark; Bollman, Andrew; Ross, Christine; Campbell, Angela; Jones, Catherine; Norman, Patrick

    2015-01-01

    The development of a wholly superconducting turboelectric distributed propulsion system presents unique opportunities for the aerospace industry. However, this transition from normally conducting systems to superconducting systems significantly increases the equipment complexity necessary to manage the electrical power systems. Due to the low technology readiness level (TRL) nature of all components and systems, current Turboelectric Distributed Propulsion (TeDP) technology developments are driven by an ambiguous set of system-level electrical integration standards for an airborne microgrid system (Figure 1). While multiple decades' worth of advancements are still required for concept realization, current system-level studies are necessary to focus the technology development, target specific technological shortcomings, and enable accurate prediction of concept feasibility and viability. An understanding of the performance sensitivity to operating voltages and an early definition of advantageous voltage regulation standards for unconventional airborne microgrids will allow for more accurate targeting of technology development. Propulsive power-rated microgrid systems necessitate the introduction of new aircraft distribution system voltage standards. All protection, distribution, control, power conversion, generation, and cryocooling equipment are affected by voltage regulation standards. Information on the desired operating voltage and voltage regulation is required to determine nominal and maximum currents for sizing distribution and fault isolation equipment, developing machine topologies and machine controls, and the physical attributes of all component shielding and insulation. Voltage impacts many components and system performance.

  19. Method for producing components with internal architectures, such as micro-channel reactors, via diffusion bonding sheets

    DOEpatents

    Alman, David E.; Wilson, Rick D.; Davis, Daniel L.

    2011-03-08

    This invention relates to a method for producing components with internal architectures, and more particularly to a method for producing structures with microchannels via diffusion bonding of stacked laminates. Specifically, the method involves weakly bonding a stack of laminates forming internal voids and channels with a first, generally low uniaxial pressure and first temperature, such that bonding occurs at least between the asperities of opposing laminates and pores are isolated in interfacial contact areas, followed by a second, generally higher isostatic pressure and second temperature for final bonding. The method thereby allows fabrication of micro-channel devices such as heat exchangers, recuperators, heat pumps, chemical separators, chemical reactors, fuel processing units, and combustors without limitation on the fin aspect ratio.

  20. Evolution of multi-component anion relay chemistry (ARC): construction of architecturally complex natural and unnatural products.

    PubMed

    Smith, Amos B; Wuest, William M

    2008-12-01

    Efficient construction of architecturally complex natural and unnatural products is the hallmark of organic chemistry. Anion relay chemistry (ARC)-a multi-component coupling protocol-has the potential to provide the chemist with a powerful synthetic tactic, enabling efficient, rapid elaboration of structurally complex scaffolds in a single operation with precise stereochemical control. The ARC tactic can be subdivided into two main classes, comprising the relay of negative charge either through bonds or through space, the latter with aid of a transfer agent. This review will present the current state of through-space anion relay, in conjunction with examples of natural and unnatural product syntheses that illustrate the utility of this synthetic method. PMID:19030533

  1. Promoting the Effect of the Qing Dynasty Imperial Garden Architectural Component Library on the Digitalization of Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Jindan, C.; Junsong, Z.; Jiujun, Z.

    2015-08-01

    With the development of computer technology and practical verification, digital virtual technology has matured and is increasingly being applied to cultural heritage protection and research. With this advancement in technology, there is a pressing need to simplify heritage-related puzzles. Thus the central and fundamental question in heritage digitalization work is how to choose the "proper technology" that directly, accurately, and rapidly supports the research, protection, and exchange of cultural heritage. Based on the principles of "authenticity" and "completeness" found in the Venice Charter with regard to cultural heritage, this paper proposes the concept of the component library, which improves the efficiency of virtual reconstruction and provides a visual discussion platform for cultural heritage protection, virtual scene construction, accuracy assessment, and multi-space-time exhibition, thereby implementing the spirit of tolerance and respect found in the Nara Document on Authenticity. The paper further aims to illustrate the significance of the Qing dynasty imperial garden architectural component library for cultural heritage study and protection; the principles for virtual library construction; the use and maintenance of the library; and classification approaches; and also provides some suggestions about making high-quality 3D models and effective means for database integration.

  2. Architecture of a Coarse-Grained Upper Middle Cambrian Alluvial Delta Dominated by Braidplain and Gilbert-Style Delta Components

    NASA Astrophysics Data System (ADS)

    Pound, K. S.

    2014-12-01

    The ~500-m thick upper Middle Cambrian Lockett Conglomerate was deposited as part of an alluvial delta that includes Gilbert-type mega-crossbeds as well as braidplain conglomerates, and was constructed across an accretionary prism. Internal Lockett Conglomerate architecture indicates at least three phases of progradation are recorded by Gilbert-type, delta-front deposits that are separated by delta-top distributaries and/or braidplain deposits, all of which form discontinuous sheets and lenses, and record aggradation. Evaluation of sedimentary features (particle size and organization, bedding features) allows identification of eight facies within the Lockett Conglomerate; sedimentary features were used to infer transportational and depositional mechanisms. Conglomerate facies HL-1 - HL-8 were assigned to one or more of the following depositional associations: Beachface/shoreface, Deltafront, Alluvial fan, Braidplain (fluvial, unchannelized), Delta-top distributaries, and Mouth-bars. A series of Depositional Packages was identified, and mapped; integration with measured sections allowed development of a facies model for an alluvial delta in which the subaerial component is dominated by the braidplain association, and the subaqueous component by the (Gilbert-type) deltafront association as well as the delta-top distributary and mouthbar associations. Locally, the beachface association marks the transition between the subaqueous and subaerial components of the alluvial delta. Alluvial fan deposits are absent, but the rounded pebbles, cobbles and boulders with a new and distinctive provenance signature indicate derivation from a newly exposed igneous and metamorphic basement, and abrasion during transport through the fluvial (braidplain) system prior to deposition as part of the alluvial delta.

  3. Mapping genomic loci for cotton plant architecture, yield components, and fiber properties in an interspecific (Gossypium hirsutum L. x G. barbadense L.) RIL population

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A quantitative trait loci (QTL) analysis was conducted to better understand the genetic control of plant architecture (PA), yield components (YC), and fiber properties (FP) in the two cultivated tetraploid species of cotton (Gossypium hirsutum L. and G. barbadense L.). Genomic regions were identifi...

  4. Experimental study of impact-cratering damage on brittle cylindrical column model as a fundamental component of space architecture

    NASA Astrophysics Data System (ADS)

    Fujiwara, Akira; Onose, Naomi; Setoh, Masato; Nakamura, Akiko M.; Hiraoka, Kensuke; Hasegawa, Sunao; Okudaira, Kyoko

    2014-10-01

    The cylindrical column of brittle material processed from soil and rock is a fundamental component of architecture on the surface of solid bodies in the solar system. One of the most hazardous events for such a structure is damage from hypervelocity impacts of meteoroids and debris. Against this background, cylindrical columns made of plaster of Paris and of glass-bead-sintered ceramic were impacted by spherical projectiles of nylon, glass, and steel at velocities of about 1-4.5 km/s. Measured crater radius, depth, and excavated mass, expressed as functions of the cylinder radius, are similar irrespective of the target material if those parameters are normalized by the corresponding parameters of a crater produced on a flat-surface target. Empirical scaling relations for the normalized crater radius and depth are provided. Using these relations, crater dimensions and excavated mass on a cylindrical surface of any radius can be predicted from existing knowledge of those for a flat surface. A recommendation for the minimum diameter of a cylinder that can resist a given impact is also provided.

  5. An Intelligent Architecture Based on Field Programmable Gate Arrays Designed to Detect Moving Objects by Using Principal Component Analysis

    PubMed Central

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high-rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method, and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA and based on the developed PCA core. It consists of dynamically thresholding the differences between the input image and its reconstruction from the PCA linear subspace previously obtained as a background model. The proposal achieves a high image-processing rate (up to 120 frames per second) and high-quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406
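
    The background-subtraction scheme described above can be sketched in a few lines of NumPy: fit a low-dimensional PCA subspace to background frames, reconstruct each new frame from that subspace, and threshold the residual. This is a software caricature of the FPGA design, with the Jacobi-based diagonalization replaced by an SVD; the frame size, subspace dimension, and threshold below are arbitrary choices:

```python
import numpy as np

def pca_background_model(frames, k):
    """Fit a k-dimensional PCA subspace to flattened background frames (rows)."""
    mean = frames.mean(axis=0)
    _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
    return mean, vt[:k]                  # top-k principal directions

def motion_mask(frame, mean, basis, thresh):
    """Flag pixels whose residual w.r.t. the background subspace is large."""
    coeffs = basis @ (frame - mean)      # project onto the background subspace
    recon = mean + basis.T @ coeffs      # background-only reconstruction
    return np.abs(frame - recon) > thresh

rng = np.random.default_rng(1)
bg = rng.normal(100.0, 2.0, size=(20, 64))   # 20 flattened background frames
mean, basis = pca_background_model(bg, k=3)

frame = bg[0].copy()
frame[10:14] += 80.0                          # simulated moving object
mask = motion_mask(frame, mean, basis, thresh=25.0)
print(np.flatnonzero(mask))                   # should include pixels 10..13
```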

  6. Dissection of the interferon gamma-MHC class II signal transduction pathway reveals that type I and type II interferon systems share common signalling component(s).

    PubMed Central

    Loh, J E; Chang, C H; Fodor, W L; Flavell, R A

    1992-01-01

    We have used a herpes virus thymidine kinase (HSV-TK)-based metabolic selection system to isolate mutants defective in the interferon gamma-mediated induction of the MHC class II promoter. All the mutations act in trans and result in no detectable induction of MHC and invariant chain (Ii) gene expression. Scatchard analysis indicates that the mutants have a normal number of surface IFN gamma receptors with the same affinity constant. The mutants fall into two broad categories. One class of mutants is still able to induce MHC class I, IRF-1, 9-27, 1-8 and GBP genes by IFN gamma. A second class of mutants is defective for the IFN gamma induction of all the genes tested; surprisingly, the IFN alpha/beta induction of MHC class I, 9-27, ISG54 and ISG15 genes is also defective in these mutants, although different members of this class can be discriminated by the response of the GBP and IRF-1 genes to type I interferons. These data demonstrate that the signalling pathways of both type I and type II interferon systems share common signal transduction component(s). These mutants will be useful for the study of IFN gamma regulation of class II genes and Ii chain, and to elucidate molecular components of type I and type II interferon signal transduction. PMID:1314162

  7. Comparative brain architecture of the European shore crab Carcinus maenas (Brachyura) and the common hermit crab Pagurus bernhardus (Anomura) with notes on other marine hermit crabs.

    PubMed

    Krieger, Jakob; Sombke, Andy; Seefluth, Florian; Kenning, Matthes; Hansson, Bill S; Harzsch, Steffen

    2012-04-01

    The European shore crab Carcinus maenas and the common hermit crab Pagurus bernhardus are members of the sister taxa Brachyura and Anomura (together forming the taxon Meiura) respectively. Both species share similar coastal marine habitats and thus are confronted with similar environmental conditions. This study sets out to explore variations of general brain architecture of species that live in seemingly similar habitats but belong to different major malacostracan taxa and to understand possible differences of sensory systems and related brain compartments. We examined the brains of Carcinus maenas, Pagurus bernhardus, and three other hermit crab species with immunohistochemistry against tyrosinated tubulin, f-actin, synaptic proteins, RF-amides and allatostatin. Our comparison showed that their optic neuropils within the eyestalks display strong resemblance in gross morphology as well as in detailed organization, suggesting a rather similar potential of processing visual input. Besides the well-developed visual system, the olfactory neuropils are distinct components in the brain of both C. maenas and P. bernhardus as well as the other hermit crabs, suggesting that close integration of olfactory and visual information may be useful in turbid marine environments with low visibility, as is typical for many habitats such as, e.g., the Baltic and the North Sea. Comparing the shape of the olfactory glomeruli in the anomurans showed some variations, ranging from a wedge shape to an elongate morphology. Furthermore, the tritocerebrum and the organization of the second antennae associated with the tritocerebrum seem to differ markedly in C. maenas and P. bernhardus, indicating better mechanosensory abilities in the latter close to those of other Decapoda with long second antennae, such as Astacida, Homarida, or Achelata. This aspect may also represent an adaptation to the "hermit lifestyle" in which competition for shells is a major aspect of their life history. 

  8. Origin of the genetic components of the vomeronasal system in the common ancestor of all extant vertebrates.

    PubMed

    Grus, Wendy E; Zhang, Jianzhi

    2009-02-01

    Comparative genomics provides a valuable tool for inferring the evolutionary history of physiological systems, particularly when this information is difficult to ascertain by morphological traits. One such example is the vomeronasal system (VNS), a vertebrate nasal chemosensory system that is responsible for detecting intraspecific pheromonal cues as well as environmental odorants. The morphological components of the VNS are found only in tetrapods, but the genetic components of the system have been found in teleost fish, in addition to tetrapods. To determine when the genetic components of the VNS originated, we searched for the VNS-specific genes in the genomes of two early diverging vertebrate lineages: the sea lamprey from jawless fishes and the elephant shark from cartilaginous fishes. Genes encoding vomeronasal type 1 receptors (V1Rs) and Trpc2, two components of the vomeronasal signaling pathway, are present in the sea lamprey genome, and both are expressed in the olfactory organ, revealing that the genetic components of the present-day VNS existed in the common ancestor of all extant vertebrates. Additionally, all three VNS genes, Trpc2, V1Rs, and vomeronasal type 2 receptors (V2Rs), are found in the elephant shark genome. Because V1Rs and V2Rs are related to two families of taste receptors, we also searched the early diverging vertebrate genomes for taste system genes and found them in the shark genome but not in the lamprey. Coupled with known distributions of the genetic components of the vertebrate main olfactory system, our results suggest staggered origins of vertebrate sensory systems. These findings are important for understanding the evolution of vertebrate sensory systems and illustrate the utility of the genome sequences of early diverging vertebrates for uncovering the evolution of vertebrate-specific traits. PMID:19008528

  9. Common components of industrial metal-working fluids as sources of carbon for bacterial growth. [Acinetobacter; Pseudomonas

    SciTech Connect

    Foxall-vanAken, S.; Brown, J.A. Jr.; Young, W.; Salmeen, I.; McClure, T.; Napier, S. Jr.; Olsen, R.H.

    1986-06-01

    Water-based metal-working fluids in large-scale industrial operations consist of many components, but in the most commonly used formulations only three classes of components are present in high enough concentrations that they could, in principle, provide enough carbon to support the high bacterial densities (10/sup 9/ CFU/ml) often observed in contaminated factory fluids. These components are petroleum oil (1 to 5%), petroleum sulfonates (0.1 to 0.5%), and fatty acids (less than 0.1%, mainly linoleic and oleic acids supplied as tall oils). Pure strains of predominating bacteria were isolated from contaminated reservoirs of two metal-working systems, and 12 randomly selected strains were tested in liquid culture for growth with each of the metal-working fluid components as the sole source of carbon. Of the 12 strains, 7 reached high density (10/sup 9/ CFU/ml from an initial inoculum of less than 2 x 10/sup 3/) in 24 h, and 1 strain did the same in 48 h, with 0.05% oleic or linoleic acid as the carbon source. These same strains also grew on 1% naphthenic petroleum oil but required up to 72 h to reach densities near 10/sup 8/ CFU/ml. One strain grew slightly and the others not at all on the petroleum sulfonates. The four remaining strains did not grow on any of the components, even though they were among the predominating bacteria in the contaminated system. Of the seven strains that grew best on the fatty acids and on the naphthenic petroleum oil, five were tentatively identified as Acinetobacter species and two were identified as Pseudomonas species. Four of the bacteria that did not grow were tentatively identified as species of Pseudomonas, and one could not be identified.

  10. [Rapid identification of 15 effective components of anti-common cold medicine with MRM by LC-MS/MS].

    PubMed

    Jiang, Jian-Guo; Zhang, Xi-Ru; Zhang, Yi-Hua; Song, Geng-Shen

    2013-01-01

    This paper reports the establishment of a method for rapid identification of 15 effective components of anti-common cold medicine (paracetamol, aminophenazone, pseudoephedrine hydrochloride, methylephedrine hydrochloride, caffeine, amantadine hydrochloride, phenazone, guaifenesin, chlorphenamine maleate, dextromethorphan hydrobromide, diphenhydramine hydrochloride, promethazine hydrochloride, propyphenazone, benorilate and diclofenac sodium) with MRM by LC-MS/MS. The samples were extracted with methanol and separated on an Atlantis T3 column within 15 min using a gradient of acetonitrile-ammonium acetate (containing 0.25% glacial acetic acid). A tandem quadrupole mass spectrometer equipped with an electrospray ionization source (ESI) was used in positive ion mode, and multiple reaction monitoring (MRM) was performed for qualitative analysis of these compounds. The minimum detectable quantities of the 15 compounds were 0.33-2.5 microg x kg(-1). The method is simple, accurate and reproducible for rapid identification of many components under the same chromatographic conditions, and provides a reference for the qualitative analysis of chemicals illegally added to anti-common cold medicines. PMID:23600148

  11. Extracting the regional common-mode component of GPS station position time series from dense continuous network

    NASA Astrophysics Data System (ADS)

    Tian, Yunfeng; Shen, Zheng-Kang

    2016-02-01

    We develop a spatial filtering method to remove random noise and extract the spatially correlated transients (i.e., the common-mode component (CMC)) that deviate from zero mean over the span of detrended position time series of a continuous Global Positioning System (CGPS) network. The technique utilizes a weighting scheme that incorporates two factors—distances between neighboring sites and the correlations of their long-term residual position time series. We use a grid search algorithm to find the optimal thresholds for deriving the CMC that minimize the root-mean-square (RMS) of the filtered residual position time series. Compared to the principal component analysis technique, our method achieves better (>13% on average) reduction of residual position scatter for the CGPS stations in western North America, eliminating regional transients of all spatial scales. It also has advantages in data manipulation: it requires less intervention and is applicable to a dense network of any spatial extent. Our method can also be used to detect CMC irrespective of its origin (i.e., tectonic or nontectonic), if such signals are of particular interest for further study. By varying the filtering distance range, the long-range CMC related to atmospheric disturbance can be filtered out, uncovering CMC associated with transient tectonic deformation. A correlation-based clustering algorithm is adopted to identify station clusters that share common regional transient characteristics.
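
    The weighting scheme described in this abstract — combining inter-station distance with residual-correlation screening — can be sketched as follows. This is a simplified illustration, not the authors' implementation: the Gaussian distance weight and the fixed thresholds are assumptions (the paper instead finds optimal thresholds by grid search minimizing filtered RMS).

    ```python
    import numpy as np

    def common_mode(residuals, distances, corr_min=0.5, dist_max=400.0):
        """Estimate the common-mode component (CMC) at each station as a
        weighted mean of neighboring stations' detrended residuals.

        residuals : (n_stations, n_epochs) detrended position residuals
        distances : (n_stations, n_stations) inter-station distances, km
        corr_min, dist_max : screening thresholds (fixed here; chosen by
                             grid search in the paper)
        """
        corr = np.corrcoef(residuals)  # long-term residual correlations
        cmc = np.zeros_like(residuals)
        for i in range(residuals.shape[0]):
            # neighbors within the distance range and sufficiently correlated
            mask = (distances[i] <= dist_max) & (corr[i] >= corr_min)
            mask[i] = False
            if not mask.any():
                continue
            # weight: correlation damped by a Gaussian of distance (assumed form)
            w = corr[i, mask] * np.exp(-(distances[i, mask] / dist_max) ** 2)
            cmc[i] = w @ residuals[mask] / w.sum()
        return cmc
    ```

    Subtracting the returned CMC from the residuals gives the filtered series; the RMS reduction of that filtered series is the performance measure the abstract reports.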

  12. Overview of the surface architecture and elements common to a wide range of Lunar and Mars missions

    NASA Technical Reports Server (NTRS)

    Connolly, John F.; Toups, Larry D.

    1990-01-01

    NASA has studied future missions to the Moon and Mars since the 1960s, most recently during the studies for the Space Exploration Initiative chartered by President Bush. With these most recent studies, the Lunar and Mars Exploration Program Office is looking at a number of possible options for the human exploration of the solar system. Objectives of these options include science and exploration, testing and learning centers, local planetary resource development, and self-sufficient bases. To meet the objectives of any particular mission, efforts have focused primarily on three areas: (1) space transportation vehicles, (2) the associated space infrastructure to support these vehicles, and (3) the necessary infrastructure on the planet surface to carry out the mission objectives. This paper looks at work done by the Planet Surface Systems Office at JSC in the third area and presents an overview of the approach to determining the appropriate equipment and elements of the surface infrastructure needed for these mission alternatives. It describes the process of deriving appropriate surface architectures, with consideration of mission objectives leading to system concepts, designation of elements, and element placement.

  13. Ground-penetrating radar survey on the island of Pantelleria (Italy) reveals an ancient architectural complex with likely Punic and Roman components

    NASA Astrophysics Data System (ADS)

    Urban, Thomas M.; Murray, Carrie Ann; Vella, Clive; Lahikainen, Amanda

    2015-12-01

    A ground-penetrating radar (GPR) survey conducted on the small volcanic island of Pantelleria, in the Strait of Sicily, south-central Mediterranean, revealed an apparent complex of Punic/Roman architecture. The survey focused on the Lago di Venere area, where a previously investigated ritual Punic site was built alongside a brackish volcanic lake. The site also exhibits evidence of earlier Eneolithic components and later Roman components. The full extent of the site has remained undetermined, however, with only the small area of the Punic ritual complex having been excavated from 1996 to 2002. The GPR survey was intended to explore whether additional architecture remained unseen in surrounding areas, thus taking a first step toward determining the site's full spatial extent and archaeological potential. This survey revealed a complex of architectural ruins beneath an active agricultural field immediately west of the previously excavated features, and extending to a depth of approximately 2 m. These newly discovered features expand the known architectural footprint of the immediate site by three-fold. This GPR study is the first published archaeo-geophysical investigation on the island.

  14. Developmental plasticity of the microscopic placental architecture in relation to litter size variation in the common marmoset monkey (Callithrix jacchus)

    PubMed Central

    Rutherford, Julienne N.; Tardif, Suzette D.

    2012-01-01

    Fetal demand, shaped by factors such as number of fetuses, may alter placental regulation of exchange, even when maternal nutrition restriction is not overt. The marmoset is an interesting model in which to examine this aspect of placental function due to unique placentation that leads to multiple fetuses sharing a unified placental mass. We demonstrated previously that the triplet marmoset placenta exhibits significantly higher efficiency than does the twin placenta. Here, we test the hypothesis that this increased efficiency is due to changes in the microscopic morphology of the placenta. Stereology was employed to analyze the microscopic architecture of placentas from twin and triplet pregnancies. Compartments of interest were the trabeculae, intertrabecular space, fetal capillaries, and the surface area of the maternal-fetal interface. Placentas from the two litters did not differ significantly in overall volume or individual volumetric compartments, but triplet placentas exhibited significant expansion of the trabecular surface area in comparison to twins (p=0.039). Further, the two groups differed in the isomorphy coefficient, with triplet placentas having a significantly higher coefficient (p=0.001) and potentially a more complex microscopic topography. Differences in the maternal-fetal interface may be due to developmental constraints on gross placental growth that occur earlier in gestation, such that the only option for maintaining sufficient access to maternal resources or signaling pathways late in gestation is via an expansion of the interface. Despite the significant increase in overall surface area, individual triplet fetuses are associated with much less surface area than are individual twins, suggestive of alterations in metabolic efficiency, perhaps via differential amino acid transport regulation. PMID:19038443

  15. Developmental plasticity of the microscopic placental architecture in relation to litter size variation in the common marmoset monkey (Callithrix jacchus).

    PubMed

    Rutherford, J N; Tardif, S D

    2009-01-01

    Fetal demand, shaped by factors such as number of fetuses, may alter placental regulation of exchange, even when maternal nutrition restriction is not overt. The marmoset is an interesting model in which to examine this aspect of placental function due to unique placentation that leads to multiple fetuses sharing a unified placental mass. We demonstrated previously that the triplet marmoset placenta exhibits significantly higher efficiency than does the twin placenta. Here, we test the hypothesis that this increased efficiency is due to changes in the microscopic morphology of the placenta. Stereology was employed to analyze the microscopic architecture of placentas from twin and triplet pregnancies. Compartments of interest were the trabeculae, intertrabecular space, fetal capillaries, and the surface area of the maternal-fetal interface. Placentas from the two litters did not differ significantly in overall volume or individual volumetric compartments, but triplet placentas exhibited significant expansion of the trabecular surface area in comparison to twins (p=0.039). Further, the two groups differed in the isomorphy coefficient, with triplet placentas having a significantly higher coefficient (p=0.001) and potentially a more complex microscopic topography. Differences in the maternal-fetal interface may be due to developmental constraints on gross placental growth that occur earlier in gestation, such that the only option for maintaining sufficient access to maternal resources or signaling pathways late in gestation is via an expansion of the interface. Despite the significant increase in overall surface area, individual triplet fetuses are associated with much less surface area than are individual twins, suggestive of alterations in metabolic efficiency, perhaps via differential amino acid transport regulation. PMID:19038443

  16. The structures of cytosolic and plastid-located glutamine synthetases from Medicago truncatula reveal a common and dynamic architecture

    SciTech Connect

    Torreira, Eva; Seabra, Ana Rita; Marriott, Hazel; Zhou, Min; Llorca, Óscar; Robinson, Carol V.; Carvalho, Helena G.; Fernández-Tornero, Carlos; Pereira, Pedro José Barbosa

    2014-04-01

    The experimental models of dicotyledonous cytoplasmic and plastid-located glutamine synthetases unveil a conserved eukaryotic-type decameric architecture, with subtle structural differences in M. truncatula isoenzymes that account for their distinct herbicide resistance. The first step of nitrogen assimilation in higher plants, the energy-driven incorporation of ammonia into glutamate, is catalyzed by glutamine synthetase. This central process yields the readily metabolizable glutamine, which in turn is at the basis of all subsequent biosynthesis of nitrogenous compounds. The essential role performed by glutamine synthetase makes it a prime target for herbicidal compounds, but also a suitable intervention point for the improvement of crop yields. Although the majority of crop plants are dicotyledonous, little is known about the structural organization of glutamine synthetase in these organisms and about the functional differences between the different isoforms. Here, the structural characterization of two glutamine synthetase isoforms from the model legume Medicago truncatula is reported: the crystallographic structure of cytoplasmic GSII-1a and an electron cryomicroscopy reconstruction of plastid-located GSII-2a. Together, these structural models unveil a decameric organization of dicotyledonous glutamine synthetase, with two pentameric rings weakly connected by inter-ring loops. Moreover, rearrangement of these dynamic loops changes the relative orientation of the rings, suggesting a zipper-like mechanism for their assembly into a decameric enzyme. Finally, the atomic structure of M. truncatula GSII-1a provides important insights into the structural determinants of herbicide resistance in this family of enzymes, opening new avenues for the development of herbicide-resistant plants.

  17. Hardware Architecture Study for NASA's Space Software Defined Radios

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Scardelletti, Maximilian C.; Mortensen, Dale J.; Kacpura, Thomas J.; Andro, Monty; Smith, Carl; Liebetreu, John

    2008-01-01

    This study defines a hardware architecture approach for software defined radios to enable commonality among NASA space missions. The architecture accommodates a range of reconfigurable processing technologies, including general purpose processors, digital signal processors, field programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs), in addition to flexible and tunable radio frequency (RF) front-ends, to satisfy varying mission requirements. The hardware architecture consists of modules, radio functions, and interfaces. The modules are a logical division of the common radio functions that comprise a typical communication radio. This paper describes the architecture details, module definitions, and the typical functions on each module, as well as the module interfaces. Trade-offs between a component-based, custom architecture and a functional-based, open architecture are described. The architecture does not specify the internal physical implementation within each module, nor does it mandate the standards or ratings of the hardware used to construct the radios.
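
    The separation this study draws — fixed module interfaces with unconstrained internal implementation — is the same idea as interface-based design in software. A minimal sketch (module, class, and method names are illustrative, not taken from the study):

    ```python
    from abc import ABC, abstractmethod

    class SignalProcessingModule(ABC):
        """Contract for a radio's processing module: the architecture fixes
        this interface, not the technology (GPP, DSP, FPGA, ASIC) behind it."""

        @abstractmethod
        def configure(self, waveform: str) -> None: ...

        @abstractmethod
        def process(self, samples: list[complex]) -> list[complex]: ...

    class FpgaModule(SignalProcessingModule):
        """One interchangeable implementation; an ASIC- or DSP-backed module
        would satisfy the same interface without changing its clients."""

        def configure(self, waveform: str) -> None:
            self.waveform = waveform

        def process(self, samples: list[complex]) -> list[complex]:
            # placeholder processing standing in for real signal processing
            return [s.conjugate() for s in samples]
    ```

    Because clients depend only on `SignalProcessingModule`, swapping the underlying processing technology does not ripple through the rest of the radio — which is the commonality the architecture aims for.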

  18. GITEWS, an extensible and open integration platform for manifold sensor systems and processing components based on Sensor Web Enablement and the principles of Service Oriented Architectures

    NASA Astrophysics Data System (ADS)

    Haener, Rainer; Waechter, Joachim; Fleischer, Jens; Herrnkind, Stefan; Schwarting, Herrmann

    2010-05-01

    The German Indonesian Tsunami Early Warning System (GITEWS) is a multifaceted system consisting of various sensor types, such as seismometers, sea level sensors, and GPS stations, and of processing components, each with its own system behavior and proprietary data structure. To operate a warning chain, from measurements up to warning products, all components have to interact correctly, both syntactically and semantically. In designing the system, great emphasis was laid on conformity to the Sensor Web Enablement (SWE) specification of the Open Geospatial Consortium (OGC). The technical infrastructure, the so-called Tsunami Service Bus (TSB), follows the blueprint of Service Oriented Architectures (SOA). The TSB is an integration concept (SWE) in which functionality (observe, task, notify, alert, and process) is grouped around business processes (Monitoring, Decision Support, Sensor Management) and packaged as interoperable services (SAS, SOS, SPS, WNS). The benefits of using a flexible architecture together with SWE lead to an open integration platform that: • accesses and controls heterogeneous sensors in a uniform way (Functional Integration) • assigns functionality to distinct services (Separation of Concerns) • allows resilient relationships between systems (Loose Coupling) • integrates services so that they can be accessed from everywhere (Location Transparency) • enables infrastructures that integrate heterogeneous applications (Encapsulation) • allows combination of services (Orchestration) and data exchange within business processes. Warning systems will evolve over time: new sensor types might be added, old sensors will be replaced, and processing components will be improved. From a collection of a few basic services it should be possible to compose the more complex functionality essential for specific warning systems. Given these requirements, a flexible infrastructure is a prerequisite for sustainable systems and their architecture must be

  20. Plastic and Heritable Components of Phenotypic Variation in Nucella lapillus: An Assessment Using Reciprocal Transplant and Common Garden Experiments

    PubMed Central

    Pascoal, Sonia; Carvalho, Gary; Creer, Simon; Rock, Jenny; Kawaii, Kei; Mendo, Sonia; Hughes, Roger

    2012-01-01

    Assessment of plastic and heritable components of phenotypic variation is crucial for understanding the evolution of adaptive character traits in heterogeneous environments. We assessed the above in relation to adaptive shell morphology of the rocky intertidal snail Nucella lapillus by reciprocal transplantation of snails between two shores differing in wave action and rearing snails of the same provenance in a common garden. Results were compared with those reported for similar experiments conducted elsewhere. Microsatellite variation indicated limited gene flow between the populations. Intrinsic growth rate was greater in exposed-site than sheltered-site snails, but the reverse was true of absolute growth rate, suggesting heritable compensation for reduced foraging opportunity at the exposed site. Shell morphology of reciprocal transplants partially converged through plasticity toward that of native snails. Shell morphology of F2s in the common garden partially retained characteristics of the P-generation, suggesting genetic control. A maternal effect was revealed by greater resemblance of F1s than F2s to the P-generation. The observed synergistic effects of plastic, maternal and genetic control of shell-shape may be expected to maximise fitness when environmental characteristics become unpredictable through dispersal. PMID:22299035

  1. The genetic architecture of autism spectrum disorders (ASDs) and the potential importance of common regulatory genetic variants.

    PubMed

    Saffen, David

    2015-10-01

    Currently, there is great interest in identifying genetic variants that contribute to the risk of developing autism spectrum disorders (ASDs), due in part to recent increases in the frequency of diagnosis of these disorders worldwide. While there is nearly universal agreement that ASDs are complex diseases, with multiple genetic and environmental contributing factors, there is less agreement concerning the relative importance of common vs rare genetic variants in ASD liability. Recent observations that rare mutations and copy number variants (CNVs) are frequently associated with ASDs, combined with reduced fecundity of individuals with these disorders, has led to the hypothesis that ASDs are caused primarily by de novo or rare genetic mutations. Based on this model, large-scale whole-genome DNA sequencing has been proposed as the most appropriate method for discovering ASD liability genes. While this approach will undoubtedly identify many novel candidate genes and produce important new insights concerning the genetic causes of these disorders, a full accounting of the genetics of ASDs will be incomplete absent an understanding of the contributions of common regulatory variants, which are likely to influence ASD liability by modifying the effects of rare variants or, by assuming unfavorable combinations, directly produce these disorders. Because it is not yet possible to identify regulatory genetic variants by examination of DNA sequences alone, their identification will require experimentation. In this essay, I discuss these issues and describe the advantages of measurements of allelic expression imbalance (AEI) of mRNA expression for identifying cis-acting regulatory variants that contribute to ASDs. PMID:26335735

  2. Uncovering the genetic architecture of Colletotrichum lindemuthianum resistance through QTL mapping and epistatic interaction analysis in common bean

    PubMed Central

    González, Ana M.; Yuste-Lisbona, Fernando J.; Rodiño, A. Paula; De Ron, Antonio M.; Capel, Carmen; García-Alcázar, Manuel; Lozano, Rafael; Santalla, Marta

    2015-01-01

    Colletotrichum lindemuthianum is a hemibiotrophic fungal pathogen that causes anthracnose disease in common bean. Although the genetics of anthracnose resistance has been studied for a long time, few quantitative trait locus (QTL) studies have been conducted on this species. The present work examines the genetic basis of quantitative resistance to races 23 and 1545 of C. lindemuthianum in different organs (stem, leaf and petiole). A population of 185 recombinant inbred lines (RILs) derived from the cross PMB0225 × PHA1037 was evaluated for anthracnose resistance under natural and artificial photoperiod growth conditions. Using a multi-environment QTL mapping approach, 10 and 16 main effect QTLs were identified for resistance to anthracnose races 23 and 1545, respectively. The homologous genomic regions corresponding to 17 of the 26 main effect QTLs detected were positive for the presence of resistance-associated gene clusters encoding nucleotide-binding and leucine-rich repeat (NL) proteins. Among them, it is worth noting that the main effect QTLs detected on linkage group 05 for resistance to race 1545 in stem, petiole and leaf were located within a 1.2 Mb region. The NL gene Phvul.005G117900 is located in this region and can be considered an important candidate gene for the non-organ-specific QTL identified here. Furthermore, a total of 39 epistatic QTLs (E-QTLs) (21 for resistance to race 23 and 18 for resistance to race 1545) involved in 20 epistatic interactions (eleven and nine interactions for resistance to races 23 and 1545, respectively) were identified. None of the main or epistatic QTLs detected displayed significant environment interaction effects. The present research provides essential information not only for a better understanding of the plant-pathogen interaction but also for genomics-assisted breeding for anthracnose resistance improvement in common bean through the application of marker-assisted selection (MAS). PMID:25852706

  3. A Pan-Cancer Modular Regulatory Network Analysis to Identify Common and Cancer-Specific Network Components

    PubMed Central

    Knaack, Sara A; Siahpirani, Alireza Fotuhi; Roy, Sushmita

    2014-01-01

    Many human diseases, including cancer, are the result of perturbations to the transcriptional regulatory networks that control context-specific expression of genes. A comparative approach across multiple cancer types is a powerful way to illuminate the common and specific network features of this family of diseases. Recent efforts from The Cancer Genome Atlas (TCGA) have generated large collections of functional genomic data sets for multiple types of cancers. An emerging challenge is to devise computational approaches that systematically compare these genomic data sets across different cancer types and identify common and cancer-specific network components. We present a module- and network-based characterization of transcriptional patterns in six different cancers being studied in TCGA: breast, colon, rectal, kidney, ovarian, and endometrial. Our approach uses a recently developed regulatory network reconstruction algorithm, modular regulatory network learning with per gene information (MERLIN), within a stability selection framework to predict regulators for individual genes and gene modules. Our module-based analysis identifies a common theme of immune system processes in each cancer study, with modules statistically enriched for immune response processes as well as targets of key immune response regulators from the interferon regulatory factor (IRF) and signal transducer and activator of transcription (STAT) families. Comparison of the inferred regulatory networks from each cancer type identified a core regulatory network that included genes involved in chromatin remodeling, cell cycle, and immune response. Regulatory network hubs included genes with known roles in specific cancer types as well as genes with potentially novel roles in different cancer types. Overall, our integrated module and network analysis recapitulated known themes in cancer biology and additionally revealed novel regulatory hubs that suggest a complex interplay of immune response, cell
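
    The cross-cancer comparison step this abstract describes — deriving a core network shared by all cancer types and the edges specific to each — can be sketched with toy edge sets (the gene names and edges below are invented for illustration; this is not MERLIN itself, only the set-intersection logic of the comparison):

    ```python
    # Each per-cancer network is a set of (regulator, target) edges.
    networks = {
        "breast":  {("STAT1", "IRF1"), ("E2F1", "CCNE1"), ("IRF1", "GBP1")},
        "colon":   {("STAT1", "IRF1"), ("E2F1", "CCNE1"), ("TP53", "CDKN1A")},
        "ovarian": {("STAT1", "IRF1"), ("E2F1", "CCNE1"), ("IRF1", "GBP1")},
    }

    # Core regulatory network: edges inferred in every cancer type.
    core = set.intersection(*networks.values())

    # Cancer-specific components: edges unique to each type after removing the core.
    specific = {name: edges - core for name, edges in networks.items()}
    ```

    On real data the per-cancer edge sets would come from the stability-selected MERLIN runs, and the same intersection/difference operations separate shared biology from cancer-specific rewiring.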

  4. Improving the Discoverability and Availability of Sample Data and Imagery in NASA's Astromaterials Curation Digital Repository Using a New Common Architecture for Sample Databases

    NASA Technical Reports Server (NTRS)

    Todd, N. S.; Evans, C.

    2015-01-01

    The Astromaterials Acquisition and Curation Office at NASA's Johnson Space Center (JSC) is the designated facility for curating all of NASA's extraterrestrial samples. The suite of collections includes the lunar samples from the Apollo missions, cosmic dust particles falling into the Earth's atmosphere, meteorites collected in Antarctica, comet and interstellar dust particles from the Stardust mission, asteroid particles from the Japanese Hayabusa mission, and solar wind atoms collected during the Genesis mission. To support planetary science research on these samples, NASA's Astromaterials Curation Office hosts the Astromaterials Curation Digital Repository, which provides descriptions of the missions and collections, and critical information about each individual sample. Our office is implementing several informatics initiatives with the goal of better serving the planetary research community. One of these initiatives aims to increase the availability and discoverability of sample data and images through the use of a newly designed common architecture for Astromaterials Curation databases.

  5. IAIMS Architecture

    PubMed Central

    Hripcsak, George

    1997-01-01

    An information system architecture defines the components of a system and the interfaces among the components. A good architecture is essential for creating an Integrated Advanced Information Management System (IAIMS) that works as an integrated whole yet is flexible enough to accommodate many users and roles, multiple applications, changing vendors, evolving user needs, and advancing technology. Modularity and layering promote flexibility by reducing the complexity of a system and by restricting the ways in which components may interact. Enterprise-wide mediation promotes integration by providing message routing, support for standards, dictionary-based code translation, a centralized conceptual data schema, business rule implementation, and consistent access to databases. Several IAIMS sites have adopted a client-server architecture, and some have adopted a three-tiered approach, separating user interface functions, application logic, and repositories. PMID:9067884

  6. IAIMS architecture.

    PubMed

    Hripcsak, G

    1997-01-01

    An information system architecture defines the components of a system and the interfaces among the components. A good architecture is essential for creating an Integrated Advanced Information Management System (IAIMS) that works as an integrated whole yet is flexible enough to accommodate many users and roles, multiple applications, changing vendors, evolving user needs, and advancing technology. Modularity and layering promote flexibility by reducing the complexity of a system and by restricting the ways in which components may interact. Enterprise-wide mediation promotes integration by providing message routing, support for standards, dictionary-based code translation, a centralized conceptual data schema, business rule implementation, and consistent access to databases. Several IAIMS sites have adopted a client-server architecture, and some have adopted a three-tiered approach, separating user interface functions, application logic, and repositories. PMID:9067884
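
    The three-tiered separation described above (user interface, application logic, repositories) can be sketched minimally as follows. This is an illustrative sketch only; all class and method names are invented and not drawn from any actual IAIMS implementation.

```python
# Hypothetical sketch of a three-tier split: each tier depends only on the
# narrow interface of the tier below, so any tier can be swapped out.

class Repository:
    """Data tier: owns storage; knows nothing about the UI."""
    def __init__(self):
        self._records = {}

    def save(self, key, value):
        self._records[key] = value

    def load(self, key):
        return self._records.get(key)


class ApplicationLogic:
    """Middle tier: business rules; depends only on the repository interface."""
    def __init__(self, repository):
        self.repository = repository

    def register_patient(self, patient_id, name):
        if not name:
            raise ValueError("patient name is required")  # a business rule
        self.repository.save(patient_id, {"name": name})
        return patient_id

    def lookup(self, patient_id):
        return self.repository.load(patient_id)


class UserInterface:
    """Presentation tier: formats output; contains no business rules."""
    def __init__(self, logic):
        self.logic = logic

    def show(self, patient_id):
        record = self.logic.lookup(patient_id)
        return f"Patient {patient_id}: {record['name']}" if record else "not found"


ui = UserInterface(ApplicationLogic(Repository()))
ui.logic.register_patient("p1", "Doe")
print(ui.show("p1"))  # Patient p1: Doe
```

    Because each tier interacts only through the interface of the tier below it, a different repository (e.g., a relational database) or a different front end could be substituted without touching the business rules, which is the flexibility argument the abstract makes.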

  7. Components of the Plasminogen Activation System Promote Engraftment of Porous Polyethylene Biomaterial via Common and Distinct Effects

    PubMed Central

    Reichel, Christoph A.; Hessenauer, Maximilian E. T.; Pflieger, Kerstin; Rehberg, Markus; Kanse, Sandip M.; Zahler, Stefan; Krombach, Fritz; Berghaus, Alexander; Strieth, Sebastian

    2015-01-01

    Rapid fibrovascularization is a prerequisite for successful biomaterial engraftment. In addition to their well-known roles in fibrinolysis, urokinase-type plasminogen activator (uPA) and tissue plasminogen activator (tPA) or their inhibitor plasminogen activator inhibitor-1 (PAI-1) have recently been implicated as individual mediators in non-fibrinolytic processes, including cell adhesion, migration, and proliferation. Since these events are critical for fibrovascularization of biomaterial, we hypothesized that the components of the plasminogen activation system contribute to biomaterial engraftment. Employing in vivo and ex vivo microscopy techniques, vessel and collagen network formation within porous polyethylene (PPE) implants engrafted into dorsal skinfold chambers were found to be significantly impaired in uPA-, tPA-, or PAI-1-deficient mice. Consequently, the force required for mechanical disintegration of the implants out of the host tissue was significantly lower in the mutant mice than in wild-type controls. Conversely, surface coating with recombinant uPA, tPA, non-catalytic uPA, or PAI-1, but not with non-catalytic tPA, accelerated implant vascularization in wild-type mice. Thus, uPA, tPA, and PAI-1 contribute to the fibrovascularization of PPE implants through common and distinct effects. From a clinical perspective, surface coating with recombinant uPA, tPA, or PAI-1 might provide a novel strategy for accelerating the vascularization of this biomaterial. PMID:25658820

  8. Overall Architecture of the Intraflagellar Transport (IFT)-B Complex Containing Cluap1/IFT38 as an Essential Component of the IFT-B Peripheral Subcomplex.

    PubMed

    Katoh, Yohei; Terada, Masaya; Nishijima, Yuya; Takei, Ryota; Nozaki, Shohei; Hamada, Hiroshi; Nakayama, Kazuhisa

    2016-05-20

    Intraflagellar transport (IFT) is essential for assembly and maintenance of cilia and flagella as well as ciliary motility and signaling. IFT is mediated by multisubunit complexes, including IFT-A, IFT-B, and the BBSome, in concert with kinesin and dynein motors. Under high salt conditions, purified IFT-B complex dissociates into a core subcomplex composed of at least nine subunits and at least five peripherally associated proteins. Using the visible immunoprecipitation assay, which we recently developed as a convenient protein-protein interaction assay, we determined the overall architecture of the IFT-B complex, which can be divided into core and peripheral subcomplexes composed of 10 and 6 subunits, respectively. In particular, we identified TTC26/IFT56 and Cluap1/IFT38, neither of which was included with certainty in previous models of the IFT-B complex, as integral components of the core and peripheral subcomplexes, respectively. Consistent with this, a ciliogenesis defect of Cluap1-deficient mouse embryonic fibroblasts was rescued by exogenous expression of wild-type Cluap1 but not by mutant Cluap1 lacking the binding ability to other IFT-B components. The detailed interaction map as well as comparison of subcellular localization of IFT-B components between wild-type and Cluap1-deficient cells provides insights into the functional relevance of the architecture of the IFT-B complex. PMID:26980730

  9. Information architecture for a planetary 'exploration web'

    NASA Technical Reports Server (NTRS)

    Lamarra, N.; McVittie, T.

    2002-01-01

    'Web services' is a common way of deploying distributed applications whose software components and data sources may be in different locations, formats, languages, etc. Although such collaboration is not utilized significantly in planetary exploration, we believe there is significant benefit in developing an architecture in which missions could leverage each other's capabilities. An incremental deployment of such an architecture could significantly contribute to the evolution of increasingly capable, efficient, and even autonomous remote exploration.
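
    The service-oriented "exploration web" idea could be sketched as a minimal service registry in which missions publish discoverable capabilities. The registry API, service names, and metadata below are hypothetical, invented only to illustrate the discover-then-invoke pattern that web-service architectures rely on.

```python
# Minimal sketch of service discovery for an "exploration web":
# missions register named capabilities, and clients discover and invoke
# them without knowing where or how they are implemented.

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, name, handler, metadata=None):
        """Publish a capability under a discoverable name."""
        self._services[name] = {"handler": handler, "metadata": metadata or {}}

    def discover(self, keyword):
        """Find services whose name or metadata mentions the keyword."""
        return [name for name, svc in self._services.items()
                if keyword in name or keyword in str(svc["metadata"])]

    def invoke(self, name, **kwargs):
        """Call a capability by name, decoupling caller from provider."""
        return self._services[name]["handler"](**kwargs)


registry = ServiceRegistry()
registry.register("mars_orbiter.imagery",
                  lambda region: f"images of {region}",
                  metadata={"type": "imagery", "body": "Mars"})

print(registry.discover("imagery"))  # ['mars_orbiter.imagery']
print(registry.invoke("mars_orbiter.imagery", region="Valles Marineris"))
```

    In a real deployment the registry would be a network service and invocation would go over a protocol such as HTTP, but the architectural point, that missions interoperate through published interfaces rather than shared code, is the same.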

  10. A multi-step phosphorelay two-component system impacts on tolerance against dehydration stress in common wheat.

    PubMed

    Gahlaut, Vijay; Mathur, Saloni; Dhariwal, Raman; Khurana, Jitendra P; Tyagi, Akhilesh K; Balyan, Harindra S; Gupta, Pushpendra K

    2014-12-01

    Wheat is an important staple crop, and its productivity is severely constrained by drought stress (DS). An understanding of the molecular basis of drought tolerance is necessary for genetic improvement of wheat for tolerance to DS. The two-component system (TCS) serves as a common sensor-regulator coupling mechanism implicated in the regulation of diverse biological processes (including response to DS) not only in prokaryotes, but also in higher plants. In the latter, TCS generally consists of two signalling elements, a histidine kinase (HK) and a response regulator (RR) associated with an intermediate element called histidine phosphotransferase (HPT). Keeping in view the possible utility of TCS in developing water use efficient (WUE) wheat cultivars, we identified and characterized 62 wheat genes encoding TCS elements in an in silico study; these included 7 HKs and 45 RRs, along with 10 HPTs. Twelve of the 62 genes showed relatively higher alterations in the expression under drought. The quantitative RT-PCR (qRT-PCR)-based expression analysis of these 12 TCS genes was carried out in wheat seedlings of a drought sensitive (HD2967) and a tolerant (Dharwar Dry) cultivar subjected to either dehydration stress or cytokinin treatment. The expression of these 12 genes under dehydration stress differed in sensitive and tolerant genotypes, even though for individual genes, both showed either up-regulation or down-regulation. In response to the treatment of cytokinin, the expression of type-A RR genes was higher in the tolerant genotype, relative to that in the sensitive genotype, the situation being reverse for the type-B RRs. These results have been discussed in the context of the role of TCS elements in drought tolerance in wheat. PMID:25228409

  11. The Prp19 U-box Crystal Structure Suggests a Common Dimeric Architecture for a Class of Oligomeric E3 Ubiquitin Ligases †,‡

    PubMed Central

    Vander Kooi, Craig W.; Ohi, Melanie D.; Rosenberg, Joshua A.; Oldham, Michael L.; Newcomer, Marcia E.; Gould, Kathleen L.; Chazin, Walter J.

    2008-01-01

    Prp19 is an essential splicing factor and a member of the U-box family of E3 ubiquitin ligases. Prp19 forms a tetramer via a central coiled-coil domain. Here we show the U-box domain of Prp19 exists as a dimer within the context of the Prp19 tetramer. A high-resolution structure of the homo-dimeric state of the Prp19 U-box was determined by X-ray crystallography. Mutation of the U-box dimer interface abrogates U-box dimer formation and is lethal in vivo. The structure of the U-box dimer enables construction of a complete model of Prp19 providing insights into how the tetrameric protein functions as an E3 ligase. Finally, comparison of the Prp19 U-box homodimer with the heterodimeric complex of BRCA1/BARD1 RING-finger domains uncovers a common architecture for a family of oligomeric U-box and RING-finger E3 ubiquitin ligases, which has mechanistic implications for E3 ligase mediated poly-ubiquitination and E4 poly-ubiquitin ligases. PMID:16388587

  12. Analysis of the genetic architecture of susceptibility to cervical cancer indicates that common SNPs explain a large proportion of the heritability.

    PubMed

    Chen, Dan; Cui, Tao; Ek, Weronica E; Liu, Han; Wang, Huibo; Gyllensten, Ulf

    2015-09-01

    The genetic architecture of susceptibility to cervical cancer is not well understood. By using a genome-wide association study (GWAS) of 1034 cervical cancer patients and 3948 controls with 632668 single-nucleotide polymorphisms (SNPs), we estimated that 24.0% [standard error (SE) = 5.9%, P = 3.19×10⁻⁶] of variation in liability to cervical cancer is captured by autosomal SNPs, a bit lower than the heritability estimated from family studies (27.0%), suggesting that a substantial proportion of the heritability is tagged by common SNPs. The remaining missing heritability most probably reflects incomplete linkage disequilibrium between causal variants and the genotyped SNPs. The variance explained by each chromosome is not related to its length (R² = 0.020, P = 0.516). Published genome-wide significant variants only explain 2.1% (SE = 1.5%, P = 0) of phenotypic variance, which reveals that most of the heritability has not been detected, presumably due to small individual effects. Another 2.1% (SE = 1.1%, P = 0.013) of variation is attributable to biological pathways associated with risk of cervical cancer, supporting that pathway analysis can identify part of the hidden heritability. Except for human leukocyte antigen genes and MHC class I polypeptide-related sequence A (MICA), none of the 82 candidate genes/regions reported in other association studies contributes to the heritability of cervical cancer in our dataset. This study shows that risk of cervical cancer is influenced by many common germline genetic variants of small effects. The findings are important for further study design to identify the hidden heritability that has not yet been revealed. More susceptibility loci are yet to be found in GWASs with higher power. PMID:26045304

  13. PICNIC Architecture.

    PubMed

    Saranummi, Niilo

    2005-01-01

    The PICNIC architecture aims at supporting inter-enterprise integration and the facilitation of collaboration between healthcare organisations. The concept of a Regional Health Economy (RHE) is introduced to illustrate the varying nature of inter-enterprise collaboration between healthcare organisations collaborating in providing health services to citizens and patients in a regional setting. The PICNIC architecture comprises a number of PICNIC IT Services, the interfaces between them and presents a way to assemble these into a functioning Regional Health Care Network meeting the needs and concerns of its stakeholders. The PICNIC architecture is presented through a number of views relevant to different stakeholder groups. The stakeholders of the first view are national and regional health authorities and policy makers. The view describes how the architecture enables the implementation of national and regional health policies, strategies and organisational structures. The stakeholders of the second view, the service viewpoint, are the care providers, health professionals, patients and citizens. The view describes how the architecture supports and enables regional care delivery and process management including continuity of care (shared care) and citizen-centred health services. The stakeholders of the third view, the engineering view, are those that design, build and implement the RHCN. The view comprises four sub views: software engineering, IT services engineering, security and data. The proposed architecture is grounded in the mainstream evolution of distributed computing environments. The architecture is realised using the web services approach. A number of well established technology platforms and generic standards exist that can be used to implement the software components. The software components that are specified in PICNIC are implemented as open source. PMID:16160218

  14. Microstructural architecture developed in the fabrication of solid and open-cellular copper components by additive manufacturing using electron beam melting

    NASA Astrophysics Data System (ADS)

    Ramirez, Diana Alejandra

    Cu components were first fabricated by additive manufacturing using electron beam melting (EBM) from low-purity, atomized Cu powder containing a high density of Cu2O precipitates, leading to a novel example of precipitate-dislocation architecture. These microstructures exhibit cell-like arrays (1-3 μm) in the horizontal reference plane perpendicular to the build direction, with columnar-like arrays extending from ~12 to >60 μm in length and corresponding spatial dimensions of 1-3 μm. These features were characterized by optical metallography and by scanning and transmission electron microscopy. Hardness measurements were taken both on the atomized powder and on the Cu components. The hardness for these architectures ranged from ~HV 83 to 88, in contrast to the original Cu powder microindentation hardness of HV 72 and the commercial Cu base plate hardness of HV 57. These observations were utilized for the fabrication of open-cellular copper structures by additive manufacturing using EBM and illustrated the ability to fabricate some form of controlled microstructural architecture by altering or optimizing EBM parameters. The fabricated structures ranged in density from 0.73 g/cm3 to 6.67 g/cm3 and correspond to four different articulated mesh arrays. While these components contained some porosity as a consequence of some unmelted regions, the Cu2O precipitates also contributed to a reduced density. X-ray diffraction showed the approximate volume fraction to be ~2%. The precipitates created in the EBM melt scan formed microstructural arrays which contributed to hardening, adding to the strength of mesh struts and foam ligaments. Measurements of relative stiffness versus relative density for Cu compared very closely with Ti-6Al-4V open cellular structures - both mesh and foams. The Cu reticulated mesh structures exhibit a slope of n = 2 in contrast to a slope of n = 2

  15. Treatment of Acute Cough Due to the Common Cold: Multi-component, Multi-symptom Therapy is Preferable to Single-Component, Single-Symptom Therapy--A Pro/Con Debate.

    PubMed

    Eccles, Ronald; Turner, Ronald B; Dicpinigaitis, Peter V

    2016-02-01

    Acute viral upper respiratory tract infection, or the common cold, affects essentially every human being, and cough is reported as its most frequent associated symptom. Billions of dollars are spent worldwide annually by individuals seeking relief from this multi-symptom syndrome. Thousands of non-prescription, over-the-counter products are available worldwide, aimed at relieving the various bothersome symptoms induced by the common cold. Differences of opinion exist as to whether optimal therapy for cough associated with the common cold consists of multi-component, multi-symptom cough/cold preparations, or whether single-component medications, aimed at relief of specific symptoms, represent the optimal therapeutic approach. The 5th American Cough Conference, held in Washington, D.C. in June, 2015, provided an ideal forum for discussion and debate of this issue between two internationally recognized experts in the field of the common cold and its treatment. PMID:26420163

  16. Inhibitory and excitatory axon terminals share a common nano-architecture of their Cav2.1 (P/Q-type) Ca2+ channels

    PubMed Central

    Althof, Daniel; Baehrens, David; Watanabe, Masahiko; Suzuki, Noboru; Fakler, Bernd; Kulik, Ákos

    2015-01-01

    Tuning of the time course and strength of inhibitory and excitatory neurotransmitter release is fundamental for the precise operation of cortical network activity and is controlled by Ca2+ influx into presynaptic terminals through the high voltage-activated P/Q-type Ca2+ (Cav2.1) channels. Proper channel-mediated Ca2+-signaling critically depends on the topographical arrangement of the channels in the presynaptic membrane. Here, we used high-resolution SDS-digested freeze-fracture replica immunoelectron microscopy together with automatized computational analysis of Cav2.1 immunogold labeling to determine the precise subcellular organization of Cav2.1 channels in both inhibitory and excitatory terminals. Immunoparticles labeling the pore-forming α1 subunit of Cav2.1 channels were enriched over the active zone of the boutons with the number of channels (3–62) correlated with the area of the synaptic membrane. Detailed analysis showed that Cav2.1 channels are non-uniformly distributed over the presynaptic membrane specialization where they are arranged in clusters of an average five channels per cluster covering a mean area with a diameter of about 70 nm. Importantly, clustered arrangement and cluster properties did not show any significant difference between GABAergic and glutamatergic terminals. Our data demonstrate a common nano-architecture of Cav2.1 channels in inhibitory and excitatory boutons in stratum radiatum of the hippocampal CA1 area suggesting that the cluster arrangement is crucial for the precise release of transmitters from the axonal boutons. PMID:26321916

  17. Inhibitory and excitatory axon terminals share a common nano-architecture of their Cav2.1 (P/Q-type) Ca2+ channels.

    PubMed

    Althof, Daniel; Baehrens, David; Watanabe, Masahiko; Suzuki, Noboru; Fakler, Bernd; Kulik, Ákos

    2015-01-01

    Tuning of the time course and strength of inhibitory and excitatory neurotransmitter release is fundamental for the precise operation of cortical network activity and is controlled by Ca2+ influx into presynaptic terminals through the high voltage-activated P/Q-type Ca2+ (Cav2.1) channels. Proper channel-mediated Ca2+-signaling critically depends on the topographical arrangement of the channels in the presynaptic membrane. Here, we used high-resolution SDS-digested freeze-fracture replica immunoelectron microscopy together with automatized computational analysis of Cav2.1 immunogold labeling to determine the precise subcellular organization of Cav2.1 channels in both inhibitory and excitatory terminals. Immunoparticles labeling the pore-forming α1 subunit of Cav2.1 channels were enriched over the active zone of the boutons with the number of channels (3-62) correlated with the area of the synaptic membrane. Detailed analysis showed that Cav2.1 channels are non-uniformly distributed over the presynaptic membrane specialization where they are arranged in clusters of an average five channels per cluster covering a mean area with a diameter of about 70 nm. Importantly, clustered arrangement and cluster properties did not show any significant difference between GABAergic and glutamatergic terminals. Our data demonstrate a common nano-architecture of Cav2.1 channels in inhibitory and excitatory boutons in stratum radiatum of the hippocampal CA1 area suggesting that the cluster arrangement is crucial for the precise release of transmitters from the axonal boutons. PMID:26321916

  18. Space Telecommunications Radio Systems (STRS) Hardware Architecture Standard: Release 1.0 Hardware Section

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Kacpura, Thomas J.; Smith, Carl R.; Liebetreu, John; Hill, Gary; Mortensen, Dale J.; Andro, Monty; Scardelletti, Maximilian C.; Farrington, Allen

    2008-01-01

    This report defines a hardware architecture approach for software-defined radios to enable commonality among NASA space missions. The architecture accommodates a range of reconfigurable processing technologies including general-purpose processors, digital signal processors, field programmable gate arrays, and application-specific integrated circuits (ASICs) in addition to flexible and tunable radiofrequency front ends to satisfy varying mission requirements. The hardware architecture consists of modules, radio functions, and interfaces. The modules are a logical division of common radio functions that compose a typical communication radio. This report describes the architecture details, the module definitions, the typical functions on each module, and the module interfaces. Tradeoffs between component-based, custom architecture and a functional-based, open architecture are described. The architecture does not specify a physical implementation internally on each module, nor does the architecture mandate the standards or ratings of the hardware used to construct the radios.
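
    The report's division of a radio into modules that compose through defined interfaces could be sketched as follows. The module names and behaviors here are hypothetical stand-ins; a Python rendering is used only to illustrate the pattern, not the STRS standard's actual interface definitions.

```python
# Sketch of a module/interface split for a software-defined radio: every
# module implements one shared interface, so a pipeline can compose any
# mix of front ends and processors without knowing their internals.
from abc import ABC, abstractmethod


class RadioModule(ABC):
    """Common interface every module exposes, regardless of its internals."""
    @abstractmethod
    def process(self, samples):
        ...


class RfFrontEnd(RadioModule):
    """Stand-in for a tunable RF front end (here, just applies gain)."""
    def __init__(self, gain=2.0):
        self.gain = gain

    def process(self, samples):
        return [s * self.gain for s in samples]


class SignalProcessor(RadioModule):
    """Stand-in for a DSP/FPGA stage (here, just quantizes the samples)."""
    def process(self, samples):
        return [round(s) for s in samples]


class RadioPipeline:
    """Composes modules strictly through the shared interface."""
    def __init__(self, *modules):
        self.modules = modules

    def run(self, samples):
        for module in self.modules:
            samples = module.process(samples)
        return samples


radio = RadioPipeline(RfFrontEnd(gain=2.0), SignalProcessor())
print(radio.run([0.4, 1.1]))  # [1, 2]
```

    The point mirrors the report's tradeoff discussion: the interface is fixed, but each module's internal implementation (general-purpose processor, FPGA, ASIC) is left open.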

  19. HL7 document patient record architecture: an XML document architecture based on a shared information model.

    PubMed

    Dolin, R H; Alschuler, L; Behlen, F; Biron, P V; Boyer, S; Essin, D; Harding, L; Lincoln, T; Mattison, J E; Rishel, W; Sokolowski, R; Spinosa, J; Williams, J P

    1999-01-01

    The HL7 SGML/XML Special Interest Group is developing the HL7 Document Patient Record Architecture. This draft proposal strives to create a common data architecture for the interoperability of healthcare documents. Key components are that it is under the umbrella of HL7 standards, it is specified in Extensible Markup Language, the semantics are drawn from the HL7 Reference Information Model, and the document specifications form an architecture that, in aggregate, define the semantics and structural constraints necessary for the exchange of clinical documents. The proposal is a work in progress and has not yet been submitted to HL7's formal balloting process. PMID:10566319
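
    The idea of an XML document architecture with structural constraints can be sketched with Python's standard library. The element names below are invented for illustration and are not the HL7 Patient Record Architecture's actual vocabulary; the `validate` function stands in for the architecture's structural constraints.

```python
# Illustrative only: building and checking a tiny XML clinical document.
# Element names are hypothetical, NOT the real HL7 PRA/CDA schema.
import xml.etree.ElementTree as ET


def build_document(patient_id, note_text):
    """Assemble a minimal XML document with a fixed structure."""
    root = ET.Element("ClinicalDocument")
    ET.SubElement(root, "Patient", id=patient_id)
    body = ET.SubElement(root, "Body")
    ET.SubElement(body, "Note").text = note_text
    return root


def validate(root):
    """Stand-in for architectural constraints: required elements present."""
    return (root.tag == "ClinicalDocument"
            and root.find("Patient") is not None
            and root.find("Body/Note") is not None)


doc = build_document("12345", "Patient presents with cough.")
print(ET.tostring(doc, encoding="unicode"))
print(validate(doc))  # True
```

    The key architectural idea this illustrates is that exchange partners agree on structure (which elements must exist, and where) rather than on any particular application's internal representation.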

  20. Identical mutations of the p53 tumor suppressor gene in the gliomatous and the sarcomatous components of gliosarcomas suggest a common origin from glial cells

    SciTech Connect

    Biernat, W.; Aguzzi, A.; Sure, U.

    1995-09-01

    Gliosarcomas are morphologically heterogeneous tumors of the central nervous system composed of gliomatous and sarcomatous components. The histogenesis of the latter is still a matter of debate. As mutations of the p53 tumor suppressor gene represent an early event in the development of gliomas, we attempted to determine whether both components of gliosarcomas share identical alterations of the p53 gene. Using single-strand conformation analysis (SSCA) and direct DNA sequencing of the p53 gene, we analyzed dissected gliomatous and sarcomatous parts of 12 formalin-fixed, paraffin-embedded gliosarcomas. The two tumors that contained a p53 alteration were found to carry the identical mutation (exon 5; codon 151, CCC → TCC; codon 173, GTG → GTA) in the gliomatous and the sarcomatous components. These findings suggest a common origin of the two cellular components from neoplastic glial cells. 37 refs., 3 figs., 1 tab.

  1. Security Aspects of an Enterprise-Wide Network Architecture.

    ERIC Educational Resources Information Center

    Loew, Robert; Stengel, Ingo; Bleimann, Udo; McDonald, Aidan

    1999-01-01

    Presents an overview of two projects that concern local area networks and the common point between networks as they relate to network security. Discusses security architectures based on firewall components, packet filters, application gateways, security-management components, an intranet solution, user registration by Web form, and requests for…

  2. A Reference Architecture for Space Information Management

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Crichton, Daniel J.; Hughes, J. Steven; Ramirez, Paul M.; Berrios, Daniel C.

    2006-01-01

    We describe a reference architecture for space information management systems that elegantly overcomes the rigid design of common information systems in many domains. The reference architecture consists of a set of flexible, reusable, independent models and software components that function in unison, but remain separately managed entities. The main guiding principle of the reference architecture is to separate the various models of information (e.g., data, metadata, etc.) from implemented system code, allowing each to evolve independently. System modularity, systems interoperability, and dynamic evolution of information system components are the primary benefits of the design of the architecture. The architecture requires the use of information models that are substantially more advanced than those used by the vast majority of information systems. These models are more expressive and can be more easily modularized, distributed and maintained than simpler models, e.g., configuration files and data dictionaries. Our current work focuses on formalizing the architecture within a CCSDS Green Book and evaluating the architecture within the context of the C3I initiative.
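
    The guiding principle, separating declarative information models from the generic code that interprets them, might be sketched as follows. The model contents and function names are assumptions made for illustration; in the architecture described above the models would be far richer and externally managed.

```python
# Sketch of model/code separation: the "model" is declarative data that
# could be loaded from a versioned file and evolved independently, while
# the validation code is generic and knows no particular schema.

# The information model: purely declarative, separately managed.
PRODUCT_MODEL = {
    "name": "observation",
    "fields": {
        "target":    {"type": str, "required": True},
        "timestamp": {"type": str, "required": True},
        "notes":     {"type": str, "required": False},
    },
}


def validate_record(model, record):
    """Generic system code: interprets any model, not one hard-coded schema."""
    errors = []
    for field, spec in model["fields"].items():
        if field not in record:
            if spec["required"]:
                errors.append(f"missing required field: {field}")
        elif not isinstance(record[field], spec["type"]):
            errors.append(f"wrong type for field: {field}")
    return errors


good = {"target": "Europa", "timestamp": "2006-01-01T00:00:00Z"}
bad = {"target": 42}
print(validate_record(PRODUCT_MODEL, good))  # []
print(validate_record(PRODUCT_MODEL, bad))
```

    Adding a field to the model changes no code, which is the independence-of-evolution benefit the abstract claims for the architecture.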

  3. Distinguishing the common components of oil- and water-based metalworking fluids for assessment of cancer incidence risk in autoworkers

    PubMed Central

    Friesen, Melissa C; Costello, Sadie; Thurston, Sally W; Eisen, Ellen A

    2012-01-01

    Background: Metalworking fluids (MWF) — straight, soluble, and synthetic — have overlapping components. We derived constituent-based metrics of polycyclic aromatic hydrocarbons (PAHs), water-based MWF, biocides, and nitrosamines to account for this overlap and examined their relations with cancer incidence. Methods: An autoworkers cohort of 30,000 was followed for cancer incidence. Hazard ratios were estimated for each cancer and cumulative exposure (lagged) to each new metric; soluble MWF contributed variably to several metrics with weight k=0–1. Results: For most cancer sites, the constituent-based metrics resulted in stronger exposure-disease associations than the MWF classes alone. Laryngeal and bladder cancer were most strongly associated with PAH (k=0). Protective effects for stomach and lung cancer were observed with biocide, a component that may be a surrogate for endotoxin. Conclusions: Our findings provide support and clarification of possible etiologies for previous positive associations and provide support for distinguishing exposure from oil- and water-based MWF in epidemiologic studies. PMID:21328414
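
    The constituent-based metric idea, in which soluble MWF contributes with a weight k between 0 and 1, can be sketched as simple arithmetic. The contribution pattern, units, and function name below are illustrative assumptions, not the paper's exact exposure assignment.

```python
# Sketch of a constituent-based cumulative exposure metric: soluble MWF
# contributes with weight k in [0, 1]; the other fluid classes contribute
# fully or not at all depending on which constituent is being modeled.

def cumulative_exposure(straight, soluble, synthetic, k,
                        straight_contributes, synthetic_contributes):
    """Sum yearly exposures into one constituent metric (illustrative)."""
    assert 0.0 <= k <= 1.0
    total = k * sum(soluble)
    if straight_contributes:      # e.g., PAHs derive from oil-based fluid
        total += sum(straight)
    if synthetic_contributes:     # e.g., biocides occur in water-based fluid
        total += sum(synthetic)
    return total


straight = [0.5, 0.5]    # yearly exposures to straight MWF
soluble = [1.0, 1.0]     # yearly exposures to soluble MWF
synthetic = [0.2, 0.2]   # yearly exposures to synthetic MWF

# A PAH-like metric with k = 0: only the oil-based (straight) fluid counts.
print(cumulative_exposure(straight, soluble, synthetic,
                          k=0.0, straight_contributes=True,
                          synthetic_contributes=False))  # 1.0
```

    Varying k between 0 and 1 then traces out the family of metrics the abstract describes, from "soluble MWF excluded" to "soluble MWF counted in full."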

  4. Identifications and limited spectroscopy for Luyten common proper motion stars with probable white dwarf components. I - Pairs brighter than 17th magnitude

    NASA Technical Reports Server (NTRS)

    Oswalt, Terry D.; Hintzen, Paul M.; Luyten, Willem J.

    1988-01-01

    Identifications are provided for 103 bright Luyten common proper motion (CPM) stellar systems with m(pg) less than 17.0 mag containing likely white dwarf (WD) components. New spectral types are presented for 55 components, and spectral types for 51 more are available in the literature. With the CPM systems previously published by Giclas et al. (1978), the Luyten stars provide a uniform sample of nearly 200 pairs or multiples brighter than 17th magnitude. Selection effects biasing the combined samples are discussed; in particular, evidence is presented that fewer than 1 percent of wide WD binaries have been detected.

  5. Phosphorus runoff from a phosphorus deficient soil under common bean (Phaseolus vulgaris L.) and soybean (Glycine max L.) genotypes with contrasting root architecture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Selection of plant materials on the basis of root characteristics is key to improving nutrient and water use efficiency in low-input farming systems. Crop genotypes with superior root architecture can make more efficient use of available soil resources and, through improved growth, may also lower er...

  6. Single-Cell Analysis Reveals that Insulation Maintains Signaling Specificity between Two Yeast MAPK Pathways with Common Components

    PubMed Central

    Patterson, Jesse C.; Klimenko, Evguenia S.; Thorner, Jeremy

    2014-01-01

    Eukaryotic cells use multiple mitogen-activated protein kinase (MAPK) cascades to evoke appropriate responses to external stimuli. In Saccharomyces cerevisiae, the MAPK Fus3 is activated by pheromone-binding G protein-coupled receptors to promote mating, whereas the MAPK Hog1 is activated by hyperosmotic stress to elicit the high osmolarity glycerol (HOG) response. Although these MAPK pathways share several upstream components, exposure to either pheromone or osmolyte alone triggers only the appropriate response. We used fluorescent localization- and transcription-specific reporters to assess activation of these pathways in individual cells on the minute and hour timescale, respectively. Dual activation of these two MAPK pathways occurred over a broad range of stimulant concentrations and temporal regimes in wild-type cells subjected to co-stimulation. Thus, signaling specificity is achieved through an “insulation” mechanism, not a “cross-inhibition” mechanism. Furthermore, we showed that there was a critical period during which Hog1 activity had to occur for proper insulation of the HOG pathway. PMID:20959523

  7. Two Novel AP2/EREBP Transcription Factor Genes TaPARG Have Pleiotropic Functions on Plant Architecture and Yield-Related Traits in Common Wheat

    PubMed Central

    Li, Bo; Li, Qiaoru; Mao, Xinguo; Li, Ang; Wang, Jingyi; Chang, Xiaoping; Hao, Chenyang; Zhang, Xueyong; Jing, Ruilian

    2016-01-01

    AP2/EREBPs play significant roles in plant growth and development. A novel, pleiotropic TaPARG (PLANT ARCHITECTURE-RELATED GENE), a member of the AP2/EREBP transcription factor gene family, and its flanking sequences were isolated in wheat (Triticum aestivum L.). Two TaPARG genes were identified and named as TaPARG-2A and TaPARG-2D. Their amino acid sequences were highly similar especially in the functional domains. TaPARG-2A on chromosome 2A was flanked by markers Xwmc63 and Xgwm372. TaPARG-2D was mapped to chromosome 2D. Subcellular localization revealed that TaPARG-2D was localized in the nucleus. The results of tissue expression pattern, overexpression in rice, association analysis and distinct population verification jointly revealed that TaPARG functions during the entire growth cycle of wheat. Its functions include regulation of plant architecture-related and yield-related traits. Association analysis, geographic distribution and allelic frequencies suggested that favored haplotypes Hap-2A-2 and Hap-2A-3 were selected in Chinese wheat breeding programs. Both favored haplotypes might be caused by a single amino acid substitution (His/Tyr). These results suggest that TaPARG is a regulatory factor in plant growth and development, and that the favored alleles might be useful for improving plant architecture and grain yield of wheat. PMID:27555860

  8. Two Novel AP2/EREBP Transcription Factor Genes TaPARG Have Pleiotropic Functions on Plant Architecture and Yield-Related Traits in Common Wheat.

    PubMed

    Li, Bo; Li, Qiaoru; Mao, Xinguo; Li, Ang; Wang, Jingyi; Chang, Xiaoping; Hao, Chenyang; Zhang, Xueyong; Jing, Ruilian

    2016-01-01

    AP2/EREBPs play significant roles in plant growth and development. A novel, pleiotropic TaPARG (PLANT ARCHITECTURE-RELATED GENE), a member of the AP2/EREBP transcription factor gene family, and its flanking sequences were isolated in wheat (Triticum aestivum L.). Two TaPARG genes were identified and named as TaPARG-2A and TaPARG-2D. Their amino acid sequences were highly similar especially in the functional domains. TaPARG-2A on chromosome 2A was flanked by markers Xwmc63 and Xgwm372. TaPARG-2D was mapped to chromosome 2D. Subcellular localization revealed that TaPARG-2D was localized in the nucleus. The results of tissue expression pattern, overexpression in rice, association analysis and distinct population verification jointly revealed that TaPARG functions during the entire growth cycle of wheat. Its functions include regulation of plant architecture-related and yield-related traits. Association analysis, geographic distribution and allelic frequencies suggested that favored haplotypes Hap-2A-2 and Hap-2A-3 were selected in Chinese wheat breeding programs. Both favored haplotypes might be caused by a single amino acid substitution (His/Tyr). These results suggest that TaPARG is a regulatory factor in plant growth and development, and that the favored alleles might be useful for improving plant architecture and grain yield of wheat. PMID:27555860

  9. A Tool for Managing Software Architecture Knowledge

    SciTech Connect

    Babar, Muhammad A.; Gorton, Ian

    2007-08-01

This paper describes a tool for managing architectural knowledge and rationale. The tool has been developed to support a framework for capturing and using architectural knowledge to improve the architecture process. This paper describes the main architectural components and features of the tool. The paper also provides examples of using the tool for supporting well-known architecture design and analysis methods.

  10. Robotic Intelligence Kernel: Architecture

    Energy Science and Technology Software Center (ESTSC)

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  11. Reference Avionics Architecture for Lunar Surface Systems

    NASA Technical Reports Server (NTRS)

    Somervill, Kevin M.; Lapin, Jonathan C.; Schmidt, Oron L.

    2010-01-01

Developing and delivering infrastructure capable of supporting long-term manned operations to the lunar surface has been a primary objective of the Constellation Program in the Exploration Systems Mission Directorate. Several concepts have been developed related to the development and deployment of lunar exploration vehicles and assets that provide critical functionality such as transportation, habitation, and communication, to name a few. Together, these systems perform complex safety-critical functions, largely dependent on avionics for control and behavior of system functions. These functions are implemented using interchangeable, modular avionics designed for lunar transit and lunar surface deployment. Systems are optimized towards reuse and commonality of form and interface and can be configured via software or component integration for special-purpose applications. There are two core concepts in the reference avionics architecture described in this report. The first concept uses distributed, smart systems to manage complexity, simplify integration, and facilitate commonality. The second core concept is to employ extensive commonality between elements and subsystems. These two concepts are used in the context of developing reference designs for many lunar surface exploration vehicles and elements, and they recur as architectural patterns in a conceptual architectural framework. This report describes the use of these architectural patterns in a reference avionics architecture for lunar surface systems elements.

  12. Particulate matter components and subclinical atherosclerosis: common approaches to estimating exposure in a Multi-Ethnic Study of Atherosclerosis cross-sectional study

    PubMed Central

    2013-01-01

    Background Concentrations of outdoor fine particulate matter (PM2.5) have been associated with cardiovascular disease. PM2.5 chemical composition may be responsible for effects of exposure to PM2.5. Methods Using data from the Multi-Ethnic Study of Atherosclerosis (MESA) collected in 2000–2002 on 6,256 US adults without clinical cardiovascular disease in six U.S. metropolitan areas, we investigated cross-sectional associations of estimated long-term exposure to total PM2.5 mass and PM2.5 components (elemental carbon [EC], organic carbon [OC], silicon and sulfur) with measures of subclinical atherosclerosis (coronary artery calcium [CAC] and right common carotid intima-media thickness [CIMT]). Community monitors deployed for this study from 2007 to 2008 were used to estimate exposures at baseline addresses using three commonly-used approaches: (1) nearest monitor (the primary approach), (2) inverse-distance monitor weighting and (3) city-wide average. Results Using the exposure estimate based on nearest monitor, in single-pollutant models, increased OC (effect estimate [95% CI] per IQR: 35.1 μm [26.8, 43.3]), EC (9.6 μm [3.6,15.7]), sulfur (22.7 μm [15.0,30.4]) and total PM2.5 (14.7 μm [9.0,20.5]) but not silicon (5.2 μm [−9.8,20.1]), were associated with increased CIMT; in two-pollutant models, only the association with OC was robust to control for the other pollutants. Findings were generally consistent across the three exposure estimation approaches. None of the PM measures were positively associated with either the presence or extent of CAC. In sensitivity analyses, effect estimates for OC and silicon were particularly sensitive to control for metropolitan area. Conclusion Employing commonly-used exposure estimation approaches, all of the PM2.5 components considered, except silicon, were associated with increased CIMT, with the evidence being strongest for OC; no component was associated with increased CAC. PM2.5 chemical components, or other features

  13. Sensor Open System Architecture (SOSA)

    NASA Astrophysics Data System (ADS)

    Collier, Charles P.; Lipkin, Ilya; Davidson, Steven A.; Dirner, Jason

    2016-05-01

    The Sensor Open System Architecture (SOSA) is a C4ISR-focused technical and economic collaborative effort between the Air Force, Navy, Army, the Department of Defense (DoD), Industry, and other Governmental agencies to develop (and incorporate) technical Open Systems Architecture standards in order to maximize C4ISR sub-system, system, and platform affordability, re-configurability, overall performance, and hardware/software/firmware re-use. The SOSA effort will effectively create an operational and technical framework for the integration of disparate payloads into C4ISR systems; with a focus on the development of a functional decomposition for common multi-purpose backbone architecture for radar, EO/IR, SIGINT, EW, and communications modalities. SOSA addresses hardware, software, and mechanical/electrical interfaces. The functional decomposition will produce a set of re-useable components, interfaces, and sub-systems that engender re-usable capabilities. This, in effect, creates a realistic and affordable ecosystem enabling mission effectiveness through systematic re-use of all available re-composed hardware, software, and electrical/mechanical base components and interfaces.

  14. The Software Architecture of Global Climate Models

    NASA Astrophysics Data System (ADS)

    Alexander, K. A.; Easterbrook, S. M.

    2011-12-01

    It has become common to compare and contrast the output of multiple global climate models (GCMs), such as in the Climate Model Intercomparison Project Phase 5 (CMIP5). However, intercomparisons of the software architecture of GCMs are almost nonexistent. In this qualitative study of seven GCMs from Canada, the United States, and Europe, we attempt to fill this gap in research. We describe the various representations of the climate system as computer programs, and account for architectural differences between models. Most GCMs now practice component-based software engineering, where Earth system components (such as the atmosphere or land surface) are present as highly encapsulated sub-models. This architecture facilitates a mix-and-match approach to climate modelling that allows for convenient sharing of model components between institutions, but it also leads to difficulty when choosing where to draw the lines between systems that are not encapsulated in the real world, such as sea ice. We also examine different styles of couplers in GCMs, which manage interaction and data flow between components. Finally, we pay particular attention to the varying levels of complexity in GCMs, both between and within models. Many GCMs have some components that are significantly more complex than others, a phenomenon which can be explained by the respective institution's research goals as well as the origin of the model components. In conclusion, although some features of software architecture have been adopted by every GCM we examined, other features show a wide range of different design choices and strategies. These architectural differences may provide new insights into variability and spread between models.
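
    The component-and-coupler structure this abstract describes can be sketched in miniature. The following is an illustrative toy, not code from any actual GCM: the components, field names, and relaxation constants are all hypothetical, chosen only to show encapsulated sub-models that exchange boundary fields exclusively through a coupler.

    ```python
    # Toy component-based "climate model": two encapsulated sub-models that
    # never call each other directly; a coupler owns all data exchange.

    class Atmosphere:
        def __init__(self):
            self.surface_temp = 288.0          # K (hypothetical initial state)

        def step(self, sea_surface_temp):
            # Relax toward the ocean's sea-surface temperature (toy physics).
            self.surface_temp += 0.1 * (sea_surface_temp - self.surface_temp)

    class Ocean:
        def __init__(self):
            self.sea_surface_temp = 290.0      # K (hypothetical initial state)

        def step(self, surface_temp):
            self.sea_surface_temp += 0.05 * (surface_temp - self.sea_surface_temp)

    class Coupler:
        """Manages all interaction and data flow between components."""
        def __init__(self, atmos, ocean):
            self.atmos, self.ocean = atmos, ocean

        def run(self, nsteps):
            for _ in range(nsteps):
                # Exchange the previous step's fields, then advance both
                # components synchronously.
                sst = self.ocean.sea_surface_temp
                ts = self.atmos.surface_temp
                self.atmos.step(sst)
                self.ocean.step(ts)
            return self.atmos.surface_temp, self.ocean.sea_surface_temp

    ta, to = Coupler(Atmosphere(), Ocean()).run(100)
    ```

    Because each sub-model hides its state behind `step`, either component could be swapped for a more complex implementation without touching the other, which is the mix-and-match property the abstract attributes to modern GCMs.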

  15. Modular avionic architectures

    NASA Astrophysics Data System (ADS)

    Trujillo, Edward

The author presents an analysis revealing some of the salient features of modular avionics. A decomposition of the modular avionics concept is performed, highlighting some of the key features of such architectures. Several layers of architecture can be found in such concepts, including those relating to software structure, communication, and supportability. Particular emphasis is placed on the layer relating to partitioning, which gives rise to the features of integration, modularity, and commonality: integration is the sharing of common tasks or items to gain efficiency and flexibility; modularity is the partitioning of a system into reconfigurable and maintainable items; and commonality is partitioning to maximize the use of identical items across the range of applications. Two architectures, MASA (Modular Avionics System Architecture) and Pave Pillar, are considered in particular.

  16. Compositional Specification of Software Architecture

    NASA Technical Reports Server (NTRS)

    Penix, John; Lau, Sonie (Technical Monitor)

    1998-01-01

This paper describes our experience using parameterized algebraic specifications to model properties of software architectures. The goal is to model the decomposition of requirements independent of the style used to implement the architecture. We begin by providing an overview of the role of architecture specification in software development. We then describe how architecture specifications are built up from component and connector specifications and give an overview of insights gained from a case study used to validate the method.

  17. Extracellular Matrix Remodeling: The Common Denominator in Connective Tissue Diseases: Possibilities for Evaluation and Current Understanding of the Matrix as More Than a Passive Architecture, but a Key Player in Tissue Failure

    PubMed Central

    Nielsen, Mette J.; Sand, Jannie M.; Henriksen, Kim; Genovese, Federica; Bay-Jensen, Anne-Christine; Smith, Victoria; Adamkewicz, Joanne I.; Christiansen, Claus; Leeming, Diana J.

    2013-01-01

    Increased attention is paid to the structural components of tissues. These components are mostly collagens and various proteoglycans. Emerging evidence suggests that altered components and noncoded modifications of the matrix may be both initiators and drivers of disease, exemplified by excessive tissue remodeling leading to tissue stiffness, as well as by changes in the signaling potential of both intact matrix and fragments thereof. Although tissue structure until recently was viewed as a simple architecture anchoring cells and proteins, this complex grid may contain essential information enabling the maintenance of the structure and normal functioning of tissue. The aims of this review are to (1) discuss the structural components of the matrix and the relevance of their mutations to the pathology of diseases such as fibrosis and cancer, (2) introduce the possibility that post-translational modifications (PTMs), such as protease cleavage, citrullination, cross-linking, nitrosylation, glycosylation, and isomerization, generated during pathology, may be unique, disease-specific biochemical markers, (3) list and review the range of simple enzyme-linked immunosorbent assays (ELISAs) that have been developed for assessing the extracellular matrix (ECM) and detecting abnormal ECM remodeling, and (4) discuss whether some PTMs are the cause or consequence of disease. New evidence clearly suggests that the ECM at some point in the pathogenesis becomes a driver of disease. These pathological modified ECM proteins may allow insights into complicated pathologies in which the end stage is excessive tissue remodeling, and provide unique and more pathology-specific biochemical markers. PMID:23046407

  18. The NASA Integrated Information Technology Architecture

    NASA Technical Reports Server (NTRS)

    Baldridge, Tim

    1997-01-01

    of IT systems, 3) the Technical Architecture: a common, vendor-independent framework for design, integration and implementation of IT systems and 4) the Product Architecture: vendor-specific IT solutions. The Systems Architecture is effectively a description of the end-user "requirements". Generalized end-user requirements are discussed and subsequently organized into specific mission and project functions. The Technical Architecture depicts the framework, and relationship, of the specific IT components that enable the end-user functionality as described in the Systems Architecture. The primary components as described in the Technical Architecture are: 1) Applications: Basic Client Component, Object Creation Applications, Collaborative Applications, Object Analysis Applications, 2) Services: Messaging, Information Broker, Collaboration, Distributed Processing, and 3) Infrastructure: Network, Security, Directory, Certificate Management, Enterprise Management and File System. This Architecture also provides specific Implementation Recommendations, the most significant of which is the recognition of IT as core to NASA activities, and defines a plan, aligned with the NASA strategic planning processes, for keeping the Architecture alive and useful.

  19. Towards a Domain Specific Software Architecture for Scientific Data Distribution

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D. M.

    2011-12-01

    A reference architecture is a "design that satisfies a clearly distinguished subset of the functional capabilities identified in the reference requirements within the boundaries of certain design and implementation constraints, also identified in reference requirements." [Tracz, 1995] Recognizing the value of a reference architecture, NASA's ESDSWG's Standards Process Group (SPG) is introducing a multi-disciplinary science data systems (SDS) reference architecture in order to provide an implementation neutral, template solution for an architecture to support scientific data systems in general [Burnett, et al, 2011]. This reference architecture describes common features and patterns in scientific data systems, and can thus provide guidelines in building and improving such systems. But, guidelines alone may not be sufficient to actually build a system. A domain specific software architecture (DSSA) is "an assemblage of software components, specialized for a particular type of task (domain), generalized for effective use across that domain, composed in a standardized structure (topology) effective for building successful applications." [Tracz, 1995]. It can be thought of as a relatively specific reference architecture. The "DSSA Process" is a software life cycle developed at Carnegie Mellon's Software Engineering Institute that is based on the development and use of domain-specific software architectures, components, and tools. The process has four distinct activities: 1) develop a domain specific base/model, 2) populate and maintain the library, 3) build applications, 4) operate and maintain applications [Armitage, 1993]. The DSSA process may provide the missing link between guidelines and actual system construction. In this presentation we focus specifically on the realm of scientific data access and distribution. Assuming the role of domain experts in building data access systems, we report the results of creating a DSSA for scientific data distribution.
We describe

  20. Microcomponent sheet architecture

    DOEpatents

    Wegeng, R.S.; Drost, M.K.; McDonald, C.E.

    1997-03-18

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents performs at least one unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation. 14 figs.

  1. Microcomponent sheet architecture

    DOEpatents

    Wegeng, Robert S.; Drost, M. Kevin; McDonald, Carolyn E.

    1997-01-01

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents performs at least one unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation.

  2. Architecture & Environment

    ERIC Educational Resources Information Center

    Erickson, Mary; Delahunt, Michael

    2010-01-01

    Most art teachers would agree that architecture is an important form of visual art, but they do not always include it in their curriculums. In this article, the authors share core ideas from "Architecture and Environment," a teaching resource that they developed out of a long-term interest in teaching architecture and their fascination with the…

  3. Genetic architecture of type 2 diabetes.

    PubMed

    Hara, Kazuo; Shojima, Nobuhiro; Hosoe, Jun; Kadowaki, Takashi

    2014-09-19

    Genome-wide association studies (GWAS) have identified over 70 loci associated with type 2 diabetes (T2D). Most genetic variants associated with T2D are common variants with modest effects on T2D and are shared with major ancestry groups. To what extent the genetic component of T2D can be explained by common variants relies upon the shape of the genetic architecture of T2D. Fine mapping utilizing populations with different patterns of linkage disequilibrium and functional annotation derived from experiments in relevant tissues are mandatory to track down causal variants responsible for the pathogenesis of T2D. PMID:25111817

  4. Numerical Propulsion System Simulation Architecture

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia G.

    2004-01-01

    The Numerical Propulsion System Simulation (NPSS) is a framework for performing analysis of complex systems. Because the NPSS was developed using the object-oriented paradigm, the resulting architecture is an extensible and flexible framework that is currently being used by a diverse set of participants in government, academia, and the aerospace industry. NPSS is being used by over 15 different institutions to support rockets, hypersonics, power and propulsion, fuel cells, ground based power, and aerospace. Full system-level simulations as well as subsystems may be modeled using NPSS. The NPSS architecture enables the coupling of analyses at various levels of detail, which is called numerical zooming. The middleware used to enable zooming and distributed simulations is the Common Object Request Broker Architecture (CORBA). The NPSS Developer's Kit offers tools for the developer to generate CORBA-based components and wrap codes. The Developer's Kit enables distributed multi-fidelity and multi-discipline simulations, preserves proprietary and legacy codes, and facilitates addition of customized codes. The platforms supported are PC, Linux, HP, Sun, and SGI.
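
    The "numerical zooming" idea, coupling analyses at different levels of detail behind a common interface, can be sketched roughly as below. This is an illustrative stand-in, not NPSS code: the class names and formulas are invented, and NPSS itself wires such components together over CORBA rather than in-process.

    ```python
    # Hypothetical multi-fidelity component pair sharing one interface, so the
    # system-level simulation can "zoom" a component without other changes.

    class CompressorModel:
        def pressure_ratio(self, speed):
            raise NotImplementedError

    class LowFidelityCompressor(CompressorModel):
        """Cheap 0-D stand-in: a linear fit (invented coefficients)."""
        def pressure_ratio(self, speed):
            return 1.0 + 0.5 * speed

    class HighFidelityCompressor(CompressorModel):
        """Stand-in for a detailed (e.g. CFD-backed) analysis of the same part."""
        def pressure_ratio(self, speed):
            return 1.0 + 0.5 * speed + 0.02 * speed ** 2

    def engine_thrust(compressor, speed):
        # The system-level model sees only the CompressorModel interface,
        # so either fidelity level can be plugged in here.
        return 100.0 * compressor.pressure_ratio(speed)

    thrust_low = engine_thrust(LowFidelityCompressor(), 2.0)    # exactly 200.0
    thrust_high = engine_thrust(HighFidelityCompressor(), 2.0)  # slightly higher
    ```

    The design choice being illustrated is that zooming is an interface property: as long as both fidelity levels satisfy the same contract, swapping them is a configuration decision rather than a code change.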

  5. Project Integration Architecture: Application Architecture

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, the ability to transport information between those applications is enabled.

  6. The EPOS ICT Architecture

    NASA Astrophysics Data System (ADS)

    Jeffery, Keith; Harrison, Matt; Bailo, Daniele

    2016-04-01

    The EPOS-PP Project 2010-2014 proposed an architecture and demonstrated feasibility with a prototype. Requirements based on use cases were collected and an inventory of assets (e.g. datasets, software, users, computing resources, equipment/detectors, laboratory services) (RIDE) was developed. The architecture evolved through three stages of refinement with much consultation both with the EPOS community representing EPOS users and participants in geoscience and with the overall ICT community especially those working on research such as the RDA (Research Data Alliance) community. The architecture consists of a central ICS (Integrated Core Services) consisting of a portal and catalog, the latter providing to end-users a 'map' of all EPOS resources (datasets, software, users, computing, equipment/detectors etc.). ICS is extended to ICS-d (distributed ICS) for certain services (such as visualisation software services or Cloud computing resources) and CES (Computational Earth Science) for specific simulation or analytical processing. ICS also communicates with TCS (Thematic Core Services) which represent European-wide portals to national and local assets, resources and services in the various specific domains (e.g. seismology, volcanology, geodesy) of EPOS. The EPOS-IP project 2015-2019 started October 2015. Two work-packages cover the ICT aspects; WP6 involves interaction with the TCS while WP7 concentrates on ICS including interoperation with ICS-d and CES offerings: in short the ICT architecture. Based on the experience and results of EPOS-PP the ICT team held a pre-meeting in July 2015 and set out a project plan. The first major activity involved requirements (re-)collection with use cases and also updating the inventory of assets held by the various TCS in EPOS. The RIDE database of assets is currently being converted to CERIF (Common European Research Information Format - an EU Recommendation to Member States) to provide the basis for the EPOS-IP ICS Catalog. In

  7. Flexible weapons architecture design

    NASA Astrophysics Data System (ADS)

    Pyant, William C., III

    Present-day air-delivered weapons are of a closed architecture, with little to no ability to tailor the weapon for the individual engagement. The closed architectures require weaponeers to make the target fit the weapon instead of fitting the individual weapons to a target. The concept of flexible weapons aims to modularize weapons design using an open architecture shell into which different modules are inserted to achieve the desired target fractional damage while reducing cost and civilian casualties. This thesis shows that the architecture design factors of damage mechanism, fusing, weapons weight, guidance, and propulsion are significant in enhancing weapon performance objectives, and would benefit from modularization. Additionally, this thesis constructs an algorithm that can be used to design a weapon set for a particular target class based on these modular components.

  8. Green Architecture

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Ho

    Today, with global warming, the environment has become a central subject in many scientific disciplines and in industrial development. This paper presents an analysis of the trends in Green Architecture in France along three axes: Regulations and Approaches for Sustainable Architecture (Certificates and Standards), Renewable Materials (Green Materials), and Strategies (Equipment) of Sustainable Technology. The definition of 'Green Architecture' is given in the introduction, and the question of interdisciplinarity in the technological development of 'Green Architecture' is raised in the conclusion.

  9. Standardization and program effect analysis (Study 2.4). Volume 2: Equipment commonality analysis. [cost savings of using flight-proven components in designing spacecraft

    NASA Technical Reports Server (NTRS)

    Shiokari, T.

    1975-01-01

    The feasibility and cost savings of using flight-proven components in designing spacecraft were investigated. The components analyzed were (1) large space telescope, (2) stratospheric aerosol and gas equipment, (3) mapping mission, (4) solar maximum mission, and (5) Tiros-N. It is concluded that flight-proven hardware can be used with not-too-extensive modification, and significant savings can be realized. The cost savings for each component are presented.

  10. Avionics System Architecture Tool

    NASA Technical Reports Server (NTRS)

    Chau, Savio; Hall, Ronald; Traylor, Marcus; Whitfield, Adrian

    2005-01-01

    Avionics System Architecture Tool (ASAT) is a computer program intended for use during the avionics-system-architecture- design phase of the process of designing a spacecraft for a specific mission. ASAT enables simulation of the dynamics of the command-and-data-handling functions of the spacecraft avionics in the scenarios in which the spacecraft is expected to operate. ASAT is built upon I-Logix Statemate MAGNUM, providing a complement of dynamic system modeling tools, including a graphical user interface (GUI), model-checking capabilities, and a simulation engine. ASAT augments this with a library of predefined avionics components and additional software to support building and analyzing avionics hardware architectures using these components.

  11. A Hybrid Model for Research on Subjective Well-Being: Examining Common- and Component-Specific Sources of Variance in Life Satisfaction, Positive Affect, and Negative Affect

    ERIC Educational Resources Information Center

    Busseri, Michael; Sadava, Stanley; DeCourville, Nancy

    2007-01-01

    The primary components of subjective well-being (SWB) include life satisfaction (LS), positive affect (PA), and negative affect (NA). There is little consensus, however, concerning how these components form a model of SWB. In this paper, six longitudinal studies varying in demographic characteristics, length of time between assessment periods,…

  12. Project Integration Architecture: Architectural Overview

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2001-01-01

    The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. By being a single, self-revealing architecture, the ability to develop single tools, for example a single graphical user interface, to span all applications is enabled. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, the ability to transport information between those applications becomes possible. Object encapsulation further allows information to become, in a sense, self-aware, knowing things such as its own dimensionality and providing functionality appropriate to its kind.

  13. Scientific Software Component Technology

    SciTech Connect

    Kohn, S.; Dykman, N.; Kumfert, G.; Smolinski, B.

    2000-02-16

    We are developing new software component technology for high-performance parallel scientific computing to address issues of complexity, re-use, and interoperability for laboratory software. Component technology enables cross-project code re-use, reduces software development costs, and provides additional simulation capabilities for massively parallel laboratory application codes. The success of our approach will be measured by its impact on DOE mathematical and scientific software efforts. Thus, we are collaborating closely with library developers and application scientists in the Common Component Architecture forum, the Equation Solver Interface forum, and other DOE mathematical software groups to gather requirements, write and adopt a variety of design specifications, and develop demonstration projects to validate our approach. Numerical simulation is essential to the science mission at the laboratory. However, it is becoming increasingly difficult to manage the complexity of modern simulation software. Computational scientists develop complex, three-dimensional, massively parallel, full-physics simulations that require the integration of diverse software packages written by outside development teams. Currently, the integration of a new software package, such as a new linear solver library, can require several months of effort. Current industry component technologies such as CORBA, JavaBeans, and COM have all been used successfully in the business domain to reduce software development costs and increase software quality. However, these existing industry component infrastructures will not scale to support massively parallel applications in science and engineering. In particular, they do not address issues related to high-performance parallel computing on ASCI-class machines, such as fast in-process connections between components, language interoperability for scientific languages such as Fortran, parallel data redistribution between components, and massively
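
    The provides/uses-port idiom that component models such as the Common Component Architecture build on can be sketched as follows. This is a hedged illustration in plain Python, not the actual CCA or Babel API; the port and component names are hypothetical, and a real framework would perform the wiring from a configuration rather than by hand.

    ```python
    # Minimal sketch of provides/uses ports: components interact only through
    # abstract port interfaces, and a framework connects them at assembly time.

    class SolverPort:
        """An abstract port: the contract both sides agree on."""
        def solve(self, b):
            raise NotImplementedError

    class DiagonalSolver(SolverPort):
        """A component that *provides* a solver port (trivial diagonal solve)."""
        def __init__(self, diag):
            self.diag = diag
        def solve(self, b):
            return [bi / d for bi, d in zip(b, self.diag)]

    class Simulation:
        """A component that *uses* a solver port; it never names the provider."""
        def __init__(self):
            self.solver = None              # filled in during wiring
        def connect(self, port_name, port):
            if port_name == "solver":
                self.solver = port
        def run(self, rhs):
            return self.solver.solve(rhs)

    # Hand-written stand-in for the framework's assembly step:
    sim = Simulation()
    sim.connect("solver", DiagonalSolver([2.0, 4.0]))
    print(sim.run([2.0, 8.0]))   # -> [1.0, 2.0]
    ```

    Because `Simulation` depends only on the `SolverPort` contract, a different solver library could be connected without any change to the simulation component, which is the integration cost the abstract is aiming to reduce.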

  14. Intelligent Agent Architectures: Reactive Planning Testbed

    NASA Technical Reports Server (NTRS)

    Rosenschein, Stanley J.; Kahn, Philip

    1993-01-01

    An Integrated Agent Architecture (IAA) is a framework or paradigm for constructing intelligent agents. Intelligent agents are collections of sensors, computers, and effectors that interact with their environments in real time in goal-directed ways. Because of the complexity involved in designing intelligent agents, it has been found useful to approach the construction of agents with some organizing principle, theory, or paradigm that gives shape to the agent's components and structures their relationships. Given the wide variety of approaches being taken in the field, the question naturally arises: Is there a way to compare and evaluate these approaches? The purpose of the present work is to develop common benchmark tasks and evaluation metrics to which intelligent agents, including complex robotic agents, constructed using various architectural approaches can be subjected.

  15. Lunar architecture and urbanism

    NASA Technical Reports Server (NTRS)

    Sherwood, Brent

    1992-01-01

    Human civilization and architecture have defined each other for over 5000 years on Earth. Even in the novel environment of space, persistent issues of human urbanism will eclipse, within a historically short time, the technical challenges of space settlement that dominate our current view. By adding modern topics in space engineering, planetology, life support, human factors, material invention, and conservation to their already renaissance array of expertise, urban designers can responsibly apply ancient, proven standards to the exciting new opportunities afforded by space. Inescapable facts about the Moon set real boundaries within which tenable lunar urbanism and its component architecture must eventually develop.

  16. Embedded instrumentation systems architecture

    NASA Astrophysics Data System (ADS)

    Visnevski, Nikita A.

    2007-04-01

    This paper describes the operational concept of the Embedded Instrumentation Systems Architecture (EISA) that is being developed for Test and Evaluation (T&E) applications. The architecture addresses such future T&E requirements as interoperability, flexibility, and non-intrusiveness. These are the ultimate requirements that support continuous T&E objectives. In this paper, we demonstrate that these objectives can be met by decoupling the Embedded Instrumentation (EI) system into an on-board and an off-board component. An on-board component is responsible for sampling, pre-processing, buffering, and transmitting data to the off-board component. The latter is responsible for aggregating, post-processing, and storing test data as well as providing access to the data via a clearly defined interface including such aspects as security, user authentication and access control. The power of the EISA architecture approach is in its inherent ability to support virtual instrumentation as well as enabling interoperability with such important T&E systems as Integrated Network-Enhanced Telemetry (iNET), Test and Training Enabling Architecture (TENA) and other relevant Department of Defense initiatives.
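The on-board/off-board decoupling described in this record can be sketched as two cooperating objects. Every name and the toy processing and authentication steps below are assumptions for illustration, not the actual EISA interfaces.

```python
# Hypothetical sketch of the EISA on-board / off-board split. The real
# architecture's interfaces are not given in the abstract; all names and
# processing steps below are invented for illustration.
from collections import deque

class OnBoard:
    """Samples, pre-processes, buffers, and transmits test data."""
    def __init__(self, link):
        self.buffer = deque()
        self.link = link
    def sample(self, raw):
        processed = raw * 2            # stand-in for pre-processing
        self.buffer.append(processed)  # buffering
    def transmit(self):
        while self.buffer:
            self.link.receive(self.buffer.popleft())

class OffBoard:
    """Aggregates and stores data behind an access-controlled interface."""
    def __init__(self):
        self.store = []
    def receive(self, datum):
        self.store.append(datum)       # aggregation / storage
    def query(self, user):
        if user != "authorized":       # stand-in for authentication
            raise PermissionError(user)
        return list(self.store)

off = OffBoard()
on = OnBoard(off)
for reading in (1, 2, 3):
    on.sample(reading)
on.transmit()
print(off.query("authorized"))  # -> [2, 4, 6]
```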

  17. Common Avionics Display Processor (CADP)

    NASA Astrophysics Data System (ADS)

    Farley, Paul E.

    1995-06-01

    The 1970s saw the start of a trend towards integrated digital avionics. In the 1980s, the Air Force's Pave Pillar initiative defined centralized digital processing as the cost-effective approach to tactical avionics. The avionics systems of the two advanced aircraft presently under development, a fixed-wing tactical fighter and an armed scout/reconnaissance helicopter, were based on this architecture. Both platforms relied upon custom, single-purpose hardware and software to generate images for their advanced multifunctional flat panel cockpit displays. The technology to generate real-time synthetic images with common data and signal processors was not available during the development of the platforms. Harris IR&D investigations have focused on an approach that Harris GASD has named the Common Avionics Display Processor (CADP). This programmable device can generate sophisticated images or perform sensor image manipulation and processing. The Common Avionics Display Processor is a general-purpose image synthesizer. It consists of software and hardware components configured at run time by a downloaded program. The CADP offers two advantages over custom, special-purpose devices. First, it solves a class of problems, not a single one. It can generate many types of images, from alphanumeric to sensor simulation. Only one module type is required for any of these functions. Second, as program schedules become shorter, traditional hardware design time becomes the delivery-limiting task. Because both the software and hardware components are programmable at run time, the CADP can adapt to changing requirements without redesign.

  18. Splicing remodels messenger ribonucleoprotein architecture via eIF4A3-dependent and -independent recruitment of exon junction complex components.

    PubMed

    Zhang, Zuo; Krainer, Adrian R

    2007-07-10

    Pre-mRNA splicing not only removes introns and joins exons to generate spliced mRNA but also results in remodeling of the spliced messenger ribonucleoprotein, influencing various downstream events. This remodeling includes the loading of an exon-exon junction complex (EJC). It is unclear how the spliceosome recruits the EJC onto the mRNA and whether EJC formation or EJC components are required for pre-mRNA splicing. Here we immunodepleted the EJC core component eIF4A3 from HeLa cell nuclear extract and found that eIF4A3 is dispensable for pre-mRNA splicing in vitro. However, eIF4A3 is required for the splicing-dependent loading of the Y14/Magoh heterodimer onto mRNA, and this activity of human eIF4A3 is also present in the Drosophila ortholog. Surprisingly, the loading of six other EJC components was not affected by eIF4A3 depletion, suggesting that their binding to mRNA involves different or redundant pathways. Finally, we found that the assembly of the EJC onto mRNA occurs at the late stages of the splicing reaction and requires the second-step splicing and mRNA-release factor HRH1/hPrp22. The EJC-dependent and -independent recruitment of RNA-binding proteins onto mRNA suggests a role for the EJC in messenger ribonucleoprotein remodeling involving interactions with other proteins already bound to the pre-mRNA, which has implications for nonsense-mediated mRNA decay and other mRNA transactions. PMID:17606899

  19. Teaching Case: Enterprise Architecture Specification Case Study

    ERIC Educational Resources Information Center

    Steenkamp, Annette Lerine; Alawdah, Amal; Almasri, Osama; Gai, Keke; Khattab, Nidal; Swaby, Carval; Abaas, Ramy

    2013-01-01

    A graduate course in enterprise architecture had a team project component in which a real-world business case, provided by an industry sponsor, formed the basis of the project charter and the architecture statement of work. The paper aims to share the team project experience on developing the architecture specifications based on the business case…

  20. Global ocean biogeochemistry model HAMOCC: Model architecture and performance as component of the MPI-Earth system model in different CMIP5 experimental realizations

    NASA Astrophysics Data System (ADS)

    Ilyina, Tatiana; Six, Katharina D.; Segschneider, Joachim; Maier-Reimer, Ernst; Li, Hongmei; NúñEz-Riboni, Ismael

    2013-06-01

    Ocean biogeochemistry is a novel standard component of the fifth phase of the Coupled Model Intercomparison Project (CMIP5) experiments, which project future climate change caused by anthropogenic emissions of greenhouse gases. Of particular interest here is the evolution of the oceanic sink of carbon and the oceanic contribution to the climate-carbon cycle feedback loop. The Hamburg ocean carbon cycle model (HAMOCC), a component of the Max Planck Institute for Meteorology Earth system model (MPI-ESM), is employed to address these challenges. In this paper we describe the version of HAMOCC used in the CMIP5 experiments (HAMOCC 5.2) and its implementation in the MPI-ESM to provide a documentation and basis for future CMIP5-related studies. Modeled present-day distributions of biogeochemical variables calculated in two different horizontal resolutions compare fairly well with observations. Statistical metrics indicate that the model performs better at the ocean surface and worse in the ocean interior. There is a tendency for improvements in the higher resolution model configuration in representing deeper ocean variables; however, there is little to no improvement at the ocean surface. An experiment with an interactive carbon cycle driven by emissions of CO2 produces a 25% higher variability in the oceanic carbon uptake over the historical period than the same model forced by prescribed atmospheric CO2 concentrations. Furthermore, a climate warming of 3.5 K, projected at an atmospheric CO2 concentration of four times the preindustrial value, reduces the atmosphere-ocean CO2 flux by 1 GtC yr-1. Overall, the model shows consistent results in different configurations, being suitable for the type of simulations required within the CMIP5 experimental design.

  1. Predicate calculus for an architecture of multiple neural networks

    NASA Astrophysics Data System (ADS)

    Consoli, Robert H.

    1990-08-01

    Future projects with neural networks will require multiple individual network components. Current efforts along these lines are ad hoc. This paper relates the neural network to a classical device and derives a multi-part architecture from that model. Further, it provides a Predicate Calculus variant for describing the location and nature of the trainings and suggests Resolution Refutation as a method for determining the performance of the system as well as the location of needed trainings for specific proofs. 2. THE NEURAL NETWORK AND A CLASSICAL DEVICE Recently investigators have been making reports about architectures of multiple neural networks.1-4 These efforts are appearing at an early stage in neural network investigations; they are characterized by architectures suggested directly by the problem space. Touretzky and Hinton suggest an architecture for processing logical statements;1 the design of this architecture arises from the syntax of a restricted class of logical expressions and exhibits syntactic limitations. In similar fashion, a multiple neural network arises out of a control problem,2 from the sequence learning problem,3 and from the domain of machine learning.4 But a general theory of multiple neural devices is missing. More general attempts to relate single or multiple neural networks to classical computing devices are not common, although an attempt is made to relate single neural devices to a Turing machine, and Sun et al. develop a multiple neural architecture that performs pattern classification.
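Resolution refutation, which the abstract proposes for checking what the system can prove, can be illustrated with a generic propositional sketch. This is a textbook formulation, not the paper's own; clauses are sets of literals, and a goal is proved by deriving the empty clause from the knowledge base plus the negated goal.

```python
# Generic propositional resolution refutation (textbook sketch, not the
# paper's formulation). A clause is a frozenset of string literals; a
# negated literal carries a leading '~'.

def resolve(c1, c2):
    """Return all resolvents of two clauses."""
    out = []
    for lit in c1:
        neg = lit[1:] if lit.startswith("~") else "~" + lit
        if neg in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {neg})))
    return out

def refutes(clauses, goal):
    """True if the clauses entail the single-literal goal: add the negated
    goal and saturate; deriving the empty clause signals a contradiction."""
    neg = goal[1:] if goal.startswith("~") else "~" + goal
    kb = set(clauses) | {frozenset({neg})}
    while True:
        new = set()
        for a in kb:
            for b in kb:
                if a is not b:
                    for r in resolve(a, b):
                        if not r:      # empty clause: goal is proved
                            return True
                        new.add(r)
        if new <= kb:                  # saturation: goal not derivable
            return False
        kb |= new

# KB: p, and p -> q (clause {~p, q}); query q.
kb = [frozenset({"p"}), frozenset({"~p", "q"})]
print(refutes(kb, "q"))  # -> True
```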

  2. Infrared thermography at EDF: common technique for high-voltage lines but new in monitoring and diagnosis of PWR plant components

    NASA Astrophysics Data System (ADS)

    Provost, Daniel

    1996-03-01

    Infrared thermography is a remarkable aid in maintenance, and has been used for a number of years in testing high-voltage lines and transformer substations. Electricite de France (EDF) has developed a special infrared thermography system for this type of application. Until recently, use of IRT in both fossil and nuclear power plants was only sporadic and depended on the interest shown in the technique by individual maintenance managers. In power stations, it was primarily used for tests on switchyards, electrical control cabinets and insulation. The General Engineering Department of the EDF Generation and Transmission Division was responsible for assessing new equipment and studying special development requirements as they arose. Routine infrared thermography tests were performed by two teams from the Division, one handling northern France and the other southern France. Today, infrared thermography has become a fully-fledged monitoring and diagnosis tool in its own right, and related activities are being reorganized accordingly. Its recent success can be attributed to a number of factors: more high-powered IRT techniques, valuable feedback from American utility companies, and technical and economic assessments conducted by EDF over the last two years on equipment such as electrical and mechanical components, valves and insulation. EDF's reorganization of infrared thermography activities will begin with an overview of the resources now existing within the company. This inventory will be carried out by the General Engineering Department. At the same time, a report will be drawn up bearing on IRT testing over the last decade in conventional and nuclear power plants in France and the United States. Lastly, EDF will draw up a list of components to be monitored in this way, essentially on the basis of RCM studies. These measures will provide power plants with a catalogue of infrared thermography applications for specific component/failure combinations.

  3. Evolution of bow-tie architectures in biology.

    PubMed

    Friedlander, Tamar; Mayo, Avraham E; Tlusty, Tsvi; Alon, Uri

    2015-03-01

    Bow-tie or hourglass structure is a common architectural feature found in many biological systems. A bow-tie in a multi-layered structure occurs when intermediate layers have much fewer components than the input and output layers. Examples include metabolism where a handful of building blocks mediate between multiple input nutrients and multiple output biomass components, and signaling networks where information from numerous receptor types passes through a small set of signaling pathways to regulate multiple output genes. Little is known, however, about how bow-tie architectures evolve. Here, we address the evolution of bow-tie architectures using simulations of multi-layered systems evolving to fulfill a given input-output goal. We find that bow-ties spontaneously evolve when the information in the evolutionary goal can be compressed. Mathematically speaking, bow-ties evolve when the rank of the input-output matrix describing the evolutionary goal is deficient. The maximal compression possible (the rank of the goal) determines the size of the narrowest part of the network-that is the bow-tie. A further requirement is that a process is active to reduce the number of links in the network, such as product-rule mutations, otherwise a non-bow-tie solution is found in the evolutionary simulations. This offers a mechanism to understand a common architectural principle of biological systems, and a way to quantitate the effective rank of the goals under which they evolved. PMID:25798588
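The rank-deficiency criterion in this abstract can be made concrete with a toy computation: if the input-output goal matrix has rank r smaller than its dimensions, the narrowest layer of an evolved bow-tie needs only r components. The goal matrix below is invented for illustration, and the rank routine is plain Gaussian elimination.

```python
# Toy illustration of the abstract's criterion: bow-ties can evolve when the
# input-output goal matrix is rank-deficient. Pure-Python rank computation;
# the example goal matrix is invented.

def rank(m, eps=1e-9):
    """Rank of a matrix (list of rows) via Gaussian elimination."""
    m = [row[:] for row in m]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if abs(m[i][c]) > eps), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][c]) > eps:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# A 3x3 goal with linearly dependent rows: rank 2 < 3, so a maximally
# compressed intermediate layer would need only 2 components.
goal = [[1, 0, 1],
        [0, 1, 1],
        [1, 1, 2]]   # row 3 = row 1 + row 2
print(rank(goal))  # -> 2
```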

  4. Evolution of Bow-Tie Architectures in Biology

    PubMed Central

    Friedlander, Tamar; Mayo, Avraham E.; Tlusty, Tsvi; Alon, Uri

    2015-01-01

    Bow-tie or hourglass structure is a common architectural feature found in many biological systems. A bow-tie in a multi-layered structure occurs when intermediate layers have much fewer components than the input and output layers. Examples include metabolism where a handful of building blocks mediate between multiple input nutrients and multiple output biomass components, and signaling networks where information from numerous receptor types passes through a small set of signaling pathways to regulate multiple output genes. Little is known, however, about how bow-tie architectures evolve. Here, we address the evolution of bow-tie architectures using simulations of multi-layered systems evolving to fulfill a given input-output goal. We find that bow-ties spontaneously evolve when the information in the evolutionary goal can be compressed. Mathematically speaking, bow-ties evolve when the rank of the input-output matrix describing the evolutionary goal is deficient. The maximal compression possible (the rank of the goal) determines the size of the narrowest part of the network—that is the bow-tie. A further requirement is that a process is active to reduce the number of links in the network, such as product-rule mutations, otherwise a non-bow-tie solution is found in the evolutionary simulations. This offers a mechanism to understand a common architectural principle of biological systems, and a way to quantitate the effective rank of the goals under which they evolved. PMID:25798588

  5. Component architecture - the software architecture for mission requirements

    NASA Technical Reports Server (NTRS)

    Huang, T.

    2003-01-01

    This paper presents the challenges in developing a dynamic service such as FEI to support various mission requirements while reducing maintenance costs without sacrificing reliability and performance.

  6. Genome-wide association study of agronomic traits in common bean

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A genome-wide association study (GWAS) using a global Andean diversity panel (ADP) of 237 genotypes of common bean, Phaseolus vulgaris was conducted to gain insight into the genetic architecture of several agronomic traits controlling phenology, biomass, yield components and seed yield. The panel wa...

  7. Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis

    NASA Technical Reports Server (NTRS)

    Harper, R. E.; Alger, L. S.; Babikyan, C. A.; Butler, B. P.; Friend, S. A.; Ganska, R. J.; Lala, J. H.; Masotto, T. K.; Meyer, A. J.; Morton, D. P.

    1992-01-01

    Described here is the Army Fault Tolerant Architecture (AFTA) hardware architecture and components and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to be used in reducing the probability of AFTA failure due to common mode faults is described. Analytical models for AFTA performance, reliability, availability, life cycle cost, weight, power, and volume are developed. An approach is presented for using VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.

  8. Staged Event Architecture

    Energy Science and Technology Software Center (ESTSC)

    2005-05-30

    Sea is a framework for a Staged Event Architecture, designed around non-blocking asynchronous communication facilities that are decoupled from the threading model chosen by any given application. Components for IP networking and in-memory communication are provided. The Sea Java library encapsulates these concepts. Sea is used to easily build efficient and flexible low-level network clients and servers, and in particular as a basic communication substrate for Peer-to-Peer applications.
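The staged, queue-decoupled style this record describes can be sketched generically: each stage owns an inbox and a handler, and stages interact only by enqueuing events, never by blocking one another. This is an asyncio toy in the staged-event spirit, not the actual Sea Java API.

```python
# Generic staged-event pipeline sketch (SEDA-style, as in frameworks like
# Sea). Stages communicate only through queues; a None sentinel is forwarded
# downstream to shut the pipeline down. Not the actual Sea API.
import asyncio

async def stage(name, inbox, outbox, handler):
    """Drain events from inbox, apply handler, pass results downstream."""
    while True:
        event = await inbox.get()
        if event is None:              # shutdown sentinel
            if outbox:
                await outbox.put(None)
            break
        result = handler(event)
        if outbox:
            await outbox.put(result)

async def main():
    q1, q2, done = asyncio.Queue(), asyncio.Queue(), []
    tasks = [
        asyncio.create_task(stage("parse", q1, q2, lambda e: e.upper())),
        asyncio.create_task(stage("sink", q2, None, done.append)),
    ]
    for msg in ("ping", "pong"):
        await q1.put(msg)
    await q1.put(None)
    await asyncio.gather(*tasks)
    return done

print(asyncio.run(main()))  # -> ['PING', 'PONG']
```

Because each stage only awaits its own queue, adding a stage or changing the number of worker tasks per stage does not touch the other stages, which is the decoupling-from-the-threading-model property the record highlights.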

  9. Power system commonality study

    NASA Astrophysics Data System (ADS)

    Littman, Franklin D.

    1992-07-01

    A limited top level study was completed to determine the commonality of power system/subsystem concepts within potential lunar and Mars surface power system architectures. A list of power system concepts with high commonality was developed which can be used to synthesize power system architectures which minimize development cost. Examples of potential high commonality power system architectures are given in this report along with a mass comparison. Other criteria such as life cycle cost (which includes transportation cost), reliability, safety, risk, and operability should be used in future, more detailed studies to select optimum power system architectures. Nineteen potential power system concepts were identified and evaluated for planetary surface applications including photovoltaic arrays with energy storage, isotope, and nuclear power systems. A top level environmental factors study was completed to assess environmental impacts on the identified power system concepts for both lunar and Mars applications. Potential power system design solutions for commonality between Mars and lunar applications were identified. Isotope, photovoltaic array (PVA), regenerative fuel cell (RFC), stainless steel liquid-metal cooled reactors (less than 1033 K maximum) with dynamic converters, and in-core thermionic reactor systems were found suitable for both lunar and Mars environments. The use of SP-100 thermoelectric (TE) and SP-100 dynamic power systems in a vacuum enclosure may also be possible for Mars applications although several issues need to be investigated further (potential single point failure of enclosure, mass penalty of enclosure and active pumping system, additional installation time and complexity). There are also technical issues involved with development of thermionic reactors (life, serviceability, and adaptability to other power conversion units). Additional studies are required to determine the optimum reactor concept for Mars applications. 

  10. A murine monoclonal antibody, MoAb HMSA-5, against a melanosomal component highly expressed in early stages, and common to normal and neoplastic melanocytes.

    PubMed Central

    Der, J. E.; Dixon, W. T.; Jimbow, K.; Horikoshi, T.

    1993-01-01

    The melanosome is a secretory organelle unique to the melanocyte and its neoplastic counterpart, malignant melanoma. The synthesis and assembly of these intracytoplasmic organelles is not yet fully understood. We have developed a murine monoclonal antibody (MoAb) against melanosomes isolated from human melanocytes (newborn foreskin) cultured in the presence of 12-O tetradecanoyl phorbol-13-acetate (TPA). This MoAb, designated HMSA-5 (Human Melanosome-Specific Antigen-5) (IgG1), recognised a cytoplasmic antigen in both normal human melanocytes and neoplastic cells, such as common and dysplastic melanocytic nevi, and malignant melanoma. None of the carcinoma or sarcoma specimens tested showed positive reactivity with MoAb HMSA-5. Under immunoelectron microscopy, immuno-gold deposition was seen on microvesicles associated with melanosomes, and a portion of the ER-Golgi complexes. Radioimmunoprecipitation analysis showed that the HMSA-5 reactive antigen was a glycoprotein of M(r) 69 to 73 kDa. A pulse-chase time course study showed that the amount of antigen detected by MoAb HMSA-5 decreased over a 24 h period without significant expression on the cell surface, or corresponding appearance of the antigen in the culture supernatant. This glycoprotein appears to play a role in the early stages of melanosomal development, and the HMSA-5 reactive epitope may be lost during subsequent maturation processes. Importantly, HMSA-5 can be identified in all forms of human melanocytes, hence it can be considered a new common melanocytic marker even on routine paraffin sections. PMID:7678981

  11. Architectural Illusion.

    ERIC Educational Resources Information Center

    Doornek, Richard R.

    1990-01-01

    Presents a lesson plan developed around the work of architectural muralist Richard Haas. Discusses the significance of mural painting and gives key concepts for the lesson. Lists class activities for the elementary and secondary grades. Provides a photograph of the Haas mural on the Fountainbleau Hilton Hotel, 1986. (GG)

  12. Architectural Treasures.

    ERIC Educational Resources Information Center

    Pietropola, Anne

    1998-01-01

    Presents an art lesson for eighth-grade students in which they created their own architectural structures. Stresses a strong discipline-based introduction using slide shows of famous buildings, large metropolitan cities, and 35,000 years of homes. Reports the lesson spanned two weeks. Includes a diagram, directions, and specifies materials. (CMK)

  13. Architectural Drafting.

    ERIC Educational Resources Information Center

    Davis, Ronald; Yancey, Bruce

    Designed to be used as a supplement to a two-book course in basic drafting, these instructional materials consisting of 14 units cover the process of drawing all working drawings necessary for residential buildings. The following topics are covered in the individual units: introduction to architectural drafting, lettering and tools, site…

  14. Architectural Tops

    ERIC Educational Resources Information Center

    Mahoney, Ellen

    2010-01-01

    The development of the skyscraper is an American story that combines architectural history, economic power, and technological achievement. Each city in the United States can be identified by the profile of its buildings. The design of the tops of skyscrapers was the inspiration for the students in the author's high-school ceramic class to develop…

  15. NASA Integrated Network Monitor and Control Software Architecture

    NASA Technical Reports Server (NTRS)

    Shames, Peter; Anderson, Michael; Kowal, Steve; Levesque, Michael; Sindiy, Oleg; Donahue, Kenneth; Barnes, Patrick

    2012-01-01

    The National Aeronautics and Space Administration (NASA) Space Communications and Navigation office (SCaN) has commissioned a series of trade studies to define a new architecture intended to integrate the three existing networks that it operates, the Deep Space Network (DSN), Space Network (SN), and Near Earth Network (NEN), into one integrated network that offers users a set of common, standardized services and interfaces. The integrated monitor and control architecture utilizes common software and common operator interfaces that can be deployed at all three network elements. This software uses state-of-the-art concepts such as a pool of re-programmable equipment that acts like a configurable software radio, distributed hierarchical control, and centralized management of the whole SCaN integrated network. For this trade space study a model-based approach using SysML was adopted to describe and analyze several possible options for the integrated network monitor and control architecture. This model was used to refine the design and to drive the costing of the four different software options. This trade study modeled the three existing self-standing network elements at the point of departure, and then described how to integrate them using variations of new and existing monitor and control system components for the different proposed deployments under consideration. This paper will describe the trade space explored, the selected system architecture, the modeling and trade study methods, and some observations on useful approaches to implementing such model-based trade space representation and analysis.

  16. Executable Architecture Research at Old Dominion University

    NASA Technical Reports Server (NTRS)

    Tolk, Andreas; Shuman, Edwin A.; Garcia, Johnny J.

    2011-01-01

    Executable Architectures allow the evaluation of system architectures not only regarding their static, but also their dynamic behavior. However, the systems engineering community does not agree on a common formal specification of executable architectures. To close this gap, identifying the necessary elements of an executable architecture, a modeling language, and a modeling formalism is the topic of ongoing PhD research. In addition, systems are generally defined and applied in an operational context to provide capabilities and enable missions. To maximize the benefits of executable architectures, a second PhD effort introduces the idea of creating an executable context in addition to the executable architecture. The results move the validation of architectures from the current information domain into the knowledge domain and improve the reliability of such validation efforts. The paper presents the research and results of both doctoral efforts and puts them into a common context of the state of the art of systems engineering methods supporting more agility.

  17. Antibacterial activity and mechanism of action of Monarda punctata essential oil and its main components against common bacterial pathogens in respiratory tract

    PubMed Central

    Li, Hong; Yang, Tian; Li, Fei-Yan; Yao, Yan; Sun, Zhong-Min

    2014-01-01

    The aim of the current research work was to study the chemical composition of the essential oil of Monarda punctata along with evaluating the essential oil and its major components for their antibacterial effects against some frequently encountered respiratory infection-causing pathogens. Gas chromatographic mass spectrometric analysis revealed the presence of 13 chemical constituents with thymol (75.2%), p-cymene (6.7%), limonene (5.4%), and carvacrol (3.5%) as the major constituents. The oil composition was dominated by the oxygenated monoterpenes. Antibacterial activity of the essential oil and its major constituents (thymol, p-cymene, limonene) was evaluated against Streptococcus pyogenes, methicillin-resistant Staphylococcus aureus (MRSA), Streptococcus pneumoniae, Haemophilus influenzae and Escherichia coli. The study revealed that the essential oil and its constituents exhibited a broad spectrum and variable degree of antibacterial activity against different strains. Among the tested strains, Streptococcus pyogenes, Escherichia coli and Streptococcus pneumoniae were the most susceptible bacterial strains, showing the lowest MIC and MBC values. Methicillin-resistant Staphylococcus aureus was the most resistant bacterial strain to the essential oil treatment, showing relatively higher MIC and MBC values. Scanning electron microscopy revealed that the essential oil induced potent and dose-dependent membrane damage in S. pyogenes and MRSA bacterial strains. The reactive oxygen species generated by the Monarda punctata essential oil were identified using 2’, 7’-dichlorofluorescein diacetate (DCFDA). This study indicated that the Monarda punctata essential oil to a great extent, and thymol to a lower extent, triggered a substantial increase in the ROS levels in S. pyogenes bacterial cultures, which ultimately causes membrane damage as revealed by SEM results. PMID:25550774

  18. Antibacterial activity and mechanism of action of Monarda punctata essential oil and its main components against common bacterial pathogens in respiratory tract.

    PubMed

    Li, Hong; Yang, Tian; Li, Fei-Yan; Yao, Yan; Sun, Zhong-Min

    2014-01-01

    The aim of the current research work was to study the chemical composition of the essential oil of Monarda punctata along with evaluating the essential oil and its major components for their antibacterial effects against some frequently encountered respiratory infection-causing pathogens. Gas chromatographic mass spectrometric analysis revealed the presence of 13 chemical constituents with thymol (75.2%), p-cymene (6.7%), limonene (5.4%), and carvacrol (3.5%) as the major constituents. The oil composition was dominated by the oxygenated monoterpenes. Antibacterial activity of the essential oil and its major constituents (thymol, p-cymene, limonene) was evaluated against Streptococcus pyogenes, methicillin-resistant Staphylococcus aureus (MRSA), Streptococcus pneumoniae, Haemophilus influenzae and Escherichia coli. The study revealed that the essential oil and its constituents exhibited a broad spectrum and variable degree of antibacterial activity against different strains. Among the tested strains, Streptococcus pyogenes, Escherichia coli and Streptococcus pneumoniae were the most susceptible bacterial strains, showing the lowest MIC and MBC values. Methicillin-resistant Staphylococcus aureus was the most resistant bacterial strain to the essential oil treatment, showing relatively higher MIC and MBC values. Scanning electron microscopy revealed that the essential oil induced potent and dose-dependent membrane damage in S. pyogenes and MRSA bacterial strains. The reactive oxygen species generated by the Monarda punctata essential oil were identified using 2', 7'-dichlorofluorescein diacetate (DCFDA). This study indicated that the Monarda punctata essential oil to a great extent, and thymol to a lower extent, triggered a substantial increase in the ROS levels in S. pyogenes bacterial cultures, which ultimately causes membrane damage as revealed by SEM results. PMID:25550774

  19. Microcomponent chemical process sheet architecture

    DOEpatents

    Wegeng, R.S.; Drost, M.K.; Call, C.J.; Birmingham, J.G.; McDonald, C.E.; Kurath, D.E.; Friedrich, M.

    1998-09-22

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one chemical process unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation. 26 figs.

  20. Microcomponent chemical process sheet architecture

    DOEpatents

    Wegeng, Robert S.; Drost, M. Kevin; Call, Charles J.; Birmingham, Joseph G.; McDonald, Carolyn Evans; Kurath, Dean E.; Friedrich, Michele

    1998-01-01

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections, or a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents performs at least one chemical process unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents, thereby combining at least two unit operations to achieve a system operation.

  1. Common Space, Common Time, Common Work

    ERIC Educational Resources Information Center

    Shank, Melody J.

    2005-01-01

    The most valued means of support and learning cited by new teachers at Poland Regional High School in rural Maine are the collegial interactions that common workspace, common planning time, and common tasks make possible. The school has used these everyday structures to enable new and veteran teachers to converse about curricular and pedagogical…

  2. Space Telecommunications Radio Architecture (STRS)

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    2006-01-01

    A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development, such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture, and other efforts is considered. A standard radio architecture offers potential value through common waveform software instantiation, operation, testing, and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass, power consumption, and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, industry, academia, and standards bodies are key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk will describe NASA's current effort to investigate SDR applications to space missions and give a brief overview of a candidate architecture under consideration for space-based platforms.

  3. ASAC Executive Assistant Architecture Description Summary

    NASA Technical Reports Server (NTRS)

    Roberts, Eileen; Villani, James A.

    1997-01-01

    In this technical document, we describe the system architecture developed for the Aviation System Analysis Capability (ASAC) Executive Assistant (EA). We describe the genesis and role of the ASAC system, discuss its objectives, and provide an overview of its components and models; discuss our choice of architecture methodology, the Domain Specific Software Architecture (DSSA), and the DSSA approach to developing a system architecture; and describe the development process and the resulting ASAC EA system architecture. The document has six appendices.

  4. Component architecture for web based EMR applications.

    PubMed Central

    Berkowicz, D. A.; Barnett, G. O.; Chueh, H. C.

    1998-01-01

    The World Wide Web provides the means for the collation and display of disseminated clinical information of use to the healthcare provider. However, the heterogeneous nature of clinical data storage and formats makes it very difficult for the physician to use one consistent client application to view and manipulate information. Similarly, developers are faced with a multitude of possibilities when creating interfaces for their users. A single patient's records may be distributed over a number of different record-keeping systems, and a physician may see patients whose individual records are stored at different sites. Our goal is to provide the healthcare worker with a consistent application interface independent of the parent database, and at the same time allow developers the opportunity to customize the GUI in a well-controlled, stable application environment. PMID:9929193

  5. Structural components and architectures of RNA exosomes.

    PubMed

    Januszyk, Kurt; Lima, Christopher D

    2010-01-01

    A large body of structural work conducted over the past ten years has elucidated mechanistic details related to 3' to 5' processing and decay of RNA substrates by the RNA exosome. This chapter will focus on the structural organization of eukaryotic exosomes and their evolutionary cousins in bacteria and archaea with an emphasis on mechanistic details related to substrate recognition and to 3' to 5' phosphorolytic exoribonucleolytic activities of bacterial and archaeal exosomes as well as the hydrolytic exoribonucleolytic and endoribonucleolytic activities of eukaryotic exosomes. These points will be addressed in large part through presentation of crystal structures of phosphorolytic enzymes such as bacterial RNase PH, PNPase and archaeal exosomes and crystal structures of the eukaryotic exosome and exosome sub-complexes in addition to standalone structures of proteins that catalyze activities associated with the eukaryotic RNA exosome, namely Rrp44, Rrp6 and their bacterial counterparts. PMID:21618871

  6. Final Technical Report - Center for Technology for Advanced Scientific Component Software (TASCS)

    SciTech Connect

    Sussman, Alan

    2014-10-21

    This is a final technical report for the University of Maryland work in the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS). The Maryland work focused on software tools for coupling parallel software components built using the Common Component Architecture (CCA) APIs. Those tools are based on the Maryland InterComm software framework that has been used in multiple computational science applications to build large-scale simulations of complex physical systems that employ multiple separately developed codes.

  7. Bocca: A Development Environment for HPC Components

    SciTech Connect

    Elwasif, Wael R; Norris, Boyana; Benjamin, Allan A.; Armstrong, Robert C.

    2007-01-01

    In high-performance scientific software development, the emphasis is often on short time to first solution. Even when the development of new components mostly reuses existing components or libraries and only small amounts of new code must be created, dealing with the component glue code and software build processes to obtain complete applications is still tedious and error-prone. Component-based software meant to reduce complexity at the application level increases complexity with the attendant glue code. To address these needs, we introduce Bocca, the first tool to enable application developers to perform rapid component prototyping while maintaining robust software-engineering practices suitable to HPC environments. Bocca provides project management and a comprehensive build environment for creating and managing applications composed of Common Component Architecture components. Of critical importance for HPC applications, Bocca is designed to operate in a language-agnostic way, simultaneously handling components written in any of the languages commonly used in scientific applications: C, C++, Fortran, Fortran77, Python, and Java. Bocca automates the tasks related to the component glue code, freeing the user to focus on the scientific aspects of the application. Bocca embraces the philosophy pioneered by Ruby on Rails for web applications: start with something that works and evolve it to the user's purpose.

  8. System Safety Common Cause Analysis

    Energy Science and Technology Software Center (ESTSC)

    1992-03-10

    The COMCAN fault tree analysis codes are designed to analyze complex systems such as nuclear plants for common causes of failure. A common cause event, or common mode failure, is a secondary cause that could contribute to the failure of more than one component and violates the assumption of independence. Analysis of such events is an integral part of system reliability and safety analysis. A significant common cause event is a secondary cause common to all basic events in one or more minimal cut sets. Minimal cut sets containing events from components sharing a common location or a common link are called common cause candidates. Components share a common location if no barrier insulates any one of them from the secondary cause. A common link is a dependency among components which cannot be removed by a physical barrier (e.g., a common energy source or common maintenance instructions).
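    The screening rule described above (flag minimal cut sets whose components all share a location or a link) can be sketched as follows. This is an illustration of the concept only, not the actual COMCAN implementation; the data model and names are hypothetical.

```python
# Flag minimal cut sets whose components all share a common location
# or a common link (hypothetical data model, not the COMCAN code).

def common_cause_candidates(cut_sets, location, links):
    """Return the minimal cut sets that are common cause candidates."""
    candidates = []
    for cs in cut_sets:
        # All components in one location: no barrier separates them.
        same_location = len({location[c] for c in cs}) == 1
        # A link (e.g. shared energy source) common to every component.
        shared_links = set.intersection(*(links[c] for c in cs))
        if same_location or shared_links:
            candidates.append(cs)
    return candidates

# Example: pumps A and B share a room; B and C share a power bus.
location = {"A": "room1", "B": "room1", "C": "room2"}
links = {"A": {"busX"}, "B": {"busX", "busY"}, "C": {"busY"}}
cut_sets = [{"A", "B"}, {"A", "C"}, {"B", "C"}]
print(common_cause_candidates(cut_sets, location, links))
```

    Here {A, B} is a candidate by shared location and {B, C} by the shared bus, while {A, C} shares neither and is screened out.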

  9. Integrating hospital information systems in healthcare institutions: a mediation architecture.

    PubMed

    El Azami, Ikram; Cherkaoui Malki, Mohammed Ouçamah; Tahon, Christian

    2012-10-01

    Many studies have examined the integration of information systems into healthcare institutions, leading to several standards in the healthcare domain (CORBAmed: Common Object Request Broker Architecture in Medicine; HL7: Health Level Seven International; DICOM: Digital Imaging and Communications in Medicine; and IHE: Integrating the Healthcare Enterprise). Due to the existence of a wide diversity of heterogeneous systems, three essential factors are necessary to fully integrate a system: data, functions and workflow. However, most of the previous studies have dealt with only one or two of these factors and this makes the system integration unsatisfactory. In this paper, we propose a flexible, scalable architecture for Hospital Information Systems (HIS). Our main purpose is to provide a practical solution to ensure HIS interoperability so that healthcare institutions can communicate without being obliged to change their local information systems and without altering the tasks of the healthcare professionals. Our architecture is a mediation architecture with 3 levels: 1) a database level, 2) a middleware level and 3) a user interface level. The mediation is based on two central components: the Mediator and the Adapter. Using the XML format allows us to establish a structured, secured exchange of healthcare data. The notion of medical ontology is introduced to solve semantic conflicts and to unify the language used for the exchange. Our mediation architecture provides an effective, promising model that promotes the integration of hospital information systems that are autonomous, heterogeneous, semantically interoperable and platform-independent. PMID:22086739
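    The Adapter role in such a mediation architecture can be sketched as a translation from a local, system-specific record into a common XML exchange message. This is a minimal illustration under assumed element and field names; it is not the schema used by the paper.

```python
# Sketch of an Adapter: map a local HIS record (here a dict) onto a
# shared XML exchange format. Element and field names are hypothetical.
import xml.etree.ElementTree as ET

def adapt_to_common_xml(local_record):
    """Translate one local record into the common XML message."""
    msg = ET.Element("patientRecord")
    ET.SubElement(msg, "id").text = str(local_record["patient_id"])
    ET.SubElement(msg, "name").text = local_record["full_name"]
    ET.SubElement(msg, "diagnosis").text = local_record["dx_code"]
    return ET.tostring(msg, encoding="unicode")

print(adapt_to_common_xml(
    {"patient_id": 42, "full_name": "Doe, Jane", "dx_code": "J06.9"}))
```

    A Mediator would then route such messages between institutions; each site only needs its own Adapter, so local systems stay unchanged.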

  10. How architecture wins technology wars.

    PubMed

    Morris, C R; Ferguson, C H

    1993-01-01

    Signs of revolutionary transformation in the global computer industry are everywhere. A roll call of the major industry players reads like a waiting list in the emergency room. The usual explanations for the industry's turmoil are at best inadequate. Scale, friendly government policies, manufacturing capabilities, a strong position in desktop markets, excellent software, top design skills--none of these is sufficient, either by itself or in combination, to ensure competitive success in information technology. A new paradigm is required to explain patterns of success and failure. Simply stated, success flows to the company that manages to establish proprietary architectural control over a broad, fast-moving, competitive space. Architectural strategies have become crucial to information technology because of the astonishing rate of improvement in microprocessors and other semiconductor components. Since no single vendor can keep pace with the outpouring of cheap, powerful, mass-produced components, customers insist on stitching together their own local systems solutions. Architectures impose order on the system and make the interconnections possible. The architectural controller is the company that controls the standard by which the entire information package is assembled. Microsoft's Windows is an excellent example of this. Because of the popularity of Windows, companies like Lotus must conform their software to its parameters in order to compete for market share. In the 1990s, proprietary architectural control is not only possible but indispensable to competitive success. What's more, it has broader implications for organizational structure: architectural competition is giving rise to a new form of business organization. PMID:10124636

  11. NATO Human View Architecture and Human Networks

    NASA Technical Reports Server (NTRS)

    Handley, Holly A. H.; Houston, Nancy P.

    2010-01-01

    The NATO Human View is a system architectural viewpoint that focuses on the human as part of a system. Its purpose is to capture the human requirements and to inform on how the human impacts the system design. The viewpoint contains seven static models that include different aspects of the human element, such as roles, tasks, constraints, training and metrics. It also includes a Human Dynamics component to perform simulations of the human system under design. One of the static models, termed Human Networks, focuses on the human-to-human communication patterns that occur as a result of ad hoc or deliberate team formation, especially teams distributed across space and time. Parameters of human teams that affect system performance can be captured in this model. Human centered aspects of networks, such as differences in operational tempo (sense of urgency), priorities (common goal), and team history (knowledge of the other team members), can be incorporated. The information captured in the Human Network static model can then be included in the Human Dynamics component so that the impact of distributed teams is represented in the simulation. As the NATO militaries transform to a more networked force, the Human View architecture is an important tool that can be used to make recommendations on the proper mix of technological innovations and human interactions.

  12. Common Schools for Common Education.

    ERIC Educational Resources Information Center

    Callan, Eamonn

    1995-01-01

    A vision of common education for citizens of a liberal democracy warrants faith in common schools as an instrument of social good. Some kinds of separate schooling are not inconsistent with common schooling and are even desirable. Equal respect, as defined by J. Rawls, is a basis for common education. (SLD)

  13. Functional Interface Considerations within an Exploration Life Support System Architecture

    NASA Technical Reports Server (NTRS)

    Perry, Jay L.; Sargusingh, Miriam J.; Toomarian, Nikzad

    2016-01-01

    As notional life support system (LSS) architectures are developed and evaluated, myriad options must be considered pertaining to process technologies, components, and equipment assemblies. Each option must be evaluated relative to its impact on key functional interfaces within the LSS architecture. A leading notional architecture has been developed to guide the path toward realizing future crewed space exploration goals. This architecture includes atmosphere revitalization, water recovery and management, and environmental monitoring subsystems. Guiding requirements for developing this architecture are summarized and important interfaces within the architecture are discussed. The role of environmental monitoring within the architecture is described.

  14. Distributed visualization framework architecture

    NASA Astrophysics Data System (ADS)

    Mishchenko, Oleg; Raman, Sundaresan; Crawfis, Roger

    2010-01-01

    An architecture for distributed and collaborative visualization is presented. The design goals of the system are to create a lightweight, easy-to-use, and extensible framework for research in scientific visualization. The system provides both single-user and collaborative distributed environments. The system architecture employs a client-server model. Visualization projects can be synchronously accessed and modified from different client machines. We present a set of visualization use cases that illustrate the flexibility of our system. The framework provides a rich set of reusable components for creating new applications. These components make heavy use of leading design patterns. All components are based on the functionality of a small set of interfaces. This allows new components to be integrated seamlessly with little to no effort. All user input and higher-level control functionality interface with proxy objects supporting a concrete implementation of these interfaces. These lightweight objects can be easily streamed across the web and even integrated with smart clients running on a user's cell phone. The back-end is supported by concrete implementations wherever needed (for instance, for rendering). A middle tier manages any communication and synchronization with the proxy objects. In addition to the data components, we have developed several first-class GUI components for visualization. These include a layer compositor editor, a programmable shader editor, a material editor, and various drawable editors. These GUI components interact strictly with the interfaces. Access to the various entities in the system is provided by an AssetManager. The asset manager keeps track of all of the registered proxies and responds to queries on the overall system. This allows all user components to be populated automatically. Hence, if a new component is added that supports the IMaterial interface, any instances of it can be used in the various GUI components that work with this interface.
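    The registry-and-query pattern described for the AssetManager can be sketched in a few lines: proxies register under the interfaces they support, and GUI components query the manager to populate themselves. This is an illustration of the pattern only; the class and interface names are taken loosely from the abstract, and the proxies are stand-in strings.

```python
# Sketch of the AssetManager pattern: proxies register under interface
# names, and user components query by interface to populate themselves.
class AssetManager:
    def __init__(self):
        self._registry = {}          # interface name -> list of proxies

    def register(self, interface, proxy):
        self._registry.setdefault(interface, []).append(proxy)

    def query(self, interface):
        """Return all registered proxies supporting an interface."""
        return list(self._registry.get(interface, []))

mgr = AssetManager()
mgr.register("IMaterial", "gold_material_proxy")
mgr.register("IMaterial", "glass_material_proxy")
mgr.register("IDrawable", "isosurface_proxy")
print(mgr.query("IMaterial"))   # a material editor sees both entries
```

    A newly added IMaterial proxy would show up in the next query, so every GUI component built against that interface picks it up with no extra wiring.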

  15. Lab architecture

    NASA Astrophysics Data System (ADS)

    Crease, Robert P.

    2008-04-01

    There are few more dramatic illustrations of the vicissitudes of laboratory architecture than the contrast between Building 20 at the Massachusetts Institute of Technology (MIT) and its replacement, the Ray and Maria Stata Center. Building 20 was built hurriedly in 1943 as temporary housing for MIT's famous Rad Lab, the site of wartime radar research, and it remained a productive laboratory space for over half a century. A decade ago it was demolished to make way for the Stata Center, an architecturally striking building designed by Frank Gehry to house MIT's computer science and artificial intelligence labs (above). But in 2004 - just two years after the Stata Center officially opened - the building was criticized for being unsuitable for research and became the subject of still ongoing lawsuits alleging design and construction failures.

  16. Common Cold

    MedlinePlus

    ... Help people who are suffering from the common cold by volunteering for NIAID clinical studies on ClinicalTrials. ...

  17. Common Cold

    MedlinePlus

    ... coughing - everyone knows the symptoms of the common cold. It is probably the most common illness. In ... people in the United States suffer 1 billion colds. You can get a cold by touching your ...

  18. Open architecture design and approach for the Integrated Sensor Architecture (ISA)

    NASA Astrophysics Data System (ADS)

    Moulton, Christine L.; Krzywicki, Alan T.; Hepp, Jared J.; Harrell, John; Kogut, Michael

    2015-05-01

    Integrated Sensor Architecture (ISA) is designed in response to stovepiped integration approaches. The design, based on the principles of Service Oriented Architectures (SOA) and Open Architectures, addresses the problem of integration in general rather than targeting specific sensors or systems. The use of SOA and Open Architecture approaches has led to a flexible, extensible architecture. Using these approaches, supported by common data formats, open protocol specifications, and Department of Defense Architecture Framework (DoDAF) system architecture documents, an integration-focused architecture has been developed. ISA can help move the Department of Defense (DoD) from costly stovepipe solutions to a more cost-effective plug-and-play design that supports interoperability.

  19. Complex Event Recognition Architecture

    NASA Technical Reports Server (NTRS)

    Fitzgerald, William A.; Firby, R. James

    2009-01-01

    Complex Event Recognition Architecture (CERA) is the name of a computational architecture, and software that implements the architecture, for recognizing complex event patterns that may be spread across multiple streams of input data. One of the main components of CERA is an intuitive event pattern language that simplifies what would otherwise be the complex, difficult tasks of creating logical descriptions of combinations of temporal events and defining rules for combining information from different sources over time. In this language, recognition patterns are defined in simple, declarative statements that combine point events from given input streams with those from other streams, using conjunction, disjunction, and negation. Patterns can be built on one another recursively to describe very rich, temporally extended combinations of events. Thereafter, a run-time matching algorithm in CERA efficiently matches these patterns against input data and signals when patterns are recognized. CERA can be used to monitor complex systems and to signal operators or initiate corrective actions when anomalous conditions are recognized. CERA can be run as a stand-alone monitoring system, or it can be integrated into a larger system to automatically trigger responses to changing environments or problematic situations.
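    The combinators the abstract describes (conjunction, disjunction, and negation over point events, composed recursively) can be sketched as a tiny recursive matcher. This is an illustration in the spirit of CERA's pattern language, not its actual syntax or matching algorithm, and the event names are invented.

```python
# Minimal sketch of declarative event-pattern combinators: patterns
# combine point events with and/or/not and can nest recursively.
def match(pattern, events):
    kind = pattern[0]
    if kind == "event":                      # point event by name
        return pattern[1] in events
    if kind == "and":                        # conjunction of subpatterns
        return all(match(p, events) for p in pattern[1:])
    if kind == "or":                         # disjunction of subpatterns
        return any(match(p, events) for p in pattern[1:])
    if kind == "not":                        # negation of one subpattern
        return not match(pattern[1], events)
    raise ValueError(f"unknown pattern kind: {kind}")

# Hypothetical "overheat alarm": a temperature spike together with
# either a fan failure or a blocked vent, and no shutdown in progress.
alarm = ("and", ("event", "temp_spike"),
                ("or", ("event", "fan_fail"), ("event", "vent_blocked")),
                ("not", ("event", "shutdown")))
print(match(alarm, {"temp_spike", "vent_blocked"}))   # True
```

    A run-time system like the one described would evaluate such patterns incrementally over live input streams and signal an operator when one matches; this sketch only checks a static set of observed events.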

  20. Architecture for autonomy

    NASA Astrophysics Data System (ADS)

    Broten, Gregory S.; Monckton, Simon P.; Collier, Jack; Giesbrecht, Jared

    2006-05-01

    In 2002 Defence R&D Canada changed research direction from pure tele-operated land vehicles to general autonomy for land, air, and sea craft. The unique constraints of the military environment coupled with the complexity of autonomous systems drove DRDC to carefully plan a research and development infrastructure that would provide state of the art tools without restricting research scope. DRDC's long term objectives for its autonomy program address disparate unmanned ground vehicle (UGV), unattended ground sensor (UGS), air (UAV), and subsea and surface (UUV and USV) vehicles operating together with minimal human oversight. Individually, these systems will range in complexity from simple reconnaissance mini-UAVs streaming video to sophisticated autonomous combat UGVs exploiting embedded and remote sensing. Together, these systems can provide low risk, long endurance, battlefield services assuming they can communicate and cooperate with manned and unmanned systems. A key enabling technology for this new research is a software architecture capable of meeting both DRDC's current and future requirements. DRDC built upon recent advances in the computing science field while developing its software architecture known as the Architecture for Autonomy (AFA). Although a well established practice in computing science, frameworks have only recently entered common use by unmanned vehicles. For industry and government, the complexity, cost, and time to re-implement stable systems often exceeds the perceived benefits of adopting a modern software infrastructure. Thus, most persevere with legacy software, adapting and modifying software when and wherever possible or necessary -- adopting strategic software frameworks only when no justifiable legacy exists. Conversely, academic programs with short one or two year projects frequently exploit strategic software frameworks but with little enduring impact. The open-source movement radically changes this picture. Academic frameworks

  1. Challenges in the application of modular open system architecture to weapons

    NASA Astrophysics Data System (ADS)

    Shaver, Jonathan; Rose, Leo; Young, Quinn; Christensen, Jacob

    2016-05-01

    The overarching objective for Flexible Weapons is to replace current inventory weapons that will not fully utilize the increased capabilities of 6th generation platforms with a single weapons kit made up of flexible, open architecture components. The Flexible Weapons program will develop a common architecture that enables modular subsystems to achieve flexible weapons capability while allowing technology refresh at the pace of technology discovery in an affordable and sustainable design. The various combinations of weapons addressing multiple missions must be 100% compatible with 6th generation delivery platforms (fighters, bombers, RPAs) and backwards compatible with 4th and 5th generation platforms.

  2. Information Model Driven Semantic Framework Architecture and Design for Distributed Data Repositories

    NASA Astrophysics Data System (ADS)

    Fox, P. A.; Semantic eScience Framework Team

    2011-12-01

    In Earth and space science, the steady evolution away from isolated and single-purpose data 'systems' toward systems of systems, data ecosystems, or data frameworks that provide access to highly heterogeneous data repositories is picking up pace. As a result, common informatics approaches are being sought for how newer architectures are developed and/or implemented. In particular, a clear need has emerged for a repeatable method of modeling, implementing, and evolving information architectures, one that goes beyond traditional software design. This presentation outlines new component design approaches based on sets of information models and semantic encodings for mediation.

  3. Hybrid Power Management-Based Vehicle Architecture

    NASA Technical Reports Server (NTRS)

    Eichenberg, Dennis J.

    2011-01-01

    Hybrid Power Management (HPM) is the integration of diverse, state-of-the-art power devices in an optimal configuration for space and terrestrial applications (see figure). The appropriate application and control of the various power devices significantly improves overall system performance and efficiency. The basic vehicle architecture consists of a primary power source, and possibly other power sources, that provides all power to a common energy storage system that is used to power the drive motors and vehicle accessory systems. This architecture also provides power as an emergency power system. Each component is independent, permitting it to be optimized for its intended purpose. The key element of HPM is the energy storage system. All generated power is sent to the energy storage system, and all loads derive their power from that system. This can significantly reduce the power requirement of the primary power source, while increasing the vehicle reliability. Ultracapacitors are ideal for an HPM-based energy storage system due to their exceptionally long cycle life, high reliability, high efficiency, high power density, and excellent low-temperature performance. Multiple power sources and multiple loads are easily incorporated into an HPM-based vehicle. A gas turbine is a good primary power source because of its high efficiency, high power density, long life, high reliability, and ability to operate on a wide range of fuels. An HPM controller maintains optimal control over each vehicle component. This flexible operating system can be applied to all vehicles to considerably improve vehicle efficiency, reliability, safety, security, and performance. The HPM-based vehicle architecture has many advantages over conventional vehicle architectures. Ultracapacitors have a much longer cycle life than batteries, which greatly improves system reliability, reduces life-of-system costs, and reduces environmental impact, as ultracapacitors will probably never need to be replaced.
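    The central idea above, that every source charges one common energy store and every load draws from it, can be sketched as a toy energy-bus model. The class, capacities, and figures below are illustrative assumptions, not values from the article.

```python
# Toy model of the HPM energy-storage bus: the primary source charges
# the common store, and all loads draw from it. Numbers are invented.
class EnergyStore:
    def __init__(self, capacity_wh):
        self.capacity_wh = capacity_wh
        self.level_wh = 0.0

    def charge(self, energy_wh):
        """All generated power flows into the store (clamped at capacity)."""
        self.level_wh = min(self.capacity_wh, self.level_wh + energy_wh)

    def draw(self, energy_wh):
        """All loads draw from the common store; fail if depleted."""
        if energy_wh > self.level_wh:
            raise RuntimeError("energy store depleted")
        self.level_wh -= energy_wh
        return energy_wh

bus = EnergyStore(capacity_wh=500)
bus.charge(300)          # primary source (e.g. a gas turbine) output
bus.draw(120)            # drive motors
bus.draw(30)             # accessory systems
print(round(bus.level_wh, 1))   # 150.0
```

    Because loads never see the primary source directly, the source can be sized for average rather than peak demand, which is the power-requirement reduction the abstract describes.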

  4. Maximizing commonality between military and general aviation fly-by-light helicopter system designs

    NASA Astrophysics Data System (ADS)

    Enns, Russell; Mossman, David C.

    1995-05-01

    In the face of shrinking defense budgets, survival of the United States rotorcraft industry is becoming increasingly dependent on increased sales in a highly competitive civil helicopter market. As a result, only the most competitive rotorcraft manufacturers are likely to survive. A key ingredient in improving our competitive position is the ability to produce more versatile, high performance, high quality, and low cost of ownership helicopters. Fiber optic technology offers a path to achieving these objectives. Also, adopting common components and architectures for different helicopter models (while maintaining each model's uniqueness) will further decrease design and production costs. Funds saved (or generated) by exploiting this commonality can be applied to R&D used to further improve the product. In this paper, we define a fiber optics based avionics architecture which provides the pilot a fly-by-light / digital flight control system which can be implemented in both civilian and military helicopters. We then discuss the advantages of such an architecture.

  5. Space Station data management system architecture

    NASA Technical Reports Server (NTRS)

    Mallary, William E.; Whitelaw, Virginia A.

    1987-01-01

    Within the Space Station program, the Data Management System (DMS) functions in a dual role. First, it provides the hardware resources and software services which support the data processing, data communications, and data storage functions of the onboard subsystems and payloads. Second, it functions as an integrating entity which provides a common operating environment and human-machine interface for the operation and control of the orbiting Space Station systems and payloads by both the crew and the ground operators. This paper discusses the evolution and derivation of the requirements and issues which have had significant effect on the design of the Space Station DMS, describes the DMS components and services which support system and payload operations, and presents the current architectural view of the system as it exists in October 1986; one-and-a-half years into the Space Station Phase B Definition and Preliminary Design Study.

  6. Optical linear algebra processors - Architectures and algorithms

    NASA Technical Reports Server (NTRS)

    Casasent, David

    1986-01-01

    Attention is given to the component design and optical configuration features of a generic optical linear algebra processor (OLAP) architecture, as well as the large number of OLAP architectures, number representations, algorithms and applications encountered in current literature. Number-representation issues associated with bipolar and complex-valued data representations, high-accuracy (including floating point) performance, and the base or radix to be employed, are discussed, together with case studies on a space-integrating frequency-multiplexed architecture and a hybrid space-integrating and time-integrating multichannel architecture.

  7. Clays, common

    USGS Publications Warehouse

    Virta, R.L.

    1998-01-01

    Part of a special section on the state of industrial minerals in 1997. The state of the common clay industry worldwide for 1997 is discussed. Sales of common clay in the U.S. increased from 26.2 Mt in 1996 to an estimated 26.5 Mt in 1997. The amount of common clay and shale used to produce structural clay products in 1997 was estimated at 13.8 Mt.

  8. An introductory review of parallel independent component analysis (p-ICA) and a guide to applying p-ICA to genetic data and imaging phenotypes to identify disease-associated biological pathways and systems in common complex disorders.

    PubMed

    Pearlson, Godfrey D; Liu, Jingyu; Calhoun, Vince D

    2015-01-01

    Complex inherited phenotypes, including those for many common medical and psychiatric diseases, are most likely underpinned by multiple genes contributing to interlocking molecular biological processes, along with environmental factors (Owen et al., 2010). Despite this, genotyping strategies for complex, inherited, disease-related phenotypes mostly employ univariate analyses, e.g., genome-wide association. Such procedures most often identify isolated risk-related SNPs or loci, not the underlying biological pathways necessary to help guide the development of novel treatment approaches. This article focuses on the multivariate analysis strategy of parallel (i.e., simultaneous combination of SNP and neuroimage information) independent component analysis (p-ICA), which typically yields large clusters of functionally related SNPs statistically correlated with phenotype components, whose overall molecular biological relevance is inferred subsequently using annotation software suites. Because this is a novel approach whose details are relatively new to the field, we summarize its underlying principles, address conceptual questions regarding interpretation of the resulting data, and provide practical illustrations of the method. PMID:26442095
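
    To make the decomposition step concrete, the sketch below implements plain single-modality ICA (a FastICA-style fixed-point iteration with a tanh nonlinearity and symmetric decorrelation) on invented synthetic mixtures. p-ICA couples two such decompositions across SNP and imaging data, which this minimal example does not attempt; the signals and mixing matrix here are illustrative assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
# Two invented non-Gaussian "sources" (stand-ins for latent factors)
S = np.vstack([np.sign(rng.standard_normal(n)) * rng.random(n),  # sub-Gaussian
               rng.laplace(size=n)])                             # super-Gaussian
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # illustrative mixing matrix
X = A @ S                                # observed mixtures

# Center and whiten the observations
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Xw = E @ np.diag(d ** -0.5) @ E.T @ X

# FastICA-style fixed-point iteration with symmetric decorrelation
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Xw)
    W1 = G @ Xw.T / n - np.diag((1 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W1)
    W = U @ Vt                           # W <- (W W^T)^{-1/2} W
Y = W @ Xw                               # recovered independent components
```

The recovered components match the sources only up to sign and permutation, which is the usual ICA ambiguity and is why p-ICA interprets component loadings rather than raw orderings.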

  9. An introductory review of parallel independent component analysis (p-ICA) and a guide to applying p-ICA to genetic data and imaging phenotypes to identify disease-associated biological pathways and systems in common complex disorders

    PubMed Central

    Pearlson, Godfrey D.; Liu, Jingyu; Calhoun, Vince D.

    2015-01-01

    Complex inherited phenotypes, including those for many common medical and psychiatric diseases, are most likely underpinned by multiple genes contributing to interlocking molecular biological processes, along with environmental factors (Owen et al., 2010). Despite this, genotyping strategies for complex, inherited, disease-related phenotypes mostly employ univariate analyses, e.g., genome-wide association. Such procedures most often identify isolated risk-related SNPs or loci, not the underlying biological pathways necessary to help guide the development of novel treatment approaches. This article focuses on the multivariate analysis strategy of parallel (i.e., simultaneous combination of SNP and neuroimage information) independent component analysis (p-ICA), which typically yields large clusters of functionally related SNPs statistically correlated with phenotype components, whose overall molecular biological relevance is inferred subsequently using annotation software suites. Because this is a novel approach whose details are relatively new to the field, we summarize its underlying principles, address conceptual questions regarding interpretation of the resulting data, and provide practical illustrations of the method. PMID:26442095

  10. Student Commons

    ERIC Educational Resources Information Center

    Gordon, Douglas

    2010-01-01

    Student commons are no longer simply congregation spaces for students with time on their hands. They are integral to providing a welcoming environment and effective learning space for students. Many student commons have been transformed into spaces for socialization, an environment for alternative teaching methods, a forum for large group meetings…

  11. A Hybrid Power Management (HPM) Based Vehicle Architecture

    NASA Technical Reports Server (NTRS)

    Eichenberg, Dennis J.

    2011-01-01

    Society desires vehicles with reduced fuel consumption and reduced emissions. This presents a challenge and an opportunity for industry and the government. The NASA John H. Glenn Research Center (GRC) has developed a Hybrid Power Management (HPM) based vehicle architecture for space and terrestrial vehicles. GRC's Electrical and Electromagnetics Branch of the Avionics and Electrical Systems Division initiated the HPM Program for the GRC Technology Transfer and Partnership Office. HPM is the innovative integration of diverse, state-of-the-art power devices in an optimal configuration for space and terrestrial applications. The appropriate application and control of the various power devices significantly improves overall system performance and efficiency. The basic vehicle architecture consists of a primary power source, and possibly other power sources, providing all power to a common energy storage system, which is used to power the drive motors and vehicle accessory systems, as well as provide power as an emergency power system. Each component is independent, permitting it to be optimized for its intended purpose. This flexible vehicle architecture can be applied to all vehicles to considerably improve system efficiency, reliability, safety, security, and performance. This unique vehicle architecture has the potential to alleviate global energy concerns, improve the environment, stimulate the economy, and enable new missions.
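
    The energy-balance idea behind this architecture (a primary source charging a common store, which in turn supplies the drive and accessory loads) can be sketched with a toy simulation; all power levels, capacities, and names below are invented for illustration, not GRC figures.

```python
# Toy HPM-style energy balance: primary source -> common store -> loads.
def simulate(hours, source_kw=10.0, drive_kw=8.0, accessories_kw=1.0,
             capacity_kwh=20.0, soc_kwh=10.0):
    """March the store's state of charge forward in 1-hour steps,
    clamped between empty and full."""
    history = []
    for _ in range(hours):
        net_kw = source_kw - drive_kw - accessories_kw   # surplus into store
        soc_kwh = min(capacity_kwh, max(0.0, soc_kwh + net_kw * 1.0))
        history.append(soc_kwh)
    return history

# With a 1 kW surplus, the store climbs from 10 kWh toward 18 kWh in 8 hours
hist = simulate(8)
```

Because each component couples only to the common store, the source and loads can be resized independently, which is the flexibility the abstract emphasizes.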

  12. Will architecture win the technology wars?

    PubMed

    Alberthal, L; Manzi, J; Curtis, G; Davidow, W H; Timko, J W; Nadler, D; Davis, L L

    1993-01-01

    Success today flows to the company that establishes proprietary architectural control over a broad, fast-moving, competitive space, Charles R. Morris and Charles H. Ferguson claim in "How Architecture Wins Technology Wars" (March-April 1993). No single vendor can keep pace with the outpouring of cheap, powerful, mass-produced components, so customers have been stitching together their own local systems solutions. Architectures impose order on the system and make interconnections possible. An architectural controller has power over the standard by which the entire information package is assembled. Because of the popularity of Microsoft's Windows, for example, companies like Lotus must conform their software to its parameters to be able to compete for market share. Proprietary architectural control has broader implications for organizational structure too: architectural competition is giving rise to a new form of business organization. PMID:10126152

  13. Parallel Architecture For Robotics Computation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1990-01-01

    Universal Real-Time Robotic Controller and Simulator (URRCS) is highly parallel computing architecture for control and simulation of robot motion. Result of extensive algorithmic study of different kinematic and dynamic computational problems arising in control and simulation of robot motion. Study led to development of class of efficient parallel algorithms for these problems. Represents algorithmically specialized architecture, in sense capable of exploiting common properties of this class of parallel algorithms. System with both MIMD and SIMD capabilities. Regarded as processor attached to bus of external host processor, as part of bus memory.

  14. Electrical Grounding Architecture for Unmanned Spacecraft

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This handbook is approved for use by NASA Headquarters and all NASA Centers and is intended to provide a common framework for consistent practices across NASA programs. This handbook was developed to describe electrical grounding design architecture options for unmanned spacecraft. This handbook is written for spacecraft system engineers, power engineers, and electromagnetic compatibility (EMC) engineers. Spacecraft grounding architecture is a system-level decision which must be established at the earliest point in spacecraft design. All other grounding design must be coordinated with and be consistent with the system-level architecture. This handbook assumes that there is no one single 'correct' design for spacecraft grounding architecture. There have been many successful satellite and spacecraft programs from NASA, using a variety of grounding architectures with different levels of complexity. However, some design principles learned over the years apply to all types of spacecraft development. This handbook summarizes those principles to help guide spacecraft grounding architecture design for NASA and others.

  15. Automated Synthesis of Architecture of Avionic Systems

    NASA Technical Reports Server (NTRS)

    Chau, Savio; Xu, Joseph; Dang, Van; Lu, James F.

    2006-01-01

    The Architecture Synthesis Tool (AST) is software that automatically synthesizes software and hardware architectures of avionic systems. The AST is expected to be most helpful during initial formulation of an avionic-system design, when system requirements change frequently and manual modification of the architecture is time-consuming and susceptible to error. The AST comprises two parts: (1) an architecture generator, which utilizes a genetic algorithm to create a multitude of architectures; and (2) a functionality evaluator, which analyzes the architectures for viability, rejecting most of the non-viable ones. The functionality evaluator generates and uses a viability tree, a hierarchy representing functions and the components that perform those functions, such that the system as a whole performs system-level functions representing the requirements for the system as specified by a user. Architectures that survive the functionality evaluator are further evaluated by the selection process of the genetic algorithm. Architectures found to be most promising to satisfy the user's requirements and to perform optimally are selected as parents to the next generation of architectures. The foregoing process is iterated as many times as the user desires. The final output is one or a few viable architectures that satisfy the user's requirements.
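
    The generate-and-evaluate loop described above can be sketched as a toy genetic algorithm over a hypothetical component catalog. The component names, costs, and fitness function are invented for illustration and are not the AST's actual representation.

```python
import random

# Hypothetical catalog: component -> (system functions provided, cost)
CATALOG = {
    "flight_computer": ({"command", "telemetry"}, 5.0),
    "backup_computer": ({"command"}, 3.0),
    "star_tracker":    ({"attitude"}, 2.0),
    "imu":             ({"attitude"}, 1.5),
    "radio":           ({"telemetry", "uplink"}, 2.5),
}
REQUIRED = {"command", "telemetry", "attitude", "uplink"}
NAMES = sorted(CATALOG)

def fitness(genome):
    """Non-viable architectures score 0 (the 'functionality evaluator');
    viable ones score higher when cheaper."""
    provided = set().union(*(CATALOG[n][0] for n, g in zip(NAMES, genome) if g))
    if not REQUIRED <= provided:
        return 0.0
    return 1.0 / (1.0 + sum(CATALOG[n][1] for n, g in zip(NAMES, genome) if g))

def evolve(pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in NAMES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(NAMES))   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:               # mutation
                child[rng.randrange(len(NAMES))] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
chosen = {n for n, g in zip(NAMES, best) if g}
```

As in the AST, most non-viable candidates are discarded by the evaluator (fitness 0), and only survivors compete to parent the next generation.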

  16. Common cold

    MedlinePlus

    ... are the most common reason that children miss school and parents miss work. Parents often get colds ... other children. A cold can spread quickly through schools or daycares. Colds can occur at any time ...

  17. Language interoperability for high-performance parallel scientific components

    SciTech Connect

    Elliot, N; Kohn, S; Smolinski, B

    1999-05-18

    With the increasing complexity and interdisciplinary nature of scientific applications, code reuse is becoming increasingly important in scientific computing. One method for facilitating code reuse is the use of component technologies, which have been used widely in industry. However, components have only recently worked their way into scientific computing. Language interoperability is an important underlying technology for these component architectures. In this paper, we present an approach to language interoperability for a high-performance parallel component architecture being developed by the Common Component Architecture (CCA) group. Our approach is based on Interface Definition Language (IDL) techniques. We have developed a Scientific Interface Definition Language (SIDL), as well as bindings to C and Fortran. We have also developed a SIDL compiler and run-time library support for reference counting, reflection, object management, and exception handling (Babel). Results from using Babel to call a standard numerical solver library (written in C) from C and Fortran show that the cost of using Babel is minimal, whereas the savings in development time and the benefits of object-oriented development support for C and Fortran far outweigh the costs.
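
    One of the run-time services mentioned, reference counting, can be illustrated with a small sketch of the kind of shared handle a Babel-style runtime wraps around a native object. The class and method names here are hypothetical stand-ins, not Babel's actual generated API.

```python
# Hypothetical reference-counted handle around a "native" object.
class Handle:
    def __init__(self, native_obj):
        self._obj = native_obj
        self._refcount = 1

    def add_ref(self):
        """A second component sharing the handle bumps the count."""
        self._refcount += 1
        return self

    def delete_ref(self):
        """The native destructor would run only on the last release."""
        self._refcount -= 1
        if self._refcount == 0:
            self._obj = None

class Vector:
    """Stand-in for a native solver object behind the handle."""
    def __init__(self, data):
        self.data = list(data)
    def dot(self, other):
        return sum(x * y for x, y in zip(self.data, other.data))

h = Handle(Vector([1.0, 2.0, 3.0]))
shared = h.add_ref()                    # handle passed to another component
h.delete_ref()                          # first owner releases
still_alive = shared._obj is not None   # object survives the first release
result = shared._obj.dot(shared._obj)   # 1 + 4 + 9
shared.delete_ref()                     # last release frees the object
```

This is why callers in either C or Fortran can share one underlying object safely: ownership is tracked in the runtime, not in any single language binding.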

  18. Airport Surface Network Architecture Definition

    NASA Technical Reports Server (NTRS)

    Nguyen, Thanh C.; Eddy, Wesley M.; Bretmersky, Steven C.; Lawas-Grodek, Fran; Ellis, Brenda L.

    2006-01-01

    Currently, airport surface communications are fragmented across multiple types of systems. These communication systems for airport operations at most airports today are based on dedicated and separate architectures that cannot support system-wide interoperability and information sharing. The requirements placed upon the Communications, Navigation, and Surveillance (CNS) systems in airports are rapidly growing, and integration is urgently needed if the future vision of the National Airspace System (NAS) and the Next Generation Air Transportation System (NGATS) 2025 concept are to be realized. To address this and other problems such as airport surface congestion, the Space Based Technologies Project's Surface ICNS Network Architecture team at NASA Glenn Research Center has assessed airport surface communications requirements, analyzed existing and future surface applications, and defined a set of architecture functions that will help design a scalable, reliable, and flexible surface network architecture to meet the current and future needs of airport operations. This paper describes the systems approach, or methodology, to networking that was employed to assess airport surface communications requirements, analyze applications, and define the surface network architecture functions as the building blocks or components of the network. The systems approach used for defining these functions is relatively new to networking. It views the surface network, along with its environment (everything that the surface network interacts with or impacts), as a system. Associated with this system are sets of services that are offered by the network to the rest of the system. Therefore, the surface network is considered part of the larger system (such as the NAS), with interactions and dependencies between the surface network and its users, applications, and devices. The surface network architecture includes components such as addressing/routing, network management, network

  19. Evolution of System Architectures: Where Do We Need to Fail Next?

    NASA Astrophysics Data System (ADS)

    Bermudez, Luis; Alameh, Nadine; Percivall, George

    2013-04-01

    Innovation requires testing and failing. Thomas Edison was right when he said "I have not failed. I've just found 10,000 ways that won't work". For innovation and improvement of standards to happen, service architectures have to be tested and tested again. Within the Open Geospatial Consortium (OGC), testing of service architectures has occurred for the last 15 years. This talk will present an evolution of these service architectures and a possible future path. OGC is a global forum for the collaboration of developers and users of spatial data products and services, and for the advancement and development of international standards for geospatial interoperability. The OGC Interoperability Program is a series of hands-on, fast-paced engineering initiatives to accelerate the development and acceptance of OGC standards. Each initiative is organized in threads that provide focus under a particular theme. The first testbed, OGC Web Services Phase 1, completed in 2003, had four threads: Common Architecture, Web Mapping, Sensor Web, and Web Imagery Enablement. The Common Architecture was a cross-thread theme, ensuring that the Web Mapping and Sensor Web experiments built on a common base architecture. The architecture was based on the three main SOA components: Broker, Requestor, and Provider. It proposed a general service model defining service interactions and dependencies; categorization of service types; registries to allow discovery and access of services; data models and encodings; and common services (WMS, WFS, WCS). For the latter, there was a clear distinction among the different services: Data Services (e.g. WMS), Application Services (e.g. coordinate transformation), and server-side client applications (e.g. image exploitation). The latest testbed, OGC Web Services Phase 9, completed in 2012, had five threads: Aviation, Cross-Community Interoperability (CCI), Security and Services Interoperability (SSI), OWS Innovations, and Compliance & Interoperability Testing & Evaluation

  20. Component-based integration of chemistry and optimization software.

    PubMed

    Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L

    2004-11-15

    Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform. PMID:15362128
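
    The central design idea, abstract interfaces that let chemistry and mathematics packages be swapped interchangeably, can be sketched as follows. The interface, the toy "chemistry" model, and the steepest-descent optimizer are illustrative stand-ins, not the paper's actual CCA/TAO/NWChem interfaces.

```python
from abc import ABC, abstractmethod

# Hypothetical abstract interface for energy and energy-derivative evaluation.
class EnergyModel(ABC):
    @abstractmethod
    def energy(self, coords): ...
    @abstractmethod
    def gradient(self, coords): ...

class QuadraticModel(EnergyModel):
    """Toy stand-in for a chemistry package's energy evaluator."""
    def energy(self, x):
        return sum(xi ** 2 for xi in x)
    def gradient(self, x):
        return [2.0 * xi for xi in x]

def optimize(model, x, lr=0.1, steps=100):
    """Steepest descent; a real assembly would plug a TAO-style solver in
    here, talking to the model only through the abstract interface."""
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, model.gradient(x))]
    return x

xmin = optimize(QuadraticModel(), [1.0, -2.0])
```

Any class implementing the same two methods could replace `QuadraticModel` without touching the optimizer, which is the interchangeability the component interfaces provide.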

  1. Component-Based Software for High-Performance Scientific Computing

    SciTech Connect

    Alexeev, Yuri; Allan, Benjamin A.; Armstrong, Robert C.; Bernholdt, David E.; Dahlgren, Tamara L.; Gannon, Dennis B.; Janssen, Curtis; Kenny, Joseph P.; Krishnan, Manoj Kumar; Kohl, James A.; Kumfert, Gary K.; McInnes, Lois C.; Nieplocha, Jarek; Parker, Steven G.; Rasmussen, Craig; Windus, Theresa L.

    2005-06-26

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  2. The UAS control segment architecture: an overview

    NASA Astrophysics Data System (ADS)

    Gregory, Douglas A.; Batavia, Parag; Coats, Mark; Allport, Chris; Jennings, Ann; Ernst, Richard

    2013-05-01

    The Under Secretary of Defense (Acquisition, Technology and Logistics) directed the Services in 2009 to jointly develop and demonstrate a common architecture for command and control of Department of Defense (DoD) Unmanned Aircraft Systems (UAS) Groups 2 through 5. The UAS Control Segment (UCS) Architecture is an architecture framework for specifying and designing the software-intensive capabilities of current and emerging UCS systems in the DoD inventory. The UCS Architecture is based on Service Oriented Architecture (SOA) principles that will be adopted by each of the Services as a common basis for acquiring, integrating, and extending the capabilities of the UAS Control Segment. The UAS Task Force established the UCS Working Group to develop and support the UCS Architecture. The Working Group currently has over three hundred members, and is open to qualified representatives from DoD-approved defense contractors, academia, and the Government. The UCS Architecture is currently at Release 2.2, with Release 3.0 planned for July 2013. This paper discusses the current and planned elements of the UCS Architecture, and related activities of the UCS Community of Interest.

  3. Nonlinear principal component analysis of climate data

    SciTech Connect

    Boyle, J.; Sengupta, S.

    1995-06-01

    This paper presents the details of the nonlinear principal component analysis of climate data. Topics discussed include: connection with principal component analysis; network architecture; analysis of the standard routine (PRINC); and results.
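
    Nonlinear PCA in this autoassociative-network sense can be sketched as an autoencoder with a one-unit bottleneck fit to 2-D data lying on a curve. The layer sizes, finite-difference training loop, and synthetic data below are illustrative choices, not the PRINC routine analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-1.0, 1.0, 60)
X = np.column_stack([t, t ** 2])          # curved 1-D structure in 2-D
X -= X.mean(axis=0)

SIZES = [(2, 4), (4, 1), (1, 4), (4, 2)]  # encoder 2-4-1, decoder 1-4-2
NPAR = sum(i * o + o for i, o in SIZES)

def forward(p, X):
    h, k = X, 0
    for layer, (nin, nout) in enumerate(SIZES):
        W = p[k:k + nin * nout].reshape(nin, nout); k += nin * nout
        b = p[k:k + nout]; k += nout
        h = h @ W + b
        if layer < len(SIZES) - 1:
            h = np.tanh(h)                # nonlinearity except at the output
    return h

def loss(p):
    return float(np.mean((forward(p, X) - X) ** 2))

p = 0.5 * rng.standard_normal(NPAR)
loss0, eps = loss(p), 1e-5
for _ in range(150):
    g = np.array([(loss(p + eps * e) - loss(p - eps * e)) / (2 * eps)
                  for e in np.eye(NPAR)])  # finite differences: sketch only
    step = 0.5
    while step > 1e-8 and loss(p - step * g) >= loss(p):
        step *= 0.5                        # crude backtracking line search
    if loss(p - step * g) < loss(p):
        p = p - step * g
final = loss(p)
```

The one-unit bottleneck forces the network to learn a nonlinear curve through the data, which is what distinguishes this approach from linear PCA's straight principal axis.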

  4. Performance measurement and modeling of component applications in a high performance computing environment : a case study.

    SciTech Connect

    Armstrong, Robert C.; Ray, Jaideep; Malony, A.; Shende, Sameer; Trebon, Nicholas D.

    2003-11-01

    We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and to construct performance models for two of them. Both computational and message-passing performance are addressed.
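
    A minimal version of such measurement infrastructure is a proxy that wraps a component's methods and records wall-clock timings per call. The component and method names below are hypothetical, and real CCA measurement work interposes at the component-framework level rather than with a Python decorator.

```python
import time
from functools import wraps

TIMINGS = {}   # method name -> list of per-call wall-clock durations

def measured(fn):
    """Wrap a component method so every call is timed."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            TIMINGS.setdefault(fn.__name__, []).append(
                time.perf_counter() - t0)
    return wrapper

class IntegratorComponent:
    """Hypothetical component whose invocations we want to model."""
    @measured
    def advance(self, state, dt):
        return [s + dt * s for s in state]

comp = IntegratorComponent()
state = [1.0, 2.0]
for _ in range(5):
    state = comp.advance(state, 0.01)
```

Collecting per-invocation samples like these is the raw material for the per-component performance models the paper constructs.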

  5. CONRAD Software Architecture

    NASA Astrophysics Data System (ADS)

    Guzman, J. C.; Bennett, T.

    2008-08-01

    The Convergent Radio Astronomy Demonstrator (CONRAD) is a collaboration between the computing teams of two SKA pathfinder instruments, MeerKAT (South Africa) and ASKAP (Australia). Our goal is to produce the required common software to operate, process and store the data from the two instruments. Both instruments are synthesis arrays composed of a large number of antennas (40 - 100) operating at centimeter wavelengths with wide-field capabilities. Key challenges are the processing of high volume of data in real-time as well as the remote mode of operations. Here we present the software architecture for CONRAD. Our design approach is to maximize the use of open solutions and third-party software widely deployed in commercial applications, such as SNMP and LDAP, and to utilize modern web-based technologies for the user interfaces, such as AJAX.

  6. Most genetic risk for autism resides with common variation.

    PubMed

    Gaugler, Trent; Klei, Lambertus; Sanders, Stephan J; Bodea, Corneliu A; Goldberg, Arthur P; Lee, Ann B; Mahajan, Milind; Manaa, Dina; Pawitan, Yudi; Reichert, Jennifer; Ripke, Stephan; Sandin, Sven; Sklar, Pamela; Svantesson, Oscar; Reichenberg, Abraham; Hultman, Christina M; Devlin, Bernie; Roeder, Kathryn; Buxbaum, Joseph D

    2014-08-01

    A key component of genetic architecture is the allelic spectrum influencing trait variability. For autism spectrum disorder (herein termed autism), the nature of the allelic spectrum is uncertain. Individual risk-associated genes have been identified from rare variation, especially de novo mutations. From this evidence, one might conclude that rare variation dominates the allelic spectrum in autism, yet recent studies show that common variation, individually of small effect, has substantial impact en masse. At issue is how much of an impact relative to rare variation this common variation has. Using a unique epidemiological sample from Sweden, new methods that distinguish total narrow-sense heritability from that due to common variation and synthesis of results from other studies, we reach several conclusions about autism's genetic architecture: its narrow-sense heritability is ∼52.4%, with most due to common variation, and rare de novo mutations contribute substantially to individual liability, yet their contribution to variance in liability, 2.6%, is modest compared to that for heritable variation. PMID:25038753
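
    The abstract's variance figures support a back-of-envelope partition of liability variance, under the simplifying assumption that the additive (heritable) and de novo components are non-overlapping; the paper's actual model is more careful than this arithmetic.

```python
# Figures taken from the abstract, in percent of variance in liability.
narrow_sense_h2 = 52.4   # narrow-sense heritability; mostly common variation
de_novo = 2.6            # contribution of rare de novo mutations
remainder = 100.0 - narrow_sense_h2 - de_novo   # environment + unmodeled terms
ratio = narrow_sense_h2 / de_novo               # heritable vs de novo, ~20x
```

The roughly 20-to-1 ratio is the abstract's point: de novo mutations matter greatly for individual liability while contributing modestly to population variance.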

  7. Component-specific modeling

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.

    1985-01-01

    Accomplishments are described for the second-year effort of a 3-year program to develop methodology for component-specific modeling of aircraft engine hot section components (turbine blades, turbine vanes, and burner liners). These accomplishments include: (1) engine thermodynamic and mission models; (2) geometry model generators; (3) remeshing; (4) specialty 3-D inelastic structural analysis; (5) computationally efficient solvers; (6) adaptive solution strategies; (7) engine performance parameters/component response variables decomposition and synthesis; (8) integrated software architecture and development; and (9) validation cases for the software developed.

  8. Making the Common Good Common

    ERIC Educational Resources Information Center

    Chase, Barbara

    2011-01-01

    How are independent schools to be useful to the wider world? Beyond their common commitment to educate their students for meaningful lives in service of the greater good, can they educate a broader constituency and, thus, share their resources and skills more broadly? Their answers to this question will be shaped by their independence. Any…

  9. Post and Lintel Architecture

    ERIC Educational Resources Information Center

    Daniel, Robert A.

    1973-01-01

    Author finds that children understand architectural concepts more readily when he refers to familiar non-architectural examples of them such as goal posts, chairs, tables, and playground equipment. (GB)

  10. New computer architectures

    SciTech Connect

    Tiberghien, J.

    1984-01-01

    This book presents papers on supercomputers. Topics considered include decentralized computer architecture, new programming languages, data flow computers, reduction computers, parallel prefix calculations, structural and behavioral descriptions of digital systems, instruction sets, software generation, personal computing, and computer architecture education.

  11. Efficient multiprocessor architecture for digital signal processing

    SciTech Connect

    Auguin, M.; Boeri, F.

    1982-01-01

    There is continuing pressure for better processing performance in numerical signal processing. Effective utilization of LSI semiconductor technology allows the consideration of multiprocessor architectures. The problem of interconnecting the components of the architecture arises. The authors describe a control algorithm for the Benes interconnection network in an asynchronous multiprocessor system. A simulation study of the time-shared bus, the omega network, the Benes network, and the crossbar network gives a comparison of performances. 8 references.

  12. Architecture of a distributed multimission operations system

    NASA Technical Reports Server (NTRS)

    Yamada, Takahiro

    1994-01-01

    This paper presents an architecture for developing multimission operations systems, which we call DIOSA. In this architecture, a component used as a building block is called a functional block. Each functional block has a standard structure, and the interfaces between functional blocks are defined with a set of standard protocols. This paper shows the structure of the database used by functional blocks, the structure of the interfaces between functional blocks, and the structure of system management. Finally, examples of typical functional blocks and an example of a system constructed with this architecture are shown.

  13. High performance parallel architectures

    SciTech Connect

    Anderson, R.E. )

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user/programmer's point of view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  14. UMTS network architecture

    NASA Astrophysics Data System (ADS)

    Katoen, J. P.; Saiedi, A.; Baccaro, I.

    1994-05-01

    This paper proposes a Functional Architecture and a corresponding Network Architecture for the Universal Mobile Telecommunication System (UMTS). Procedures like call handling, location management, and handover are considered. The architecture covers the domestic, business, and public environments. Integration with existing and forthcoming networks for fixed communications is anticipated and the Intelligent Network (IN) philosophy is applied.

  15. The EPSILON-2 hybrid dataflow architecture

    SciTech Connect

    Grafe, V.G.; Hoch, J.E.

    1989-11-08

    EPSILON-2 is a general parallel computer architecture that combines the fine grain parallelism of dataflow computing with the sequential efficiency common to von Neumann computing. Instruction level synchronization, single cycle context switches, and RISC-like sequential efficiency are all supported in EPSILON-2. The general parallel computing model of EPSILON-2 is described, followed by a description of the processing element architecture. A sample code is presented in detail, and the progress of the physical implementation discussed. 11 refs., 14 figs.
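
    The dataflow firing rule described above (an instruction executes only once all of its operand tokens have arrived) can be sketched with a tiny interpreter. The graph, operations, and constant values are invented for illustration and are not EPSILON-2's actual instruction set.

```python
from collections import defaultdict, deque

# node -> (operation, list of input nodes); computes (a + b) squared
GRAPH = {
    "a":   ("const", []),
    "b":   ("const", []),
    "sum": ("add",   ["a", "b"]),
    "sq":  ("mul",   ["sum", "sum"]),
}
CONSTS = {"a": 3, "b": 4}
OPS = {"add": lambda x, y: x + y, "mul": lambda x, y: x * y}

def run(graph):
    consumers = defaultdict(list)
    for node, (_, inputs) in graph.items():
        for slot, src in enumerate(inputs):
            consumers[src].append((node, slot))
    tokens = {n: [None] * len(graph[n][1]) for n in graph}
    arrived = {n: 0 for n in graph}
    values = {}
    ready = deque(n for n, (_, ins) in graph.items() if not ins)
    while ready:
        n = ready.popleft()
        op, _ = graph[n]
        values[n] = CONSTS[n] if op == "const" else OPS[op](*tokens[n])
        for dest, slot in consumers[n]:        # forward result tokens
            tokens[dest][slot] = values[n]
            arrived[dest] += 1
            if arrived[dest] == len(graph[dest][1]):
                ready.append(dest)             # all operands present: fire
    return values

values = run(GRAPH)
```

The token-arrival counter is the software analogue of the instruction-level synchronization the architecture supports in hardware; a hybrid design like EPSILON-2 additionally runs each fired instruction sequence with von Neumann efficiency.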

  16. Generic architectures for future flight systems

    NASA Technical Reports Server (NTRS)

    Wood, Richard J.

    1992-01-01

    Generic architectures for future flight systems must be based on open system architectures (OSA). This gives the developer and integrator the flexibility to optimize the hardware and software systems to match diverse and unique application requirements. When developed properly, OSA provides interoperability, commonality, graceful upgradability, survivability, and hardware/software transportability to greatly reduce life cycle costs and improve supportability. Architectural flexibility to take advantage of commercial developments can be achieved by basing these developments on vendor-neutral, commercially accepted standards and protocols. Rome Laboratory presently has a program that addresses requirements for OSA.

  17. Space Telecommunications Radio Architecture (STRS): Technical Overview

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    2006-01-01

    A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture, and other aspects is considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing, and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass, power consumption, and available technology. An SDR architecture for space must recognize and address the constraints of spaceflight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, industry, academia, and standards bodies is key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk will describe NASA's current effort to investigate SDR applications to space missions and give a brief overview of a candidate architecture under consideration for space-based platforms.

  18. A component-based problem list subsystem for the HOLON testbed. Health Object Library Online.

    PubMed Central

    Law, V.; Goldberg, H. S.; Jones, P.; Safran, C.

    1998-01-01

    One of the deliverables of the HOLON (Health Object Library Online) project is the specification of a reference architecture for clinical information systems that facilitates the development of a variety of discrete, reusable software components. One of the challenges facing the HOLON consortium is determining what kinds of components can be made available in a library for developers of clinical information systems. To further explore the use of component architectures in the development of reusable clinical subsystems, we have incorporated ongoing work in the development of enterprise terminology services into a Problem List subsystem for the HOLON testbed. We have successfully implemented a set of components using CORBA (Common Object Request Broker Architecture) and Java distributed object technologies that provide a functional problem list application and UMLS-based "Problem Picker." Through this development, we have overcome a variety of obstacles characteristic of rapidly emerging technologies, and have identified architectural issues necessary to scale these components for use and reuse within an enterprise clinical information system. PMID:9929252

  19. Common modeling system for digital simulation

    NASA Technical Reports Server (NTRS)

    Painter, Rick

    1994-01-01

    The Joint Modeling and Simulation System (J-MASS) is a tri-service investigation into a common modeling framework for the development of digital models. The basis for the success of this framework is an X-window-based, open-systems-architecture, object-based/oriented, standard-interface approach to digital model construction, configuration, execution, and post-processing. For years, Department of Defense (DOD) agencies have produced various weapon systems/technologies and, typically, digital representations of those systems/technologies. These digital representations (models) have also been developed for other reasons, such as studies and analysis, Cost and Operational Effectiveness Analysis (COEA) tradeoffs, etc. Unfortunately, there have been no Modeling and Simulation (M&S) standards, guidelines, or efforts toward commonality in DOD M&S. The typical scenario is that an organization hires a contractor to build hardware, and in doing so a digital model may be constructed. Until recently, this model was not even obtained by the organization. Even if it was procured, it was on a unique platform, in a unique language, with unique interfaces, and with the result being UNIQUE maintenance requirements. Additionally, the constructors of the model expended more effort in writing the 'infrastructure' of the model/simulation (e.g., user interface, database/database management system, data journaling/archiving, graphical presentations, environment characteristics, other components in the simulation, etc.) than in producing the model of the desired system. Other side effects include duplication of effort, varying assumptions, lack of credibility/validation, and decentralization in policy and execution. J-MASS provides the infrastructure, standards, toolset, and architecture to permit M&S developers and analysts to concentrate on their area of interest.

  20. Analyzing Commonality In A System

    NASA Technical Reports Server (NTRS)

    Pacheco, Alfred; Pool, Kevin

    1988-01-01

    Cost decreased by use of fewer types of parts. System Commonality Analysis Tool (SCAT) computer program designed to aid managers and engineers in identifying common, potentially common, and unique components of system. Incorporates three major functions: program for creation and maintenance of data base, analysis of commonality, and such system utilities as host-operating-system commands and loading and unloading of data base. Produces reports tabulating maintenance, initial configurations, and expected total costs. Written in FORTRAN 77.
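
The commonality analysis the abstract describes can be sketched compactly. The function and part numbers below are purely illustrative and do not reproduce SCAT's actual interfaces (the original program was written in FORTRAN 77):

```python
from collections import Counter

def classify_components(systems):
    """Label each part as 'common' (in every system), 'unique' (in exactly
    one), or 'shared' (in some but not all), given {system: set_of_parts}."""
    counts = Counter(part for parts in systems.values() for part in parts)
    n = len(systems)
    return {part: "common" if used == n else "unique" if used == 1 else "shared"
            for part, used in counts.items()}

systems = {
    "avionics": {"P-100", "P-200", "P-300"},
    "ground":   {"P-100", "P-400"},
    "test_rig": {"P-100", "P-200"},
}
labels = classify_components(systems)
# P-100 is used by all three systems -> "common"
# P-400 appears in only one system -> "unique"; P-200 in two of three -> "shared"
```

A report generator of the kind the abstract mentions would then tabulate these labels against maintenance and cost data.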

  1. Domain specific software architectures: Command and control

    NASA Technical Reports Server (NTRS)

    Braun, Christine; Hatch, William; Ruegsegger, Theodore; Balzer, Bob; Feather, Martin; Goldman, Neil; Wile, Dave

    1992-01-01

    GTE is the Command and Control contractor for the Domain Specific Software Architectures program. The objective of this program is to develop and demonstrate an architecture-driven, component-based capability for the automated generation of command and control (C2) applications. Such a capability will significantly reduce the cost of C2 applications development and will lead to improved system quality and reliability through the use of proven architectures and components. A major focus of GTE's approach is the automated generation of application components in particular subdomains. Our initial work in this area has concentrated in the message handling subdomain; we have defined and prototyped an approach that can automate one of the most software-intensive parts of C2 systems development. This paper provides an overview of the GTE team's DSSA approach and then presents our work on automated support for message processing.

  2. Component-specific modeling. [jet engine hot section components

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.; Maffeo, R. J.; Tipton, M. T.; Weber, G.

    1992-01-01

    Accomplishments are described for a 3 year program to develop methodology for component-specific modeling of aircraft hot section components (turbine blades, turbine vanes, and burner liners). These accomplishments include: (1) engine thermodynamic and mission models, (2) geometry model generators, (3) remeshing, (4) specialty three-dimensional inelastic structural analysis, (5) computationally efficient solvers, (6) adaptive solution strategies, (7) engine performance parameters/component response variables decomposition and synthesis, (8) integrated software architecture and development, and (9) validation cases for software developed.

  3. Java based open architecture controller

    SciTech Connect

    Weinert, G F

    2000-01-13

    At Lawrence Livermore National Laboratory (LLNL) the authors have been developing an open architecture machine tool controller. This work has been patterned after the General Motors (GM)-led Open Modular Architecture Controller (OMAC) effort, in which they have been involved since its inception. The OMAC work has centered on creating sets of implementation-neutral application programming interfaces (APIs) for machine control software components. In the work at LLNL, they were among the early adopters of the Java programming language. As an application programming language, it is particularly well suited to component software development. The language contains many features which, along with a well-defined implementation API (such as the OMAC APIs), allow third-party binary files to be integrated into a working system. Because of its interpreted nature, Java allows rapid integration testing of components. However, for real-time systems development, the Java programming language presents many drawbacks. For instance, the lack of well-defined scheduling semantics and threading behavior can present many unwanted challenges, and the interpreted nature of the standard Java Virtual Machine (JVM) imposes an immediate performance penalty. Various real-time Java vendors are currently addressing some of these drawbacks. The various pluses and minuses of using the Java programming language and environment, with regard to a component-based controller, will be outlined.
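
The core idea of an implementation-neutral component API can be illustrated in a few lines. The interface and class names below are hypothetical, not the actual OMAC APIs, and Python stands in here for Java:

```python
from abc import ABC, abstractmethod

class AxisController(ABC):
    """A minimal, vendor-neutral API a machine-tool axis component must satisfy."""
    @abstractmethod
    def move_to(self, position_mm: float) -> None: ...
    @abstractmethod
    def position(self) -> float: ...

class SimulatedAxis(AxisController):
    """A third-party implementation, integrated purely through the interface."""
    def __init__(self):
        self._pos = 0.0
    def move_to(self, position_mm: float) -> None:
        self._pos = position_mm
    def position(self) -> float:
        return self._pos

def jog(axis: AxisController, delta_mm: float) -> float:
    # Controller logic depends only on the API, not on the implementation,
    # so any conforming component can be swapped in.
    axis.move_to(axis.position() + delta_mm)
    return axis.position()

axis = SimulatedAxis()
jog(axis, 5.0)
jog(axis, -2.0)
# axis.position() is now 3.0
```

This is the property the abstract highlights: given a well-defined API, third-party components can be integrated into a working system without source access.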

  4. An Object-Oriented Network-Centric Software Architecture for Physical Computing

    NASA Astrophysics Data System (ADS)

    Palmer, Richard

    1997-08-01

    Recent developments in object-oriented computer languages and infrastructure such as the Internet, Web browsers, and the like provide an opportunity to define a more productive computational environment for scientific programming that is based more closely on the underlying mathematics describing physics than traditional programming languages such as FORTRAN or C++. In this talk I describe an object-oriented software architecture for representing physical problems that includes classes for such common mathematical objects as geometry, boundary conditions, partial differential and integral equations, discretization and numerical solution methods, etc. In practice, a scientific program written using this architecture looks remarkably like the mathematics used to understand the problem, is typically an order of magnitude smaller than traditional FORTRAN or C++ codes, and hence is easier to understand, debug, describe, etc. All objects in this architecture are ``network-enabled,'' which means that components of a software solution to a physical problem can be transparently loaded from anywhere on the Internet or other global network. The architecture is expressed as an ``API,'' or application programmer's interface specification, with reference embeddings in Java, Python, and C++. A C++ class library for an early version of this API has been implemented for machines ranging from PCs to the IBM SP2, meaning that identical codes run on all architectures.
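
A minimal sketch of the claim that such code "looks like the mathematics": the class names below (Interval, Dirichlet, Poisson1D) are invented for illustration and are not the paper's actual API. The sketch solves -u'' = f on an interval by second-order finite differences:

```python
class Interval:
    """Geometry plus a uniform discretization of [a, b] with n points."""
    def __init__(self, a, b, n):
        self.h = (b - a) / (n - 1)
        self.x = [a + i * self.h for i in range(n)]

class Dirichlet:
    """Boundary conditions u(a) = left, u(b) = right."""
    def __init__(self, left, right):
        self.left, self.right = left, right

class Poisson1D:
    """-u'' = f with Dirichlet data, solved by finite differences
    (Thomas algorithm for the resulting tridiagonal system)."""
    def __init__(self, geom, bc, f):
        self.geom, self.bc, self.f = geom, bc, f
    def solve(self):
        x, h = self.geom.x, self.geom.h
        m = len(x) - 2                       # interior unknowns
        # stencil: -u_{i-1} + 2 u_i - u_{i+1} = h^2 f_i
        a = [-1.0] * m; b = [2.0] * m; c = [-1.0] * m
        d = [h * h * self.f(x[i + 1]) for i in range(m)]
        d[0] += self.bc.left; d[-1] += self.bc.right
        for i in range(1, m):                # forward elimination
            w = a[i] / b[i - 1]
            b[i] -= w * c[i - 1]
            d[i] -= w * d[i - 1]
        u = [0.0] * m                        # back substitution
        u[-1] = d[-1] / b[-1]
        for i in range(m - 2, -1, -1):
            u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
        return [self.bc.left] + u + [self.bc.right]

# The problem statement reads like the mathematics: -u'' = 2, u(0) = u(1) = 0.
u = Poisson1D(Interval(0.0, 1.0, 51), Dirichlet(0.0, 0.0), lambda x: 2.0).solve()
```

For f = 2 with zero boundary data the exact solution is u(x) = x(1 - x), which the second-order scheme reproduces exactly since the solution is quadratic.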

  5. Most genetic risk for autism resides with common variation

    PubMed Central

    Gaugler, Trent; Klei, Lambertus; Sanders, Stephan J.; Bodea, Corneliu A.; Goldberg, Arthur P.; Lee, Ann B.; Mahajan, Milind; Manaa, Dina; Pawitan, Yudi; Reichert, Jennifer; Ripke, Stephan; Sandin, Sven; Sklar, Pamela; Svantesson, Oscar; Reichenberg, Abraham; Hultman, Christina M.; Devlin, Bernie

    2014-01-01

    A key component of genetic architecture is the allelic spectrum influencing trait variability. For autism spectrum disorder (henceforth autism) the nature of its allelic spectrum is uncertain. Individual risk genes have been identified from rare variation, especially de novo mutations1–8. From this evidence one might conclude that rare variation dominates its allelic spectrum, yet recent studies show that common variation, individually of small effect, has substantial impact en masse9,10. At issue is how much of an impact relative to rare variation. Using a unique epidemiological sample from Sweden, novel methods that distinguish total narrow-sense heritability from that due to common variation, and by synthesizing results from other studies, we reach several conclusions about autism’s genetic architecture: its narrow-sense heritability is ≈54% and most traces to common variation; rare de novo mutations contribute substantially to individuals’ liability; still their contribution to variance in liability, 2.6%, is modest compared to heritable variation. PMID:25038753

  6. Analogy, Cognitive Architecture and Universal Construction: A Tale of Two Systematicities

    PubMed Central

    Phillips, Steven

    2014-01-01

    Cognitive science recognizes two kinds of systematicity: (1) as the property where certain cognitive capacities imply certain other related cognitive capacities (Fodor and Pylyshyn); and (2) as the principle that analogical mappings based on collections of connected relations are preferred over relations in isolation (Gentner). Whether these kinds of systematicity are two aspects of a deeper property of cognition is hitherto unknown. Here, it is shown that both derive from the formal, category-theoretic notion of universal construction. In conceptual/psychological terms, a universal construction is a form of optimization of cognitive resources: optimizing the re-utilization of common component processes for common task components. Systematic cognitive capacity and the capacity for analogy are hallmarks of human cognition, which suggests that universal constructions (in the category-theoretic sense) are a crucial component of human cognitive architecture. PMID:24586555

  7. Open Architecture SDR for Space

    NASA Technical Reports Server (NTRS)

    Smith, Carl; Long, Chris; Liebetreu, John; Reinhart, Richard C.

    2005-01-01

    This paper describes an open-architecture SDR (software defined radio) infrastructure that is suitable for space-based operations (Space-SDR). SDR technologies will endow space and planetary exploration systems with dramatically increased capability, reduced power consumption, and significantly less mass than conventional systems, at costs reduced by vigorous competition, hardware commonality, dense integration, reduced obsolescence, interoperability, and software re-use. Significant progress has been recorded on developments like the Joint Tactical Radio System (JTRS) Software Communications Architecture (SCA), which is oriented toward reconfigurable radios for defense forces operating in multiple theaters of engagement. The JTRS SCA presents a consistent software interface for waveform development, and facilitates interoperability, waveform portability, software re-use, and technology evolution.

  8. The Component-Based Application for GAMESS

    SciTech Connect

    Peng, Fang

    2007-01-01

    GAMESS, a quantum chemistry program for electronic structure calculations, has been freely shared by high-performance application scientists for over twenty years. It provides a rich set of functionalities and can be run on a variety of parallel platforms through a distributed data interface. While a chemistry computation is sophisticated and hard to develop, resource sharing among different chemistry packages accelerates the development of new computations and encourages cooperation among scientists from universities and laboratories. The Common Component Architecture (CCA) offers an environment that allows scientific packages to interact dynamically with each other through components, enabling dynamic coupling of GAMESS with other chemistry packages, such as MPQC and NWChem. Conceptually, a computation can be constructed from "plug-and-play" components drawn from scientific packages, but this requires more than componentizing the functions/subroutines of interest, especially for large-scale scientific packages with a long development history. In this research, we present our efforts to construct components for GAMESS that conform to the CCA specification. The goal is to enable fine-grained interoperability between three quantum chemistry programs, GAMESS, MPQC, and NWChem, via components. We focus on one of the three packages, GAMESS; delineate the structure of GAMESS computations; and then describe our approaches to its component development. We then use GAMESS as the driver to interoperate integral components from the other two packages, and show solutions to interoperability problems along with preliminary results. To demonstrate the versatility of the design, the Tuning and Analysis Utilities (TAU) components have been coupled with GAMESS and its components, so that the performance of GAMESS and its components may be analyzed for a wide range of system parameters.
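
The "plug-and-play" component idea can be sketched as follows. The port and class names are illustrative only and do not reproduce the actual CCA/Babel interfaces or any real chemistry package:

```python
class IntegralEvaluatorPort:
    """An interface ('port') that a chemistry package may provide."""
    def overlap(self, i, j):
        raise NotImplementedError

class PackageA(IntegralEvaluatorPort):
    """Stands in for one package's integral component (toy values only)."""
    def overlap(self, i, j):
        return 1.0 if i == j else 0.1

class Driver:
    """Uses an IntegralEvaluatorPort without knowing which package provides it."""
    def __init__(self):
        self._ports = {}
    def connect(self, name, port):          # the framework's wiring step
        self._ports[name] = port
    def trace(self, n):
        ev = self._ports["integrals"]
        return sum(ev.overlap(i, i) for i in range(n))

driver = Driver()
driver.connect("integrals", PackageA())     # swap in another package's port later
# driver.trace(3) sums the diagonal overlaps -> 3.0
```

The point mirrored from the abstract: the driver is written against the port, so an integral component from a different package can be connected without changing the driver.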

  9. Architecture for Survivable System Processing (ASSP)

    NASA Astrophysics Data System (ADS)

    Wood, Richard J.

    1991-11-01

    The Architecture for Survivable System Processing (ASSP) Program is a multi-phase effort to implement Department of Defense (DOD) and commercially developed high-tech hardware, software, and architectures for reliable space avionics and ground-based systems. System configuration options provide processing capabilities to address Time Dependent Processing (TDP), Object Dependent Processing (ODP), and Mission Dependent Processing (MDP) requirements through Open System Architecture (OSA) alternatives that allow for the enhancement, incorporation, and capitalization of a broad range of development assets. High-technology developments in hardware, software, and networking models address the technology challenges of long processor lifetimes, fault tolerance, reliability, throughput, memories, radiation hardening, size, weight, power (SWAP), and security. Hardware and software design, development, and implementation focus on the interconnectivity/interoperability of an open system architecture, applying new technology in practical OSA components. To ensure a widely acceptable architecture capable of interfacing with various commercial and military components, the program provides for regular interaction with standardization working groups, e.g., the International Standards Organization (ISO), the American National Standards Institute (ANSI), the Society of Automotive Engineers (SAE), and the Institute of Electrical and Electronics Engineers (IEEE). Selection of a viable open architecture is based on the widely accepted standards that implement the ISO/OSI Reference Model.

  10. Architecture for Survivable System Processing (ASSP)

    NASA Technical Reports Server (NTRS)

    Wood, Richard J.

    1991-01-01

    The Architecture for Survivable System Processing (ASSP) Program is a multi-phase effort to implement Department of Defense (DOD) and commercially developed high-tech hardware, software, and architectures for reliable space avionics and ground-based systems. System configuration options provide processing capabilities to address Time Dependent Processing (TDP), Object Dependent Processing (ODP), and Mission Dependent Processing (MDP) requirements through Open System Architecture (OSA) alternatives that allow for the enhancement, incorporation, and capitalization of a broad range of development assets. High-technology developments in hardware, software, and networking models address the technology challenges of long processor lifetimes, fault tolerance, reliability, throughput, memories, radiation hardening, size, weight, power (SWAP), and security. Hardware and software design, development, and implementation focus on the interconnectivity/interoperability of an open system architecture, applying new technology in practical OSA components. To ensure a widely acceptable architecture capable of interfacing with various commercial and military components, the program provides for regular interaction with standardization working groups, e.g., the International Standards Organization (ISO), the American National Standards Institute (ANSI), the Society of Automotive Engineers (SAE), and the Institute of Electrical and Electronics Engineers (IEEE). Selection of a viable open architecture is based on the widely accepted standards that implement the ISO/OSI Reference Model.

  11. Common world model for unmanned systems

    NASA Astrophysics Data System (ADS)

    Dean, Robert Michael S.

    2013-05-01

    The Robotic Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities. Key to this effort is the Common World Model, which moves beyond the state of the art by representing the world using metric, semantic, and symbolic information. It joins these layers of information to define objects in the world, which may then be reasoned upon jointly using traditional geometric algorithms, symbolic cognitive algorithms, and new computational nodes formed by combining these disciplines. The Common World Model must also understand how these objects relate to each other. Our world model includes the concept of Self-Information about the robot: by encoding current capability, component status, task execution state, and histories, we track information which enables the robot to reason and adapt its performance using Meta-Cognition and Machine Learning principles. The world model also includes models of how aspects of the environment behave, which enable prediction of future world states. To manage complexity, we adopted a phased implementation approach to the world model. We discuss the design of "Phase 1" of this world model and its interfaces by tracing perception data through the system, from the source to the meta-cognitive layers provided by ACT-R and SS-RICS. We close with lessons learned from implementation and how the design relates to Open Architecture.
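
An illustrative sketch only: a world-model object joining metric, semantic, and symbolic layers, in the spirit of (but not reproducing) the Common World Model described above. All names and fields are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class WorldObject:
    metric: tuple                                   # (x, y, z) pose estimate
    semantic: str                                   # classifier label, e.g. "door"
    symbolic: dict = field(default_factory=dict)    # relations for reasoning

class WorldModel:
    """Joins the three layers so objects can be queried across them."""
    def __init__(self):
        self.objects = []
    def add(self, obj):
        self.objects.append(obj)
    def query(self, label):
        """A symbolic-level query grounded in metric data:
        return the poses of all objects of a given class."""
        return [o.metric for o in self.objects if o.semantic == label]

wm = WorldModel()
wm.add(WorldObject((1.0, 2.0, 0.0), "door", {"leads_to": "hallway"}))
wm.add(WorldObject((5.0, 0.0, 0.0), "chair"))
# wm.query("door") -> [(1.0, 2.0, 0.0)]
```

The joined representation is what lets geometric and cognitive algorithms reason over the same objects, as the abstract describes.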

  12. Engineering Promoter Architecture in Oleaginous Yeast Yarrowia lipolytica.

    PubMed

    Shabbir Hussain, Murtaza; Gambill, Lauren; Smith, Spencer; Blenner, Mark A

    2016-03-18

    Eukaryotic promoters have a complex architecture controlling both the strength and timing of gene transcription, spanning up to thousands of bases from the initiation site. This complexity makes rational fine-tuning of promoters in fungi difficult to predict; however, this very same complexity enables multiple possible strategies for engineering promoter strength. Here, we studied promoter architecture in the oleaginous yeast Yarrowia lipolytica. While recent studies have focused on upstream activating sequences, we systematically examined components common in fungal promoters, including upstream activating sequences, proximal promoter sequences, core promoters, and the TATA box, both in autonomously replicating expression plasmids and integrated into the genome. Our findings show that promoter strength can be fine-tuned through engineering of the TATA box sequence, the core promoter, and upstream activating sequences. Additionally, we identified a previously unreported oleic acid-responsive transcription enhancement in the XPR2 upstream activating sequences, which illustrates the complexity of fungal promoters. The promoters engineered here provide new genetic tools for metabolic engineering in Y. lipolytica and suggest promoter engineering strategies that may be useful in other non-model fungal systems. PMID:26635071

  13. Modular open RF architecture: extending VICTORY to RF systems

    NASA Astrophysics Data System (ADS)

    Melber, Adam; Dirner, Jason; Johnson, Michael

    2015-05-01

    Radio frequency products spanning multiple functions have become increasingly critical to the warfighter. Military use of the electromagnetic spectrum now includes communications, electronic warfare (EW), intelligence, and mission command systems. Due to the urgent needs of counterinsurgency operations, various quick reaction capabilities (QRCs) have been fielded to enhance warfighter capability. Although these QRCs were highly successful in their respective missions, they were designed independently, resulting in significant challenges when integrated on a common platform. This paper discusses how the Modular Open RF Architecture (MORA) addresses these challenges by defining an open architecture for multifunction missions that decomposes monolithic radio systems into high-level components with well-defined functions and interfaces. The functional decomposition maximizes hardware sharing while minimizing the added complexity and cost of modularization. MORA achieves significant size, weight and power (SWaP) savings by allowing hardware such as power amplifiers and antennas to be shared across systems. By separating signal conditioning from the processing that implements the actual radio application, MORA exposes previously inaccessible architecture points, providing system integrators with the flexibility to insert third-party capabilities to address technical challenges and emerging requirements. MORA leverages the Vehicular Integration for Command, Control, Communication, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR)/EW Interoperability (VICTORY) framework. This paper concludes by discussing how MORA, VICTORY, and other standards such as OpenVPX are being leveraged by the U.S. Army Research, Development, and Engineering Command (RDECOM) Communications-Electronics Research, Development, and Engineering Center (CERDEC) to define a converged architecture enabling rapid technology insertion, interoperability, and reduced SWaP.

  14. Concept and architecture of the RHIC LLRF upgrade platform

    SciTech Connect

    Smith, K.S.; Hayes, T.; Severino, F.

    2011-03-28

    The goal of the RHIC LLRF upgrade has been the development of a stand-alone, generic, high-performance, modular LLRF control platform which can be configured to replace existing systems and serve as a common platform for all new RF systems. The platform is also designed to integrate seamlessly into a distributed, network-based controls infrastructure, to be easy to deploy, and to be useful in a variety of digital signal processing and data acquisition roles. Reuse of hardware, software, and firmware has been emphasized to minimize development effort and maximize commonality of system components. System interconnection, synchronization, and scaling are facilitated by a deterministic, high-speed serial timing and data link, while standard intra- and inter-chassis communications utilize high-speed, non-deterministic, protocol-based serial links. The system hardware configuration is modular and flexible, based on a main carrier board which can host up to six custom or commercial daughter modules as required to implement the desired functionality. This paper provides an overview of the platform concept, architecture, features, and benefits. The RHIC LLRF Upgrade Platform has been developed with the goal of providing a flexible, modular, and scalable architecture which will support our current applications and satisfy new ones for the foreseeable future. The platform has recently been commissioned at both RHIC and the RHIC EBIS injector. To date the platform has demonstrated its versatility and utility, meeting the design goals as originally defined.

  15. Gaia Data Processing Architecture

    NASA Astrophysics Data System (ADS)

    O'Mullane, W.; Lammers, U.; Bailer-Jones, C.; Bastian, U.; Brown, A. G. A.; Drimmel, R.; Eyer, L.; Huc, C.; Katz, D.; Lindegren, L.; Pourbaix, D.; Luri, X.; Torra, J.; Mignard, F.; van Leeuwen, F.

    2007-10-01

    Gaia is the European Space Agency's (ESA's) ambitious space astrometry mission with a main objective to map astrometrically and spectro-photometrically not less than 1000 million celestial objects in our galaxy with unprecedented accuracy. The announcement of opportunity (AO) for the data processing will be issued by ESA late in 2006. The Gaia Data Processing and Analysis Consortium (DPAC) has been formed recently and is preparing an answer to this AO. The satellite will downlink around 100 TB of raw telemetry data over a mission duration of 5--6 years. To achieve its required astrometric accuracy of a few tens of microarcseconds, a highly involved processing of this data is required. In addition to the main astrometric instrument Gaia will host a radial-velocity spectrometer and two low-resolution dispersers for multi-color photometry. All instrument modules share a common focal plane consisting of a CCD mosaic about 1 m^2 in size and featuring close to 10^9 pixels. Each of the various instruments requires relatively complex processing while at the same time being interdependent. We describe the composition and structure of the DPAC and the envisaged overall architecture of the system. We shall delve further into the core processing---one of the nine so-called coordination units comprising the Gaia processing system.

  16. Grid Architecture 2

    SciTech Connect

    Taft, Jeffrey D.

    2016-01-01

    The report describes work done on Grid Architecture under the auspices of the Department of Energy Office of Electricity Delivery and Energy Reliability in 2015. As described in the first Grid Architecture report, the primary purpose of this work is to provide stakeholder insight about grid issues so as to enable superior decision making on their part. Doing this requires the creation of various work products, including oft-times complex diagrams, analyses, and explanations. This report provides architectural insights into several important grid topics and also describes work done to advance the science of Grid Architecture as well.

  17. Secure Storage Architectures

    SciTech Connect

    Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine; Koch, Scott M; Naughton, III, Thomas J; Pogge, James R; Scott, Stephen L; Shipman, Galen M; Sorrillo, Lawrence

    2015-01-01

    include evaluation of the performance/protection of select products. (Note: we are investigating the option of evaluating equipment from Seagate/Xyratex.) Outline: The remainder of this report is structured as follows: - Section 1: describes the growing importance of secure storage architectures and highlights some challenges for HPC. - Section 2: provides background information on HPC storage architectures, relevant supporting technologies for secure storage, and details on OpenStack components related to storage. (The background material on HPC storage architectures in this chapter can be skipped if the reader is already familiar with Lustre and GPFS.) - Section 3: reviews protection mechanisms in two HPC filesystems; details about available isolation, authentication/authorization, and performance capabilities are discussed. - Section 4: describes technologies that can be used to bridge gaps in HPC storage and filesystems to facilitate...

  18. Citizen Observatories: A Standards Based Architecture

    NASA Astrophysics Data System (ADS)

    Simonis, Ingo

    2015-04-01

    A number of large-scale research projects are currently under way exploring the various components of citizen observatories, e.g. CITI-SENSE (http://www.citi-sense.eu), Citclops (http://citclops.eu), COBWEB (http://cobwebproject.eu), OMNISCIENTIS (http://www.omniscientis.eu), and WeSenseIt (http://www.wesenseit.eu). Common to all projects is the motivation to develop a platform enabling effective participation by citizens in environmental projects, while considering important aspects such as security, privacy, long-term storage and availability, accessibility of raw and processed data, and proper integration into catalogues and international exchange and collaboration systems such as GEOSS or INSPIRE. This paper describes the software architecture implemented for setting up crowdsourcing campaigns using standardized components, interfaces, security features, and distribution capabilities. It illustrates the Citizen Observatory Toolkit, a software suite that allows defining crowdsourcing campaigns, inviting registered and unregistered participants to take part in them, and analyzing, processing, and visualizing raw and quality-enhanced crowdsourcing data and derived products. The Citizen Observatory Toolkit is not a single software product. Instead, it is a framework of components built using internationally adopted standards wherever possible (e.g. OGC standards from Sensor Web Enablement, GeoPackage, and Web Mapping and Processing Services, as well as security and metadata/cataloguing standards), defining profiles of those standards where necessary (e.g. SWE O&M profile, SensorML profile), and implementing design decisions based on the motivation to maximize interoperability and reusability of all components.
The toolkit contains tools to set up, manage and maintain crowdsourcing campaigns, allows building on-demand apps optimized for the specific sampling focus, supports offline and online sampling modes using modern cell phones with

  19. Extensible Hardware Architecture for Mobile Robots

    NASA Technical Reports Server (NTRS)

    Park, Eric; Kobayashi, Linda; Lee, Susan Y.

    2005-01-01

    The Intelligent Robotics Group at NASA Ames Research Center has developed a new mobile robot hardware architecture designed for extensibility and reconfigurability. Currently implemented on the K9 rover, and soon to be integrated onto the K10 series of human-robot collaboration research robots, this architecture allows for rapid changes in instrumentation configuration and provides a high degree of modularity through a synergistic mix of off-the-shelf and custom-designed components, allowing eased transplantation into a wide variety of mobile robot platforms. A component-level overview of this architecture is presented, along with a description of the changes required for implementation on K10, followed by plans for future work.

  20. Launch Vehicle Control Center Architectures

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Epps, Amy; Woodruff, Van; Vachon, Michael Jacob; Monreal, Julio; Levesque, Marl; Williams, Randall; Mclaughlin, Tom

    2014-01-01

    Launch vehicles within the international community vary greatly in their configuration and processing. Each launch site has a unique processing flow based on the specific launch vehicle configuration. Launch and flight operations are managed through a set of control centers associated with each launch site. Each launch site has a control center for launch operations; however, flight operations support varies from being co-located with the launch site to being shared with the space vehicle control center. Some sites also have an engineering support center, which may be co-located with either the launch or flight control center, or located in a separate geographical location altogether. A survey of control center architectures is presented for various launch vehicles, including the NASA Space Launch System (SLS), United Launch Alliance (ULA) Atlas V and Delta IV, and the European Space Agency (ESA) Ariane 5. Each of these control center architectures shares some similarities in basic structure, while differences in functional distribution also exist. The driving functions which lead to these factors are considered, and a model of control center architectures is proposed which supports these commonalities and variations.

  1. The Technology of Architecture

    ERIC Educational Resources Information Center

    Reese, Susan

    2006-01-01

    This article discusses how career and technical education is helping students draw up plans for success in architectural technology. According to the College of DuPage (COD) in Glen Ellyn, Illinois, one of the two-year schools offering training in architectural technology, graduates have a number of opportunities available to them. They may work…

  2. Workflow automation architecture standard

    SciTech Connect

    Moshofsky, R.P.; Rohen, W.T.

    1994-11-14

    This document presents an architectural standard for application of workflow automation technology. The standard includes a functional architecture, process for developing an automated workflow system for a work group, functional and collateral specifications for workflow automation, and results of a proof of concept prototype.

  3. Clinical document architecture.

    PubMed

    Heitmann, Kai

    2003-01-01

    The Clinical Document Architecture (CDA), a standard developed by the Health Level Seven organisation (HL7), is an ANSI approved document architecture for exchange of clinical information using XML. A CDA document is comprised of a header with associated vocabularies and a body containing the structural clinical information. PMID:15061557

  4. Generic POCC architectures

    NASA Technical Reports Server (NTRS)

    1989-01-01

    This document describes a generic POCC (Payload Operations Control Center) architecture based upon current POCC software practice, and several refinements to the architecture based upon object-oriented design principles and expected developments in teleoperations. The current-technology generic architecture is an abstraction based upon close analysis of the ERBS, COBE, and GRO POCC's. A series of three refinements is presented: these may be viewed as an approach to a phased transition to the recommended architecture. The third refinement constitutes the recommended architecture, which, together with associated rationales, will form the basis of the rapid synthesis environment to be developed in the remainder of this task. The document is organized into two parts. The first part describes the current generic architecture using several graphical as well as tabular representations or 'views.' The second part presents an analysis of the generic architecture in terms of object-oriented principles. On the basis of this discussion, refinements to the generic architecture are presented, again using a combination of graphical and tabular representations.

  5. Emerging supercomputer architectures

    SciTech Connect

    Messina, P.C.

    1987-01-01

    This paper will examine the current and near-future trends for commercially available high-performance computers with architectures that differ from the mainstream "supercomputer" systems in use for the last few years. These emerging supercomputer architectures are just beginning to have an impact on the field of high performance computing. 7 refs., 1 tab.

  6. Architectural Physics: Lighting.

    ERIC Educational Resources Information Center

    Hopkinson, R. G.

    The author coordinates the many diverse branches of knowledge which have dealt with the field of lighting--physiology, psychology, engineering, physics, and architectural design. Part I, "The Elements of Architectural Physics", discusses the physiological aspects of lighting, visual performance, lighting design, calculations and measurements of…

  7. FTS2000 network architecture

    NASA Technical Reports Server (NTRS)

    Klenart, John

    1991-01-01

    The network architecture of FTS2000 is graphically depicted. A map of network A topology is provided, with interservice nodes. Next, the four basic elements of the architecture are laid out. Then, the FTS2000 timeline is reproduced. A list of equipment supporting FTS2000 dedicated transmissions is given. Finally, access alternatives are shown.

  8. Software Architecture Evolution

    ERIC Educational Resources Information Center

    Barnes, Jeffrey M.

    2013-01-01

    Many software systems eventually undergo changes to their basic architectural structure. Such changes may be prompted by new feature requests, new quality attribute requirements, changing technology, or other reasons. Whatever the causes, architecture evolution is commonplace in real-world software projects. Today's software architects, however,…

  9. IRRIGATION SYSTEM COMPONENTS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The common components of an irrigation system are defined in terms of the diversion, delivery, distribution and drainage subsystems. Irrigation systems can be defined on at least three different levels: project, farm and field. Each level will have the same basic set of components regardless of sca...

  10. Data Acquisition System Architecture and Capabilities At NASA GRC Plum Brook Station's Space Environment Test Facilities

    NASA Technical Reports Server (NTRS)

    Evans, Richard K.; Hill, Gerald M.

    2012-01-01

    Very large space environment test facilities present unique engineering challenges in the design of facility data systems. Data systems of this scale must be versatile enough to meet the wide range of data acquisition and measurement requirements from a diverse set of customers and test programs, but also must minimize design changes to maintain reliability and serviceability. This paper presents an overview of the common architecture and capabilities of the facility data acquisition systems available at two of the world's largest space environment test facilities located at the NASA Glenn Research Center's Plum Brook Station in Sandusky, Ohio; namely, the Space Propulsion Research Facility (commonly known as the B-2 facility) and the Space Power Facility (SPF). The common architecture of the data systems is presented along with details on system scalability and efficient measurement systems analysis and verification. The architecture highlights a modular design, which utilizes fully-remotely managed components, enabling the data systems to be highly configurable and support multiple test locations with a wide-range of measurement types and very large system channel counts.

  11. Data Acquisition System Architecture and Capabilities at NASA GRC Plum Brook Station's Space Environment Test Facilities

    NASA Technical Reports Server (NTRS)

    Evans, Richard K.; Hill, Gerald M.

    2014-01-01

    Very large space environment test facilities present unique engineering challenges in the design of facility data systems. Data systems of this scale must be versatile enough to meet the wide range of data acquisition and measurement requirements from a diverse set of customers and test programs, but also must minimize design changes to maintain reliability and serviceability. This paper presents an overview of the common architecture and capabilities of the facility data acquisition systems available at two of the world's largest space environment test facilities located at the NASA Glenn Research Center's Plum Brook Station in Sandusky, Ohio; namely, the Space Propulsion Research Facility (commonly known as the B-2 facility) and the Space Power Facility (SPF). The common architecture of the data systems is presented along with details on system scalability and efficient measurement systems analysis and verification. The architecture highlights a modular design, which utilizes fully-remotely managed components, enabling the data systems to be highly configurable and support multiple test locations with a wide-range of measurement types and very large system channel counts.

  12. Architectural design for resilience

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Deters, Ralph; Zhang, W. J.

    2010-05-01

    Resilience has become a new nonfunctional requirement for information systems. Many design decisions have to be made at the architectural level in order to deliver an information system with the resilience property. This paper discusses the relationships between resilience and other architectural properties such as scalability, reliability, and consistency. A corollary is derived from the CAP theorem, and states that it is impossible for a system to have all three properties of consistency, resilience and partition-tolerance. We present seven architectural constraints for resilience. The constraints are elicited from good architectural practices for developing reliable and fault-tolerant systems and the state-of-the-art technologies in distributed computing. These constraints provide a comprehensive reference for architectural design towards resilience.

  13. The Simulation Intranet Architecture

    SciTech Connect

    Holmes, V.P.; Linebarger, J.M.; Miller, D.J.; Vandewart, R.L.

    1998-12-02

    The Simulation Intranet (SI) is a term which is being used to describe one element of a multidisciplinary distributed and distance computing initiative known as DisCom2 at Sandia National Laboratories. The Simulation Intranet is an architecture for satisfying Sandia's long-term goal of providing an end-to-end set of services for high-fidelity full-physics simulations in a high-performance, distributed, and distance computing environment. The Intranet Architecture group was formed to apply current distributed object technologies to this problem. For the hardware architectures and software models involved with the current simulation process, a CORBA-based architecture is best suited to meet Sandia's needs. This paper presents the initial design and implementation of this Intranet based on a three-tier Network Computing Architecture (NCA). The major parts of the architecture include: the Web Client, the Business Objects, and Data Persistence.
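
    The three-tier separation named above (Web Client, Business Objects, Data Persistence) can be sketched abstractly: the client talks only to the business tier, which validates requests and delegates storage. All class and method names below are illustrative assumptions, not Sandia's implementation.

```python
class DataPersistence:
    """Tier 3: stores simulation records (here, an in-memory dict)."""
    def __init__(self):
        self._rows = {}
    def save(self, key, record):
        self._rows[key] = record
    def load(self, key):
        return self._rows[key]

class BusinessObject:
    """Tier 2: domain logic, mediating between client and storage."""
    def __init__(self, store):
        self.store = store
    def submit_run(self, run_id, params):
        # Validate, then persist; a real system would also queue the simulation.
        if "mesh_size" not in params:
            raise ValueError("mesh_size required")
        self.store.save(run_id, {"params": params, "status": "queued"})
        return run_id
    def status(self, run_id):
        return self.store.load(run_id)["status"]

class WebClient:
    """Tier 1: thin front end that only ever calls the business tier."""
    def __init__(self, business):
        self.business = business
    def request_simulation(self, run_id, mesh_size):
        return self.business.submit_run(run_id, {"mesh_size": mesh_size})

business = BusinessObject(DataPersistence())
client = WebClient(business)
run = client.request_simulation("run-42", mesh_size=1024)
```

    Because the client never touches the persistence tier directly, either outer tier can be replaced (a browser front end, a database back end) without changing the business objects.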

  14. Molecular basis of angiosperm tree architecture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The shoot architecture of trees greatly impacts orchard and forest management methods. Amassing greater knowledge of the molecular genetics behind tree form can benefit these industries as well as contribute to basic knowledge of plant developmental biology. This review covers basic components of ...

  15. System design document U-AVLIS control system architecture

    SciTech Connect

    Viebeck, P.G.

    1994-02-16

    This document describes the architecture of the integrated control system for the U-AVLIS process. It includes an overview of the major control system components and their interfaces to one another. Separate documents are utilized to fully describe each component mentioned herein. The purpose of this document is to introduce the reader to the integrated U-AVLIS control system. It describes the philosophy of the control system architecture and how all of the control system components are integrated. While the other System Design Documents describe in detail the design of individual control system components, this document puts those components into their correct context within the entire integrated control system.

  16. Simple example of an SADMT SDI-(Strategic Defense Initiative) Architecture Dataflow Modeling Technique) architecture specification. Version 1. 5. Final report

    SciTech Connect

    Linn, C.J.; Linn, J.L.; Edwards, S.H.; Kappel, M.R.; Ardoin, C.D.

    1988-04-21

    This report presents a simple architecture specification in the SDI Architecture Dataflow Modeling Technique (SADMT). The example code is given in the SADMT Generator (SAGEN) Language. This simple architecture includes (1) an informal description of the architecture, (2) the main program that creates the components of the simulation, (3) the specification of the BM/C3 logical processes of the architecture, (4) the specification of the Technology Modules (TMs) of the architecture, and (5) the specification of the Battle Management/Command, Control and Communications (BM/C3) and TMs of the threat.

  17. Genetic Architecture of Complex Traits in Plants

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genetic architecture refers to the numbers and genome locations of genes affecting a trait, the magnitude of their effects, and the relative contributions of additive, dominant, and epistatic gene effects. Quantitative trait locus (QTL) mapping techniques are commonly used to investigate genetic ar...

  18. Web Service Architecture Framework for Embedded Devices

    ERIC Educational Resources Information Center

    Yanzick, Paul David

    2009-01-01

    The use of Service Oriented Architectures, namely web services, has become a widely adopted method for transfer of data between systems across the Internet as well as the Enterprise. Adopting a similar approach to embedded devices is also starting to emerge as personal devices and sensor networks are becoming more common in the industry. This…

  19. Integrated computer control system architectural overview

    SciTech Connect

    Van Arsdall, P.

    1997-06-18

    This overview introduces the NIF Integrated Control System (ICCS) architecture. The design is abstract to allow the construction of many similar applications from a common framework. This summary lays the essential foundation for understanding the model-based engineering approach used to execute the design.

  20. National solar technology roadmap: Nano-architecture PV

    SciTech Connect

    Zhang, Yong

    2007-06-01

    This roadmap addresses nano-architecture solar cells that use nanowires, nanotubes, and nanocrystals, including single-component, core-shell, embedded nanowires or nanocrystals either as absorbers or transporters.

  1. Simple optical neighbor discovery (SOND): architecture, applications, and experimental verification

    NASA Astrophysics Data System (ADS)

    Larsson, Stefan N.; Hubendick, Sten; Nedelchef, Robert

    2003-10-01

    The architecture, applications, and experimental verification of a simple neighbor discovery method are presented. The method follows the recent International Telecommunication Union - Telecommunication Standardization Sector (ITU-T) and Internet Engineering Task Force (IETF) standardization, automatically switched optical network (ASON)/generalized multiprotocol label switching (GMPLS), on automatically switched optical networks. The method needs no specific hardware components but claims to be so simple that virtually any equipment with common optical ports can support it. It eliminates the need for costly and complex synchronous optical network/synchronous digital hierarchy/optical transport network (SONET/SDH/OTN) overhead read-write functionality in optical elements such as optical cross connects (OXCs). The method thus offers a fast track to automated optical networks.

  2. Fractal Geometry of Architecture

    NASA Astrophysics Data System (ADS)

    Lorenz, Wolfgang E.

    In Fractals smaller parts and the whole are linked together. Fractals are self-similar, as those parts are, at least approximately, scaled-down copies of the rough whole. In architecture, such a concept has also been known for a long time. Not only architects of the twentieth century called for an overall idea that is mirrored in every single detail, but also Gothic cathedrals and Indian temples offer self-similarity. This study mainly focuses upon the question whether this concept of self-similarity makes architecture with fractal properties more diverse and interesting than Euclidean Modern architecture. The first part gives an introduction and explains Fractal properties in various natural and architectural objects, presenting the underlying structure by computer programmed renderings. In this connection, differences between the fractal, architectural concept and true, mathematical Fractals are worked out to become aware of limits. This is the basis for dealing with the problem whether fractal-like architecture, particularly facades, can be measured so that different designs can be compared with each other under the aspect of fractal properties. Finally the usability of the Box-Counting Method, an easy-to-use measurement method of Fractal Dimension is analyzed with regard to architecture.
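
    The Box-Counting Method mentioned above can be sketched in a few lines: cover a binarized image of a facade with grids of shrinking box size s, count the occupied boxes N(s), and take the slope of log N(s) against log(1/s) as the dimension estimate. A minimal illustration (the function name and grid sizes are our own, not from the study):

```python
import numpy as np

def box_counting_dimension(image, box_sizes):
    """Estimate the box-counting (fractal) dimension of a 2-D binary image.

    For each box size s, count the number N(s) of s-by-s boxes containing
    at least one foreground pixel, then fit the slope of log N(s) versus
    log(1/s); that slope is the dimension estimate.
    """
    counts = []
    for s in box_sizes:
        occupied = 0
        for i in range(0, image.shape[0], s):
            for j in range(0, image.shape[1], s):
                if image[i:i + s, j:j + s].any():
                    occupied += 1
        counts.append(occupied)
    # Slope of the log-log regression line estimates the dimension.
    slope, _intercept = np.polyfit(np.log(1.0 / np.asarray(box_sizes)),
                                   np.log(counts), 1)
    return slope

# Sanity check: a completely filled square should come out at dimension 2.
filled = np.ones((64, 64), dtype=bool)
dim = box_counting_dimension(filled, [1, 2, 4, 8, 16])
```

    For the filled 64x64 square, N(s) = (64/s)^2, so the fitted slope is exactly 2; a fractal-like facade outline would typically fall between 1 and 2.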

  3. Dynamic Information Architecture System

    SciTech Connect

    Christiansen, John

    1997-02-12

    The Dynamic Information Architecture System (DIAS) is a flexible object-based software framework for concurrent, multidisciplinary modeling of arbitrary (but related) processes. These processes are modeled as interrelated actions caused by and affecting the collection of diverse real-world objects represented in a simulation. The DIAS architecture allows independent process models to work together harmoniously in the same frame of reference and provides a wide range of data ingestion and output capabilities, including Geographic Information System (GIS) type map-based displays and photorealistic visualization of simulations in progress. In the DIAS implementation of the object-based approach, software objects carry within them not only the data which describe their static characteristics, but also the methods, or functions, which describe their dynamic behaviors. There are two categories of objects: (1) Entity objects which have real-world counterparts and are the actors in a simulation, and (2) Software infrastructure objects which make it possible to carry out the simulations. The Entity objects contain lists of Aspect objects, each of which addresses a single aspect of the Entity's behavior. For example, a DIAS Stream Entity representing a section of a river can have many aspects corresponding to its behavior in terms of hydrology (as a drainage system component), navigation (as a link in a waterborne transportation system), meteorology (in terms of moisture, heat, and momentum exchange with the atmospheric boundary layer), and visualization (for photorealistic visualization or map type displays), etc. This makes it possible for each real-world object to exhibit any or all of its unique behaviors within the context of a single simulation.

  4. Dynamic Information Architecture System

    Energy Science and Technology Software Center (ESTSC)

    1997-02-12

    The Dynamic Information Architecture System (DIAS) is a flexible object-based software framework for concurrent, multidisciplinary modeling of arbitrary (but related) processes. These processes are modeled as interrelated actions caused by and affecting the collection of diverse real-world objects represented in a simulation. The DIAS architecture allows independent process models to work together harmoniously in the same frame of reference and provides a wide range of data ingestion and output capabilities, including Geographic Information System (GIS) type map-based displays and photorealistic visualization of simulations in progress. In the DIAS implementation of the object-based approach, software objects carry within them not only the data which describe their static characteristics, but also the methods, or functions, which describe their dynamic behaviors. There are two categories of objects: (1) Entity objects which have real-world counterparts and are the actors in a simulation, and (2) Software infrastructure objects which make it possible to carry out the simulations. The Entity objects contain lists of Aspect objects, each of which addresses a single aspect of the Entity's behavior. For example, a DIAS Stream Entity representing a section of a river can have many aspects corresponding to its behavior in terms of hydrology (as a drainage system component), navigation (as a link in a waterborne transportation system), meteorology (in terms of moisture, heat, and momentum exchange with the atmospheric boundary layer), and visualization (for photorealistic visualization or map type displays), etc. This makes it possible for each real-world object to exhibit any or all of its unique behaviors within the context of a single simulation.
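
    The Entity/Aspect decomposition described in the DIAS records above can be illustrated with a small sketch: an Entity carries state plus a list of Aspect objects, each contributing one facet of behavior per simulation step. The class and method names below are hypothetical stand-ins, not the actual DIAS API.

```python
class Aspect:
    """One facet of an Entity's behavior (e.g. hydrology, navigation)."""
    def __init__(self, name, behavior):
        self.name = name
        self.behavior = behavior  # callable invoked on each simulation step

    def update(self, entity, context):
        return self.behavior(entity, context)

class Entity:
    """A real-world object that carries both its state and its behaviors."""
    def __init__(self, name):
        self.name = name
        self.state = {}
        self.aspects = []

    def add_aspect(self, aspect):
        self.aspects.append(aspect)

    def step(self, context):
        # Each aspect contributes its result independently.
        return {a.name: a.update(self, context) for a in self.aspects}

# Example: a stream entity with hydrology and navigation aspects.
stream = Entity("stream-reach-1")
stream.state["flow_m3s"] = 120.0
stream.add_aspect(Aspect("hydrology",
                         lambda e, ctx: e.state["flow_m3s"] * ctx["dt"]))
stream.add_aspect(Aspect("navigation",
                         lambda e, ctx: e.state["flow_m3s"] > 50.0))
results = stream.step({"dt": 0.5})
```

    Because each facet of behavior lives in its own Aspect, new behaviors (meteorology, visualization) can be attached to an existing Entity without modifying the Entity class itself.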

  5. Specifying structural constraints of architectural patterns in the ARCHERY language

    SciTech Connect

    Sanchez, Alejandro; Barbosa, Luis S.; Riesco, Daniel

    2015-03-10

    ARCHERY is an architectural description language for modelling and reasoning about distributed, heterogeneous and dynamically reconfigurable systems in terms of architectural patterns. The language supports the specification of architectures and their reconfiguration. This paper introduces a language extension for precisely describing the structural design decisions that pattern instances must respect in their (re)configurations. The extension is a propositional modal logic with recursion and nominals referencing components, i.e., a hybrid µ-calculus. Its expressiveness allows specifying safety and liveness constraints, as well as paths and cycles over structures. Refinements of classic architectural patterns are specified.

  6. Robotic collaborative technology alliance: an open architecture approach to integrated research

    NASA Astrophysics Data System (ADS)

    Dean, Robert Michael S.; DiBerardino, Charles A.

    2014-06-01

    The Robotics Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities [1]. Research occurs in 5 main Task Areas: Intelligence, Perception, Dexterous Manipulation and Unique Mobility (DMUM), Human Robot Interaction (HRI), and Integrated Research (IR). This last task of Integrated Research is especially critical and challenging. Individual research components can only be fully assessed when integrated onto a robot where they interact with other aspects of the system to create cross-Task capabilities which move beyond the State of the Art. Adding to the complexity, the RCTA is comprised of 12+ independent organizations across the United States. Each has its own constraints due to development environments, ITAR, "lab" vs "real-time" implementations, and legacy software investments from previous and ongoing programs. We have developed three main components to manage the Integration Task. The first is RFrame, a data-centric transport agnostic middleware which unifies the disparate environments, protocols, and data collection mechanisms. Second is the modular Intelligence Architecture built around the Common World Model (CWM). The CWM instantiates a Common Data Model and provides access services. Third is RIVET, an ITAR free Hardware-In-The-Loop simulator based on 3D game technology. RIVET provides each researcher a common test-bed for development prior to integration, and a regression test mechanism. Once components are integrated and verified, they are released back to the consortium to provide the RIVET baseline for further research. This approach allows Integration of new and legacy systems built upon different architectures, by application of Open Architecture principles.

  7. Proposed architecture for the UAV family of air vehicles

    NASA Astrophysics Data System (ADS)

    Marinelli, Louis; Bazow, Steve

    1993-12-01

    As an integral part of the Unmanned Aerial Vehicle (UAV) interoperability and commonality program, Vitro Corporation and the UAV Systems Engineering Directorate developed a UAV family architecture which lays the foundation for future UAV systems.

  8. Cell broadband engine architecture as a DSP platform

    NASA Astrophysics Data System (ADS)

    Szumski, Karol; Malanowski, Mateusz

    2009-06-01

    The slowing pace of performance improvement in the commonly available processors is a cause of concern amongst many computational scientists. This combined with the ever increasing need for computational power has caused us to turn to alternative architectures in search of performance gains. Two main candidates were the Compute Unified Device Architecture (CUDA) and the Cell Broadband Engine (CELL BE) architecture. This paper focuses on the latter, outlining the architecture and basic programming paradigms, and also contains performance comparison of algorithms currently developed by our team.

  9. Architecture for Verifiable Software

    NASA Technical Reports Server (NTRS)

    Reinholtz, William; Dvorak, Daniel

    2005-01-01

    Verifiable MDS Architecture (VMA) is a software architecture that facilitates the construction of highly verifiable flight software for NASA's Mission Data System (MDS), especially for smaller missions subject to cost constraints. More specifically, the purpose served by VMA is to facilitate aggressive verification and validation of flight software while imposing a minimum of constraints on overall functionality. VMA exploits the state-based architecture of the MDS and partitions verification issues into elements susceptible to independent verification and validation, in such a manner that scaling issues are minimized, so that relatively large software systems can be aggressively verified in a cost-effective manner.

  10. Tagged token dataflow architecture

    SciTech Connect

    Arvind; Culler, D.E.

    1983-10-01

    The demand for large-scale multiprocessor systems has been substantial for many years. The technology for fabrication of such systems is available, but attempts to extend traditional architectures to this context have met with only mild success. The authors hold that fundamental aspects of the Von Neumann architecture prohibit its extension to multiprocessor systems; they pose dataflow architectures as an alternative. These two approaches are contrasted on issues of synchronization, memory latency, and the ability to share data without constraining parallelism. 12 references.
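
    The core synchronization idea of a tagged-token dataflow machine, namely that an operator fires only when tokens bearing the same tag have arrived on all of its inputs, can be illustrated with a toy matching store. The names below are our own, not from the paper.

```python
class MatchingStore:
    """Toy two-input dataflow operator with a tag-matching wait store.

    Synchronization is by data availability: a token whose partner has
    not yet arrived is parked under its tag rather than blocking a
    program counter, so tokens from different loop iterations (different
    tags) proceed independently.
    """
    def __init__(self, op):
        self.op = op       # two-operand operation to fire
        self.waiting = {}  # tag -> first operand, parked until its partner arrives

    def arrive(self, tag, value):
        if tag in self.waiting:
            partner = self.waiting.pop(tag)
            return (tag, self.op(partner, value))  # both operands present: fire
        self.waiting[tag] = value                  # park the token
        return None

add = MatchingStore(lambda a, b: a + b)
# Interleaved tokens for two iterations (tags 0 and 1) match independently.
events = [add.arrive(0, 1), add.arrive(1, 10), add.arrive(0, 2), add.arrive(1, 20)]
fired = [e for e in events if e is not None]
```

    Note how the result for tag 1 is unaffected by the tokens for tag 0 arriving in between: this tag-based matching, rather than instruction order, is what lets such architectures expose parallelism without the memory-latency stalls the abstract attributes to the von Neumann model.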

  11. An Evaluation of the High Level Architecture (HLA) as a Framework for NASA Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reid, Michael R.; Powers, Edward I. (Technical Monitor)

    2000-01-01

    The High Level Architecture (HLA) is a current US Department of Defense and an industry (IEEE-1516) standard architecture for modeling and simulations. It provides a framework and set of functional rules and common interfaces for integrating separate and disparate simulators into a larger simulation. The goal of the HLA is to reduce software costs by facilitating the reuse of simulation components and by providing a runtime infrastructure to manage the simulations. In order to evaluate the applicability of the HLA as a technology for NASA space mission simulations, a Simulations Group at Goddard Space Flight Center (GSFC) conducted a study of the HLA and developed a simple prototype HLA-compliant space mission simulator. This paper summarizes the prototyping effort and discusses the potential usefulness of the HLA in the design and planning of future NASA space missions with a focus on risk mitigation and cost reduction.

  12. [Spectral analysis of green pigments of painting and colored drawing in northern Chinese ancient architectures].

    PubMed

    Wang, Li-Qin; Yan, Jing; Fan, Xiao-Lei; Ma, Tao

    2010-02-01

    It is important to identify pigments of painting and colored drawing in ancient architectures in order to restore and conserve them. The components of green pigments were detected with X-ray diffraction (XRD), X-ray fluorescence (XRF) and scanning electron microscopy-energy dispersive X-ray (SEM-EDX). Twenty-seven samples were collected from painting and colored drawing in northern Chinese ancient architectures in Beijing, Shanxi province and Gansu province. The experimental results showed that emerald green [Cu(CH3COO)2·3Cu(AsO2)2, a complex copper aceto-arsenite pigment] had been used as the colored component in fifteen samples, whereas synthetic organic materials had been used in the rest. However, none of the samples contained malachite or atacamite, green pigments commonly used in ancient times. These two pigments have been found in Qin Shihuang's Terracotta Army and the wall paintings at the Mogao Grottoes, Dunhuang, and in some other famous wall paintings and color pottery figurines. Emerald green, however, came into use many years later: it was reported that emerald green was synthesized in Germany in 1814 and had been widely used in China as watercolor on pith paper works and on scroll paintings since the 1850s. Because painting and colored drawing in ancient architectures stand outside, under sunlight and rain, they must be repaired and repainted in less than fifty years. Therefore, it is not surprising that emerald green was used in them. In recent years, artificial organic materials have increasingly been used in painting and colored drawing in ancient architectures. The experiments also showed that in the same recolored painting and colored drawing, organic materials are usually in the later layers, while emerald green is in the earlier layers. This work supplies a large body of data for selecting restoration materials and identifying painting and colored drawing in ancient architectures with a new method. PMID:20384144

  13. Formalization and visualization of domain-specific software architectures

    NASA Technical Reports Server (NTRS)

    Bailor, Paul D.; Luginbuhl, David R.; Robinson, John S.

    1992-01-01

    This paper describes a domain-specific software design system based on the concepts of software architectures engineering and domain-specific models and languages. In this system, software architectures are used as high level abstractions to formulate a domain-specific software design. The software architecture serves as a framework for composing architectural fragments (e.g., domain objects, system components, and hardware interfaces) that make up the knowledge (or model) base for solving a problem in a particular application area. A corresponding software design is generated by analyzing and describing a system in the context of the software architecture. While the software architecture serves as the framework for the design, this concept is insufficient by itself for supplying the additional details required for a specific design. Additional domain knowledge is still needed to instantiate components of the architecture and develop optimized algorithms for the problem domain. One possible way to obtain the additional details is through the use of domain-specific languages. Thus, the general concept of a software architecture and the specific design details provided by domain-specific languages are combined to create what can be termed a domain-specific software architecture (DSSA).

  14. Reconfigurable Transceiver and Software-Defined Radio Architecture and Technology Evaluated for NASA Space Communications

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Kacpura, Thomas J.

    2004-01-01

    The NASA Glenn Research Center is investigating the development and suitability of a software-based open architecture for space-based reconfigurable transceivers (RTs) and software-defined radios (SDRs). The main objectives of this project are to enable advanced operations and reduce mission costs. SDRs are becoming more common because of the capabilities of reconfigurable digital signal processing technologies such as field programmable gate arrays and digital signal processors, which place radio functions in firmware and software that were traditionally performed with analog hardware components. Features of interest of this communications architecture include nonproprietary open standards and application programming interfaces to enable software reuse and portability, independent hardware and software development, and hardware and software functional separation. The goals for RT and SDR technologies for NASA space missions include prelaunch and on-orbit frequency and waveform reconfigurability and programmability, high data rate capability, and overall communications and processing flexibility. These operational advances over current state-of-the-art transceivers will be provided while reducing the power, mass, and cost of RTs and SDRs for space communications. The open architecture for NASA communications will support existing (legacy) communications needs and capabilities while providing a path to more capable, advanced waveform development and mission concepts (e.g., ad hoc constellations with self-healing networks and high-rate science data return). A study was completed to assess the state of the art in RT architectures, implementations, and technologies. In-house researchers conducted literature searches and analysis, interviewed Government and industry contacts, and solicited information and white papers from industry on space-qualifiable RTs and SDRs and their associated technologies for space-based NASA applications. The white papers were evaluated, compiled, and

  15. TRANSIMS software architecture for IOC-1

    SciTech Connect

    Berkbigler, K.P.; Bush, B.W.; Davis, J.F.

    1997-04-03

    This document describes the TRANSIMS software architecture and high-level design for the first Interim Operational Capability (IOC-1). Our primary goal in establishing the TRANSIMS software architecture is to lay down a framework for IOC-1. We aim to make sure that the various components of TRANSIMS are effectively integrated, both for IOC-1 and beyond, so that TRANSIMS remains flexible, expandable, portable, and maintainable throughout its lifetime. In addition to outlining the high-level design of the TRANSIMS software, we also set forth the software development environment and software engineering practices used for TRANSIMS.

  16. The genetic architecture of leaf number and its genetic relationship to flowering time in maize.

    PubMed

    Li, Dan; Wang, Xufeng; Zhang, Xiangbo; Chen, Qiuyue; Xu, Guanghui; Xu, Dingyi; Wang, Chenglong; Liang, Yameng; Wu, Lishuan; Huang, Cheng; Tian, Jinge; Wu, Yaoyao; Tian, Feng

    2016-04-01

    The number of leaves and their distributions on plants are critical factors determining plant architecture in maize (Zea mays), and leaf number is frequently used as a measure of flowering time, a trait that is key to local environmental adaptation. Here, using a large set of 866 maize-teosinte BC2S3 recombinant inbred lines genotyped with 19,838 single nucleotide polymorphism markers, we conducted a comprehensive genetic dissection to assess the genetic architecture of leaf number and its genetic relationship to flowering time. We demonstrated that the two components of total leaf number, the number of leaves above (LA) and below (LB) the primary ear, were under relatively independent genetic control and might be subject to differential directional selection during maize domestication and improvement. Furthermore, we revealed that flowering time and leaf number share common genetic regulation at a moderate level. The pleiotropy of the genes ZCN8, dlf1 and ZmCCT on leaf number and flowering time was validated by near-isogenic line analysis. Through fine mapping, qLA1-1, a major-effect locus that specifically affects LA, was delimited to a region with severe recombination suppression derived from teosinte. This study provides important insights into the genetic basis of traits affecting plant architecture and adaptation. The genetic independence of LA from LB enables the optimization of leaf number for ideal plant architecture breeding in maize. PMID:26593156

  17. Robot Electronics Architecture

    NASA Technical Reports Server (NTRS)

    Garrett, Michael; Magnone, Lee; Aghazarian, Hrand; Baumgartner, Eric; Kennedy, Brett

    2008-01-01

    An electronics architecture has been developed to enable the rapid construction and testing of prototypes of robotic systems. This architecture is designed to be a research vehicle of great stability, reliability, and versatility. A system according to this architecture can easily be reconfigured (including expanded or contracted) to satisfy a variety of needs with respect to input, output, processing of data, sensing, actuation, and power. The architecture affords a variety of expandable input/output options that enable ready integration of instruments, actuators, sensors, and other devices as independent modular units. The separation of different electrical functions onto independent circuit boards facilitates the development of corresponding simple and modular software interfaces. As a result, both hardware and software can be made to expand or contract in modular fashion while expending a minimum of time and effort.

  18. CORDIC processor architectures

    NASA Astrophysics Data System (ADS)

    Boehme, Johann F.; Timmermann, D.; Hahn, H.; Hosticka, Bedrich J.

    1991-12-01

    As CORDIC algorithms receive more and more attention in elementary function evaluation and signal processing applications, the problem of their VLSI realization has attracted considerable interest. In this work we review the CORDIC fundamentals covering algorithm, architecture, and implementation issues. Various aspects of the CORDIC algorithm are investigated such as efficient scale factor compensation, redundant and non-redundant addition schemes, and convergence domain. Several CORDIC processor architectures and implementation examples are discussed.
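    The iterative rotation at the heart of CORDIC uses only shifts, adds, and a small table of arctangents. The following rotation-mode sketch (function name and iteration count are illustrative, and floating-point stands in for the fixed-point arithmetic a VLSI implementation would use) shows the basic recurrence the abstract surveys:

```python
import math

def cordic_sin_cos(angle, iterations=32):
    """Rotation-mode CORDIC: rotate (x, y) toward `angle` using only
    shift-and-add style updates; returns (sin(angle), cos(angle))."""
    # Precomputed elementary rotation angles arctan(2^-k).
    alphas = [math.atan(2.0 ** -k) for k in range(iterations)]
    # Scale factor K = prod sqrt(1 + 2^-2k); compensate by seeding x with 1/K,
    # one of the scale factor compensation options the paper discusses.
    k_inv = 1.0
    for k in range(iterations):
        k_inv /= math.sqrt(1.0 + 2.0 ** (-2 * k))
    x, y, z = k_inv, 0.0, angle
    for k in range(iterations):
        d = 1.0 if z >= 0 else -1.0      # rotation direction each micro-step
        x, y = x - d * y * 2.0 ** -k, y + d * x * 2.0 ** -k
        z -= d * alphas[k]
    return y, x
```

    Note the convergence domain: |angle| must stay within the sum of the elementary angles (about 1.74 rad), one of the aspects the paper investigates.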

  19. Generic Distributed Simulation Architecture

    SciTech Connect

    Booker, C.P.

    1999-05-14

    A Generic Distributed Simulation Architecture is described that allows a simulation to be automatically distributed over a heterogeneous network of computers and executed with very little human direction. A prototype Framework is presented that implements the elements of the Architecture and demonstrates the feasibility of the concepts. It provides a basis for a future, improved Framework that will support legacy models. Because the Framework is implemented in Java, it may be installed on almost any modern computer system.

  20. A High Performance COTS Based Computer Architecture

    NASA Astrophysics Data System (ADS)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of COTS components. In the framework of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high-performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the benefits and constraints of using COTS components for space applications; we then briefly describe existing fault mitigation architectures and present our solution for fault mitigation, based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.
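    Fault mitigation for COTS processors commonly relies on redundant execution plus voting; a toy majority voter (not the paper's SmartIO design, which it describes separately) illustrates the basic idea of masking a transient upset in a minority of lanes:

```python
from collections import Counter

def majority_vote(outputs):
    """Vote over redundant lane outputs; a transient fault (e.g., a
    single-event upset) in a minority of lanes is masked and reported."""
    winner, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise ValueError("no majority: uncorrectable fault")
    faulty = [i for i, v in enumerate(outputs) if v != winner]
    return winner, faulty

# Three redundant lanes; lane 2's result was corrupted in flight.
value, faulty_lanes = majority_vote([42, 42, 7])
```

    In a real triple-modular-redundancy scheme the detected lane would then be resynchronized or scrubbed before the next computation.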

  1. Architecture Adaptive Computing Environment

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    2006-01-01

    Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on Linux clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple-instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.

  2. Supporting Undergraduate Computer Architecture Students Using a Visual MIPS64 CPU Simulator

    ERIC Educational Resources Information Center

    Patti, D.; Spadaccini, A.; Palesi, M.; Fazzino, F.; Catania, V.

    2012-01-01

    The topics of computer architecture are always taught using an Assembly dialect as an example. The most commonly used textbooks in this field use the MIPS64 Instruction Set Architecture (ISA) to help students in learning the fundamentals of computer architecture because of its orthogonality and its suitability for real-world applications. This…

  3. An Architecture of Embedded Decompressor with Reconfigurability for Test Compression

    NASA Astrophysics Data System (ADS)

    Ichihara, Hideyuki; Saiki, Tomoyuki; Inoue, Tomoo

    A test compression/decompression scheme for reducing the test application time and memory requirements of an LSI tester has been proposed. In the scheme, the employed coding algorithm is tailored to the given test data, so that the tailored coding algorithm can highly compress that data. However, such methods have drawbacks; e.g., the tailored coding algorithm is ineffective for test data other than the data it was tailored to. In this paper, we introduce an embedded decompressor that is reconfigurable according to the coding algorithm and the given test data. Its reconfigurability can overcome the drawbacks of conventional decompressors while keeping a high compression ratio. Moreover, we propose an architecture of reconfigurable decompressors for four variable-length codings. In the proposed architecture, the functions common to the four codings are implemented as fixed (non-reconfigurable) components so as to reduce the configuration data, which is stored on an ATE and sent to a CUT. Experimental results show that (1) the configuration data size becomes reasonably small by reducing the configurable part of the decompressor, (2) the reconfigurable decompressor is effective for SoC testing with respect to test data size, and (3) it can achieve an optimal compression of test data by Huffman coding.
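    Huffman coding, one of the variable-length codings named above, shows why a code tailored to the given test data compresses it well yet is ineffective for other data: the code lengths mirror that data's symbol frequencies. A small sketch (the test vectors and helper name are invented for illustration):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code tailored to the symbol frequencies of the
    given test data; frequent symbols get the shortest codewords."""
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    counter = len(heap)
    if len(heap) == 1:                      # degenerate single-symbol case
        return {next(iter(heap[0][2])): "0"}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)     # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

# Tailor a code to a block of 4-bit test slices dominated by one pattern.
vectors = ["0000"] * 9 + ["1111"] * 3 + ["0101"] * 2 + ["1010"]
code = huffman_code(vectors)
compressed = "".join(code[v] for v in vectors)
```

    Here 60 raw bits shrink to 24, but the same code applied to data with different statistics could expand it instead, which is the drawback the reconfigurable decompressor addresses.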

  4. Component-Based Approach in Learning Management System Development

    ERIC Educational Resources Information Center

    Zaitseva, Larisa; Bule, Jekaterina; Makarov, Sergey

    2013-01-01

    The paper describes a component-based approach (CBA) to learning management system development. Learning objects as components of e-learning courses, and their metadata, are considered. The architecture of a learning management system based on CBA being developed at Riga Technical University, namely its architecture, elements and possibilities are…

  5. Challenges of Algebraic Multigrid across Multicore Architectures

    SciTech Connect

    Baker, A H; Gamblin, T; Schulz, M; Yang, U M

    2010-04-12

    Algebraic multigrid (AMG) is a popular solver for large-scale scientific computing and an essential component of many simulation codes. AMG has been shown to be extremely efficient on distributed-memory architectures. However, when it is executed on modern multicore architectures, new challenges arise that can significantly deteriorate AMG's performance. We examine its performance and scalability on three disparate multicore architectures: a cluster with four AMD Opteron Quad-core processors per node (Hera), a Cray XT5 with two AMD Opteron Hex-core processors per node (Jaguar), and an IBM BlueGene/P system with a single Quad-core processor (Intrepid). We discuss our experiences on these platforms and present results using both an MPI-only and a hybrid MPI/OpenMP model. We also discuss a set of techniques that helped to overcome the associated problems, including thread and process pinning and correct memory associations.

  6. The NASA Auralization Framework and Plugin Architecture

    NASA Technical Reports Server (NTRS)

    Aumann, Aric R.; Tuttle, Brian C.; Chapin, William L.; Rizzi, Stephen A.

    2015-01-01

    NASA has a long history of investigating human response to aircraft flyover noise and in recent years has developed a capability to fully auralize the noise of aircraft during their design. This capability is particularly useful for unconventional designs with noise signatures significantly different from the current fleet. To that end, a flexible software architecture has been developed to facilitate rapid integration of new simulation techniques for noise source synthesis and propagation, and to foster collaboration amongst researchers through a common releasable code base. The NASA Auralization Framework (NAF) is a skeletal framework written in C++ with basic functionalities and a plugin architecture that allows users to mix and match NAF capabilities with their own methods through the development and use of dynamically linked libraries. This paper presents the NAF software architecture and discusses several advanced auralization techniques that have been implemented as plugins to the framework.
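    The mix-and-match plugin idea described above can be sketched in miniature. The NAF itself is C++ with dynamically linked libraries; this Python registry (class and stage names invented for illustration) only shows the pattern of registering processing stages and chaining them at run time:

```python
class PluginRegistry:
    """Registry that lets plugins self-register so a framework can mix
    and match user-supplied processing stages at run time."""
    def __init__(self):
        self._plugins = {}

    def register(self, name):
        def wrap(cls):
            self._plugins[name] = cls
            return cls
        return wrap

    def build_chain(self, names):
        return [self._plugins[n]() for n in names]

registry = PluginRegistry()

@registry.register("doppler")
class DopplerShift:
    def process(self, samples):      # placeholder propagation effect
        return [s * 0.5 for s in samples]

@registry.register("gain")
class Gain:
    def process(self, samples):      # placeholder synthesis effect
        return [s * 2.0 for s in samples]

# The framework composes whatever stages the user selected.
chain = registry.build_chain(["doppler", "gain"])
samples = [1.0, -1.0]
for stage in chain:
    samples = stage.process(samples)
```

    In the C++ setting the registry entries would instead come from factory functions resolved in dynamically loaded shared libraries, which is what lets externally developed capabilities plug into the skeletal framework.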

  7. Optimal expression evaluation for data parallel architectures

    NASA Technical Reports Server (NTRS)

    Gilbert, J. R.; Schreiber, R.

    1990-01-01

    A data parallel machine represents an array or other composite data structure by allocating one processor per data item. A pointwise operation can be performed between two such arrays in unit time, provided their corresponding elements are allocated in the same processors. If the arrays are not aligned in this fashion, the cost of moving one or both of them is part of the cost of operation. The choice of where to perform the operation then affects this cost. If an expression with several operands is to be evaluated, there may be many choices of where to perform the intermediate operations. An efficient algorithm is given to find the minimum cost way to evaluate an expression, for several different data parallel architectures. The algorithm applies to any architecture in which the metric describing the cost of moving an array has a property called robustness. This encompasses most of the common data parallel communication architectures, including meshes of arbitrary dimension and hypercubes.
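    The minimum-cost evaluation problem can be made concrete with a toy dynamic program over an expression tree: each leaf array lives at a fixed processor position on a 1-D mesh, moving an array from position i to j costs |i - j|, and we choose where each interior operation is performed. The position set and cost model here are illustrative assumptions, not the paper's exact formulation or its efficient algorithm:

```python
def min_eval_cost(tree, positions):
    """Return {p: cheapest cost to have the tree's value at position p}.
    Leaves are ints (the processor holding that array); interior nodes
    are (left, right) pairs for a binary pointwise operation."""
    if isinstance(tree, int):
        return {p: abs(p - tree) for p in positions}   # cost to ship the leaf
    left, right = tree
    lc = min_eval_cost(left, positions)
    rc = min_eval_cost(right, positions)
    # Perform the op at q (both operands moved to q), then ship result to p.
    at = {q: lc[q] + rc[q] for q in positions}
    return {p: min(at[q] + abs(p - q) for q in positions) for p in positions}

positions = range(8)
tree = ((0, 7), 7)          # (A at 0 + B at 7) * C at 7
costs = min_eval_cost(tree, positions)
best = min(costs.values())  # cheapest placement of the final result
```

    For this tree the cheapest plan costs 7 moves, achieved by evaluating everything at processor 7; the |i - j| mesh metric is one of the robust metrics the abstract's result covers.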

  8. Optimal expression evaluation for data parallel architectures

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1991-01-01

    A data parallel machine represents an array or other composite data structure by allocating one processor per data item. A pointwise operation can be performed between two such arrays in unit time, provided their corresponding elements are allocated in the same processors. If the arrays are not aligned in this fashion, the cost of moving one or both of them is part of the cost of operation. The choice of where to perform the operation then affects this cost. If an expression with several operands is to be evaluated, there may be many choices of where to perform the intermediate operations. An efficient algorithm is given to find the minimum cost way to evaluate an expression, for several different data parallel architectures. The algorithm applies to any architecture in which the metric describing the cost of moving an array has a property called robustness. This encompasses most of the common data parallel communication architectures, including meshes of arbitrary dimension and hypercubes.

  9. Model-Driven Architecture for Agent-Based Systems

    NASA Technical Reports Server (NTRS)

    Gradanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.

    2004-01-01

    The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, using some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar) is selected for the implementation platform. The resulting Cougaar MDA prescribes certain kinds of models to be used, how those models may be prepared and the relationships of the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain level model specifications in order to generate software artifacts. The software artifacts generation is based on a metamodel. Each component maps to a UML structured component which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.

  10. Neural Architectures for Control

    NASA Technical Reports Server (NTRS)

    Peterson, James K.

    1991-01-01

    The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms, the Macintosh, the IBM PC, and the Sun workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real-time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real-time on an MS-DOS PC 386. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog valued obstacle fields. The method constructs a coarse resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real-time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.
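    The CMAC idea itself — overlapping coarse tilings of the input space, with the output formed as a sum of one weight per tiling and trained by simple error correction — can be sketched compactly. Sizes, rates, and the 1-D restriction are illustrative choices, not the report's configuration:

```python
class CMAC:
    """Tiny 1-D CMAC function approximator: several offset tilings
    coarse-code the input, the prediction sums one weight per tiling,
    and training is a least-mean-squares correction."""
    def __init__(self, n_tilings=8, n_bins=16, lo=0.0, hi=1.0, lr=0.1):
        self.n_tilings = n_tilings
        self.lo, self.width = lo, (hi - lo) / n_bins
        self.w = [[0.0] * (n_bins + 1) for _ in range(n_tilings)]
        self.lr = lr

    def _cells(self, x):
        # Each tiling is shifted by a fraction of a bin width.
        for t in range(self.n_tilings):
            offset = t / self.n_tilings * self.width
            yield t, int((x - self.lo + offset) / self.width)

    def predict(self, x):
        return sum(self.w[t][i] for t, i in self._cells(x))

    def train(self, x, target):
        err = target - self.predict(x)
        for t, i in self._cells(x):
            self.w[t][i] += self.lr * err / self.n_tilings

# Learn a toy target, f(x) = x^2, from repeated sweeps over a grid.
model = CMAC()
for _ in range(300):
    for i in range(100):
        x = i / 100.0
        model.train(x, x * x)
```

    The cheap table lookups and local updates are what made on-line, real-time training feasible on the modest hardware described above.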

  11. Noncontextuality with marginal selectivity in reconstructing mental architectures

    PubMed Central

    Zhang, Ru; Dzhafarov, Ehtibar N.

    2015-01-01

    We present a general theory of series-parallel mental architectures with selectively influenced stochastically non-independent components. A mental architecture is a hypothetical network of processes aimed at performing a task, of which we only observe the overall time it takes under variable parameters of the task. It is usually assumed that the network contains several processes selectively influenced by different experimental factors, and then the question is asked as to how these processes are arranged within the network, e.g., whether they are concurrent or sequential. One way of doing this is to consider the distribution functions for the overall processing time and compute certain linear combinations thereof (interaction contrasts). The theory of selective influences in psychology can be viewed as a special application of the interdisciplinary theory of (non)contextuality having its origins and main applications in quantum theory. In particular, lack of contextuality is equivalent to the existence of a “hidden” random entity of which all the random variables in play are functions. Consequently, for any given value of this common random entity, the processing times and their compositions (minima, maxima, or sums) become deterministic quantities. These quantities, in turn, can be treated as random variables with (shifted) Heaviside distribution functions, for which one can easily compute various linear combinations across different treatments, including interaction contrasts. This mathematical fact leads to a simple method, more general than the previously used ones, to investigate and characterize the interaction contrast for different types of series-parallel architectures. PMID:26136694
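    The interaction contrast mentioned above, for a 2×2 design in which two experimental factors selectively influence two processes, is the standard double difference of distribution functions (a sketch of the usual definition in this literature, not the paper's full generalization):

```latex
C(t) = F_{11}(t) - F_{12}(t) - F_{21}(t) + F_{22}(t),
```

    where F_{ij}(t) is the distribution function of the overall processing time with the first factor at level i and the second at level j. The sign pattern of C(t) across t is what diagnoses the series-parallel arrangement; the paper's method computes such contrasts from the (shifted) Heaviside distribution functions obtained by conditioning on the common hidden random entity.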

  12. Partitioning heritability of regulatory and cell-type-specific variants across 11 common diseases.

    PubMed

    Gusev, Alexander; Lee, S Hong; Trynka, Gosia; Finucane, Hilary; Vilhjálmsson, Bjarni J; Xu, Han; Zang, Chongzhi; Ripke, Stephan; Bulik-Sullivan, Brendan; Stahl, Eli; Kähler, Anna K; Hultman, Christina M; Purcell, Shaun M; McCarroll, Steven A; Daly, Mark; Pasaniuc, Bogdan; Sullivan, Patrick F; Neale, Benjamin M; Wray, Naomi R; Raychaudhuri, Soumya; Price, Alkes L

    2014-11-01

    Regulatory and coding variants are known to be enriched with associations identified by genome-wide association studies (GWASs) of complex disease, but their contributions to trait heritability are currently unknown. We applied variance-component methods to imputed genotype data for 11 common diseases to partition the heritability explained by genotyped SNPs (hg(2)) across functional categories (while accounting for shared variance due to linkage disequilibrium). Extensive simulations showed that in contrast to current estimates from GWAS summary statistics, the variance-component approach partitions heritability accurately under a wide range of complex-disease architectures. Across the 11 diseases DNaseI hypersensitivity sites (DHSs) from 217 cell types spanned 16% of imputed SNPs (and 24% of genotyped SNPs) but explained an average of 79% (SE = 8%) of hg(2) from imputed SNPs (5.1× enrichment; p = 3.7 × 10(-17)) and 38% (SE = 4%) of hg(2) from genotyped SNPs (1.6× enrichment, p = 1.0 × 10(-4)). Further enrichment was observed at enhancer DHSs and cell-type-specific DHSs. In contrast, coding variants, which span 1% of the genome, explained <10% of hg(2) despite having the highest enrichment. We replicated these findings but found no significant contribution from rare coding variants in independent schizophrenia cohorts genotyped on GWAS and exome chips. Our results highlight the value of analyzing components of heritability to unravel the functional architecture of common disease. PMID:25439723
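    The fold-enrichment statistic quoted in this abstract is simply the share of SNP heritability a functional category explains divided by the share of SNPs it spans; a two-line check against the reported numbers (the small gap from the reported 5.1× comes from rounding in the abstract):

```python
def enrichment(h2_share, snp_share):
    """Fold-enrichment: fraction of SNP heritability explained by a
    category divided by the fraction of SNPs the category spans."""
    return h2_share / snp_share

# Figures quoted in the abstract for DHSs across 217 cell types.
imputed = enrichment(0.79, 0.16)    # ~4.9x from the rounded shares
genotyped = enrichment(0.38, 0.24)  # ~1.6x
```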

  13. Partitioning Heritability of Regulatory and Cell-Type-Specific Variants across 11 Common Diseases

    PubMed Central

    Gusev, Alexander; Lee, S. Hong; Trynka, Gosia; Finucane, Hilary; Vilhjálmsson, Bjarni J.; Xu, Han; Zang, Chongzhi; Ripke, Stephan; Bulik-Sullivan, Brendan; Stahl, Eli; Ripke, Stephan; Neale, Benjamin M.; Corvin, Aiden; Walters, James T.R.; Farh, Kai-How; Holmans, Peter A.; Lee, Phil; Bulik-Sullivan, Brendan; Collier, David A.; Huang, Hailiang; Pers, Tune H.; Agartz, Ingrid; Agerbo, Esben; Albus, Margot; Alexander, Madeline; Amin, Farooq; Bacanu, Silviu A.; Begemann, Martin; Belliveau, Richard A.; Bene, Judit; Bergen, Sarah E.; Bevilacqua, Elizabeth; Bigdeli, Tim B.; Black, Donald W.; Børglum, Anders D.; Bruggeman, Richard; Buccola, Nancy G.; Buckner, Randy L.; Byerley, William; Cahn, Wiepke; Cai, Guiqing; Campion, Dominique; Cantor, Rita M.; Carr, Vaughan J.; Carrera, Noa; Catts, Stanley V.; Chambert, Kimberly D.; Chan, Raymond C.K.; Chen, Ronald Y.L.; Chen, Eric Y.H.; Cheng, Wei; Cheung, Eric F.C.; Chong, Siow Ann; Cloninger, C. Robert; Cohen, David; Cohen, Nadine; Cormican, Paul; Craddock, Nick; Crowley, James J.; Curtis, David; Davidson, Michael; Davis, Kenneth L.; Degenhardt, Franziska; Del Favero, Jurgen; DeLisi, Lynn E.; Demontis, Ditte; Dikeos, Dimitris; Dinan, Timothy; Djurovic, Srdjan; Donohoe, Gary; Drapeau, Elodie; Duan, Jubao; Dudbridge, Frank; Durmishi, Naser; Eichhammer, Peter; Eriksson, Johan; Escott-Price, Valentina; Essioux, Laurent; Fanous, Ayman H.; Farrell, Martilias S.; Frank, Josef; Franke, Lude; Freedman, Robert; Freimer, Nelson B.; Friedl, Marion; Friedman, Joseph I.; Fromer, Menachem; Genovese, Giulio; Georgieva, Lyudmila; Gershon, Elliot S.; Giegling, Ina; Giusti-Rodríguez, Paola; Godard, Stephanie; Goldstein, Jacqueline I.; Golimbet, Vera; Gopal, Srihari; Gratten, Jacob; Grove, Jakob; de Haan, Lieuwe; Hammer, Christian; Hamshere, Marian L.; Hansen, Mark; Hansen, Thomas; Haroutunian, Vahram; Hartmann, Annette M.; Henskens, Frans A.; Herms, Stefan; Hirschhorn, Joel N.; Hoffmann, Per; Hofman, Andrea; Hollegaard, Mads V.; Hougaard, David M.; Ikeda, Masashi; Joa, Inge; Julià, Antonio; Kahn, René S.; Kalaydjieva, Luba; Karachanak-Yankova, Sena; Karjalainen, Juha; Kavanagh, David; Keller, Matthew C.; Kelly, Brian J.; Kennedy, James L.; Khrunin, Andrey; Kim, Yunjung; Klovins, Janis; Knowles, James A.; Konte, Bettina; Kucinskas, Vaidutis; Kucinskiene, Zita Ausrele; Kuzelova-Ptackova, Hana; Kähler, Anna K.; Laurent, Claudine; Keong, Jimmy Lee Chee; Lee, S. Hong; Legge, Sophie E.; Lerer, Bernard; Li, Miaoxin; Li, Tao; Liang, Kung-Yee; Lieberman, Jeffrey; Limborska, Svetlana; Loughland, Carmel M.; Lubinski, Jan; Lönnqvist, Jouko; Macek, Milan; Magnusson, Patrik K.E.; Maher, Brion S.; Maier, Wolfgang; Mallet, Jacques; Marsal, Sara; Mattheisen, Manuel; Mattingsdal, Morten; McCarley, Robert W.; McDonald, Colm; McIntosh, Andrew M.; Meier, Sandra; Meijer, Carin J.; Melegh, Bela; Melle, Ingrid; Mesholam-Gately, Raquelle I.; Metspalu, Andres; Michie, Patricia T.; Milani, Lili; Milanova, Vihra; Mokrab, Younes; Morris, Derek W.; Mors, Ole; Mortensen, Preben B.; Murphy, Kieran C.; Murray, Robin M.; Myin-Germeys, Inez; Müller-Myhsok, Bertram; Nelis, Mari; Nenadic, Igor; Nertney, Deborah A.; Nestadt, Gerald; Nicodemus, Kristin K.; Nikitina-Zake, Liene; Nisenbaum, Laura; Nordin, Annelie; O’Callaghan, Eadbhard; O’Dushlaine, Colm; O’Neill, F. Anthony; Oh, Sang-Yun; Olincy, Ann; Olsen, Line; Van Os, Jim; Pantelis, Christos; Papadimitriou, George N.; Papiol, Sergi; Parkhomenko, Elena; Pato, Michele T.; Paunio, Tiina; Pejovic-Milovancevic, Milica; Perkins, Diana O.; Pietiläinen, Olli; Pimm, Jonathan; Pocklington, Andrew J.; Powell, John; Price, Alkes; Pulver, Ann E.; Purcell, Shaun M.; Quested, Digby; Rasmussen, Henrik B.; Reichenberg, Abraham; Reimers, Mark A.; Richards, Alexander L.; Roffman, Joshua L.; Roussos, Panos; Ruderfer, Douglas M.; Salomaa, Veikko; Sanders, Alan R.; Schall, Ulrich; Schubert, Christian R.; Schulze, Thomas G.; Schwab, Sibylle G.; Scolnick, Edward M.; Scott, Rodney J.; Seidman, Larry J.; Shi, Jianxin; Sigurdsson, Engilbert; Silagadze, Teimuraz; Silverman, Jeremy M.; Sim, Kang; Slominsky, Petr; Smoller, Jordan W.; So, Hon-Cheong; Spencer, Chris C.A.; Stahl, Eli A.; Stefansson, Hreinn; Steinberg, Stacy; Stögmann, Elisabeth; Straub, Richard E.; Strengman, Eric; Strohmaier, Jana; Stroup, T. Scott; Subramaniam, Mythily; Suvisaari, Jaana; Svrakic, Dragan M.; Szatkiewicz, Jin P.; Söderman, Erik; Thirumalai, Srinivas; Toncheva, Draga; Tooney, Paul A.; Tosato, Sarah; Veijola, Juha; Waddington, John; Walsh, Dermot; Wang, Dai; Wang, Qiang; Webb, Bradley T.; Weiser, Mark; Wildenauer, Dieter B.; Williams, Nigel M.; Williams, Stephanie; Witt, Stephanie H.; Wolen, Aaron R.; Wong, Emily H.M.; Wormley, Brandon K.; Wu, Jing Qin; Xi, Hualin Simon; Zai, Clement C.; Zheng, Xuebin; Zimprich, Fritz; Wray, Naomi R.; Stefansson, Kari; Visscher, Peter M.; Adolfsson, Rolf; Andreassen, Ole A.; Blackwood, Douglas H.R.; Bramon, Elvira; Buxbaum, Joseph D.; Børglum, Anders D.; Cichon, Sven; Darvasi, Ariel; Domenici, Enrico; Ehrenreich, Hannelore; Esko, Tõnu; Gejman, Pablo V.; Gill, Michael; Gurling, Hugh; Hultman, Christina M.; Iwata, Nakao; Jablensky, Assen V.; Jönsson, Erik G.; Kendler, Kenneth S.; Kirov, George; Knight, Jo; Lencz, Todd; Levinson, Douglas F.; Li, Qingqin S.; Liu, Jianjun; Malhotra, Anil K.; McCarroll, Steven A.; McQuillin, Andrew; Moran, Jennifer L.; Mortensen, Preben B.; Mowry, Bryan J.; Nöthen, Markus M.; Ophoff, Roel A.; Owen, Michael J.; Palotie, Aarno; Pato, Carlos N.; Petryshen, Tracey L.; Posthuma, Danielle; Rietschel, Marcella; Riley, Brien P.; Rujescu, Dan; Sham, Pak C.; Sklar, Pamela; St. Clair, David; Weinberger, Daniel R.; Wendland, Jens R.; Werge, Thomas; Daly, Mark J.; Sullivan, Patrick F.; O’Donovan, Michael C.; Ripke, Stephan; O’Dushlaine, Colm; Chambert, Kimberly; Moran, Jennifer L.; Kähler, Anna K.; Akterin, Susanne; Bergen, Sarah; Magnusson, Patrik K.E.; Neale, Benjamin M.; Ruderfer, Douglas; Scolnick, Edward; Purcell, Shaun; McCarroll, Steve; Sklar, Pamela; Hultman, Christina M.; Sullivan, Patrick F.; Kähler, Anna K.; Hultman, Christina M.; Purcell, Shaun M.; McCarroll, Steven A.; Daly, Mark; Pasaniuc, Bogdan; Sullivan, Patrick F.; Neale, Benjamin M.; Wray, Naomi R.; Raychaudhuri, Soumya; Price, Alkes L.

    2014-01-01

    Regulatory and coding variants are known to be enriched with associations identified by genome-wide association studies (GWASs) of complex disease, but their contributions to trait heritability are currently unknown. We applied variance-component methods to imputed genotype data for 11 common diseases to partition the heritability explained by genotyped SNPs (hg2) across functional categories (while accounting for shared variance due to linkage disequilibrium). Extensive simulations showed that in contrast to current estimates from GWAS summary statistics, the variance-component approach partitions heritability accurately under a wide range of complex-disease architectures. Across the 11 diseases DNaseI hypersensitivity sites (DHSs) from 217 cell types spanned 16% of imputed SNPs (and 24% of genotyped SNPs) but explained an average of 79% (SE = 8%) of hg2 from imputed SNPs (5.1× enrichment; p = 3.7 × 10−17) and 38% (SE = 4%) of hg2 from genotyped SNPs (1.6× enrichment, p = 1.0 × 10−4). Further enrichment was observed at enhancer DHSs and cell-type-specific DHSs. In contrast, coding variants, which span 1% of the genome, explained <10% of hg2 despite having the highest enrichment. We replicated these findings but found no significant contribution from rare coding variants in independent schizophrenia cohorts genotyped on GWAS and exome chips. Our results highlight the value of analyzing components of heritability to unravel the functional architecture of common disease. PMID:25439723

  14. A Facility and Architecture for Autonomy Research

    NASA Technical Reports Server (NTRS)

    Pisanich, Greg; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Autonomy is a key enabling factor in the advancement of remote robotic exploration. There is currently a large gap between autonomy software at the research level and software that is ready for insertion into near-term space missions. The Mission Simulation Facility (MSF) will bridge this gap by providing a simulation framework and suite of simulation tools to support research in autonomy for remote exploration. This system will allow developers of autonomy software to test their models in a high-fidelity simulation and evaluate their system's performance against a set of integrated, standardized simulations. The Mission Simulation ToolKit (MST) uses a distributed architecture with a communication layer that is built on top of the standardized High Level Architecture (HLA). This architecture enables the use of existing high-fidelity models, allows mixing of simulation components from various computing platforms, and enforces the use of a standardized high-level interface among components. The components needed to achieve a realistic simulation can be grouped into four categories: environment generation (terrain, environmental features), robotic platform behavior (robot dynamics), instrument models (camera/spectrometer/etc.), and data analysis. The MST will provide basic components in these areas but allows users to easily plug in any refined model by means of a communication protocol. Finally, a description file defines the robot and environment parameters for easy configuration and ensures that all the simulation models share the same information.
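The plug-in arrangement described above can be sketched as a registry of components grouped by the four categories, with the description file supplying the shared robot and environment parameters. All names below are hypothetical illustrations, not the MST API (which communicates over HLA rather than in-process):

```python
# Hypothetical sketch of an MST-style component registry.
CATEGORIES = ("environment", "platform", "instrument", "analysis")

class Simulation:
    def __init__(self, description):
        # The description file supplies parameters shared by all models.
        self.params = dict(description)
        self.components = {c: [] for c in CATEGORIES}

    def register(self, category, component):
        if category not in CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self.components[category].append(component)

    def step(self):
        # Every component sees the same shared parameters.
        return [comp(self.params)
                for comps in self.components.values()
                for comp in comps]

sim = Simulation({"terrain": "crater", "robot_mass_kg": 120})
sim.register("environment", lambda p: f"terrain: {p['terrain']}")
sim.register("platform", lambda p: f"dynamics for {p['robot_mass_kg']} kg")
```

A refined model replaces a basic one simply by registering under the same category, which mirrors the toolkit's stated goal of easy substitution.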

  15. Architectural plasticity in a Mediterranean winter annual

    PubMed Central

    Shemesh, Hagai; Zaitchik, Benjamin; Acuña, Tania; Novoplansky, Ariel

    2012-01-01

    Size variability in plants may be underlain by overlooked components of architectural plasticity. In annual plants, organ sizes are expected to depend on the availability and reliability of resources and developmental time. Given sufficient resources and developmental time, plants are expected to develop a greater number of large branches, which would maximize fitness in the long run. However, under restrictive growth conditions and environmental reliability, developing large branches might be risky and smaller branches are expected to foster higher final fitness. Growth and architecture of Trifolium purpureum (Papilionaceae) plants from both Mediterranean (MED) and semi-arid (SAR) origins were studied, when plants were subjected to variable water availability, photoperiod cues and germination timing. Although no clear architectural plasticity could be found in response to water availability, plants subjected to photoperiod cuing typical to late spring developed fewer basal branches. Furthermore, plants that germinated late were significantly smaller, with fewer basal branches, compared with plants which grew for the same time, starting at the beginning of the growing season. The results demonstrate an intricate interplay between size and architectural plasticities, whereby size modifications are readily induced by environmental factors related to prevalent resource availability but architectural plasticity is only elicited following the perception of reliable anticipatory cues. PMID:22499177

  16. GNC Architecture Design for ARES Simulation. Revision 3.0

    NASA Technical Reports Server (NTRS)

    Gay, Robert

    2006-01-01

    The purpose of this document is to describe the GNC architecture and associated interfaces for all ARES simulations. Establishing a common architecture facilitates development across the ARES simulations and provides an efficient mechanism for creating an end-to-end simulation capability. In general, the GNC architecture is the framework in which all GNC development takes place, including sensor and effector models. All GNC software applications have a standard location within the architecture, making integration easier and thus more efficient.

  17. Common Cause Failure Modeling

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.

    2015-01-01

    Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are dependent failures that can be caused by, for example, system environments, manufacturing, transportation, storage, maintenance, and assembly. Since there are many factors that contribute to CCFs, their effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data are limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) team of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCFs, due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCFs, and that launch vehicles are highly redundant systems by design, it is important to make design decisions that account for a range of values for independent failures and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor effect on the system's failure probability. This presentation defines a CCF and reviews estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the effects of the different CCFs on the reliability of a one-out-of-two system.
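The trade described above can be made concrete with the standard beta-factor model, in which beta is the fraction of a component's total failure probability attributed to common cause. A one-out-of-two system then fails either when both components fail independently or when a single common cause defeats both (a generic textbook sketch, not the MSFC response-surface model itself):

```python
def one_of_two_failure(q_total, beta):
    """Failure probability of a 1-out-of-2 redundant system under the
    beta-factor model: q_total is each component's total failure
    probability; beta is the common-cause fraction."""
    q_ind = (1.0 - beta) * q_total   # independent part of each component
    q_ccf = beta * q_total           # common-cause part (fails both)
    return q_ind ** 2 + q_ccf

# Response-surface-style sweep: redundancy helps less as beta grows.
for beta in (0.0, 0.05, 0.10):
    print(beta, one_of_two_failure(1e-3, beta))
```

With beta = 0 the redundant pair achieves q² = 1e-6; at beta = 0.10 the common-cause term dominates at about 1e-4, two orders of magnitude worse, which is why even modest CCF fractions drive design decisions for redundant systems.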

  18. Common Cause Failure Modeling

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.

    2016-01-01

    Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are dependent failures that can be caused by, for example, system environments, manufacturing, transportation, storage, maintenance, and assembly. Since there are many factors that contribute to CCFs, their effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data are limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) team of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCFs, due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCFs, and that launch vehicles are highly redundant systems by design, it is important to make design decisions that account for a range of values for independent failures and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor effect on the system's failure probability. This presentation defines a CCF and reviews estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the effects of the different CCFs on the reliability of a one-out-of-two system.

  19. Partially Decentralized Control Architectures for Satellite Formations

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Bauer, Frank H.

    2002-01-01

    In a partially decentralized control architecture, more than one but fewer than all nodes have supervisory capability. This paper describes an approach to choosing the number of supervisors in such an architecture, based on a reliability vs. cost trade. It also considers the implications of these results for the design of navigation systems for satellite formations that could be controlled with a partially decentralized architecture. Using an assumed cost model, analytic and simulation-based results indicate that it may be cheaper to achieve a given overall system reliability with a partially decentralized architecture containing only a few supervisors than with either fully decentralized or purely centralized architectures. Nominally, the subset of supervisors may act as centralized estimation and control nodes for corresponding subsets of the remaining subordinate nodes, and act as decentralized estimation and control peers with respect to each other. However, in the context of partially decentralized satellite formation control, the absolute positions and velocities of each spacecraft are unique, so that correlations which make estimates using only local information suboptimal occur only through common biases and process noise. Covariance and Monte Carlo analyses of a simplified system show that this lack of correlation may allow simplification of the local estimators while preserving the global optimality of the maneuvers commanded by the supervisors.

  20. Gravity response mechanisms of lateral organs and the control of plant architecture in Arabidopsis

    NASA Astrophysics Data System (ADS)

    Mullen, J.; Hangarter, R.

    Most research on gravity responses in plants has focused on primary roots and shoots, which typically grow in a vertical orientation. However, the patterns of lateral organ formation and their growth orientation, which typically are not vertical, govern plant architecture. For example, in Arabidopsis, when lateral roots emerge from the primary root, they grow at a nearly horizontal orientation. As they elongate, the roots slowly curve until they eventually reach a vertical orientation. The regulation of this lateral root orientation is an important component affecting the overall root system architecture. We have found that this change in orientation is not simply due to the onset of gravitropic competence, as non-vertical lateral roots are capable of both positive and negative gravitropism. Thus, the horizontal growth of the new lateral roots is determined by what is called the gravitropic set-point angle (GSA). In Arabidopsis shoots, rosette leaves and inflorescence branches also display GSA-dependent developmental changes in their orientation. The developmental control of the GSA of lateral organs in Arabidopsis provides us with a useful system for investigating the components involved in regulating directionality of tropistic responses. We have identified several Arabidopsis mutants that have either altered lateral root orientations, altered orientation of lateral organs in the shoot, or both, but maintain normal primary organ orientation. The mgsa (modified gravitropic set-point angle) mutants with both altered lateral root and shoot orientation show that there are common components in the regulation of growth orientation in the different organs. Rosette leaves and lateral roots also have in common a regulation of positioning by red light. Further molecular and physiological analyses of the GSA mutants will provide insight into the basis of GSA regulation and, thus, a better understanding of how gravity controls plant architecture. [This work was

  1. Neural architectures for stereo vision.

    PubMed

    Parker, Andrew J; Smith, Jackson E T; Krug, Kristine

    2016-06-19

    Stereoscopic vision delivers a sense of depth based on binocular information but additionally acts as a mechanism for achieving correspondence between patterns arriving at the left and right eyes. We analyse quantitatively the cortical architecture for stereoscopic vision in two areas of macaque visual cortex. For primary visual cortex V1, the result is consistent with a module that is isotropic in cortical space with a diameter of at least 3 mm in surface extent. This implies that the module for stereo is larger than the repeat distance between ocular dominance columns in V1. By contrast, in the extrastriate cortical area V5/MT, which has a specialized architecture for stereo depth, the module for representation of stereo is about 1 mm in surface extent, so the representation of stereo in V5/MT is more compressed than V1 in terms of neural wiring of the neocortex. The surface extent estimated for stereo in V5/MT is consistent with measurements of its specialized domains for binocular disparity. Within V1, we suggest that long-range horizontal, anatomical connections form functional modules that serve both binocular and monocular pattern recognition: this common function may explain the distortion and disruption of monocular pattern vision observed in amblyopia. This article is part of the themed issue 'Vision in our three-dimensional world'. PMID:27269604

  2. Neural architectures for stereo vision

    PubMed Central

    2016-01-01

    Stereoscopic vision delivers a sense of depth based on binocular information but additionally acts as a mechanism for achieving correspondence between patterns arriving at the left and right eyes. We analyse quantitatively the cortical architecture for stereoscopic vision in two areas of macaque visual cortex. For primary visual cortex V1, the result is consistent with a module that is isotropic in cortical space with a diameter of at least 3 mm in surface extent. This implies that the module for stereo is larger than the repeat distance between ocular dominance columns in V1. By contrast, in the extrastriate cortical area V5/MT, which has a specialized architecture for stereo depth, the module for representation of stereo is about 1 mm in surface extent, so the representation of stereo in V5/MT is more compressed than V1 in terms of neural wiring of the neocortex. The surface extent estimated for stereo in V5/MT is consistent with measurements of its specialized domains for binocular disparity. Within V1, we suggest that long-range horizontal, anatomical connections form functional modules that serve both binocular and monocular pattern recognition: this common function may explain the distortion and disruption of monocular pattern vision observed in amblyopia. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269604

  3. Performance Engineering Technology for Scientific Component Software

    SciTech Connect

    Malony, Allen D.

    2007-05-08

    Large-scale, complex scientific applications are beginning to benefit from the use of component software design methodology and technology for software development. Integral to the success of component-based applications is the ability to achieve high-performing code solutions through the use of performance engineering tools for both intra-component and inter-component analysis and optimization. Our work on this project aimed to develop performance engineering technology for scientific component software in association with the DOE CCTTSS SciDAC project (active during the contract period) and the broader Common Component Architecture (CCA) community. Our specific implementation objectives were to extend the TAU performance system and Program Database Toolkit (PDT) to support performance instrumentation, measurement, and analysis of CCA components and frameworks, and to develop performance measurement and monitoring infrastructure that could be integrated in CCA applications. These objectives have been met in the completion of all project milestones and in the transfer of the technology into the continuing CCA activities as part of the DOE TASCS SciDAC2 effort. In addition to these achievements, over the past three years, we have been an active member of the CCA Forum, attending all meetings and serving in several working groups, such as the CCA Toolkit working group, the CQoS working group, and the Tutorial working group. We have contributed significantly to CCA tutorials since SC'04, hosted two CCA meetings, participated in the annual ACTS workshops, and were co-authors on the recent CCA journal paper [24]. There are four main areas where our project has delivered results: component performance instrumentation and measurement, component performance modeling and optimization, performance database and data mining, and online performance monitoring. This final report outlines the achievements in these areas for the entire project period. The submitted progress

  4. Hybrid architecture for building secure sensor networks

    NASA Astrophysics Data System (ADS)

    Owens, Ken R., Jr.; Watkins, Steve E.

    2012-04-01

    Sensor networks have various communication and security architectural concerns. Three approaches are defined to address these concerns for sensor networks. The first area is the utilization of new computing architectures that leverage embedded virtualization software on the sensor. Deploying a small, embedded virtualization operating system on the sensor nodes that is designed to communicate to low-cost cloud computing infrastructure in the network is the foundation to delivering low-cost, secure sensor networks. The second area focuses on securing the sensor. Sensor security components include developing an identification scheme, and leveraging authentication algorithms and protocols that address security assurance within the physical, communication network, and application layers. This function will primarily be accomplished through encrypting the communication channel and integrating sensor network firewall and intrusion detection/prevention components to the sensor network architecture. Hence, sensor networks will be able to maintain high levels of security. The third area addresses the real-time and high priority nature of the data that sensor networks collect. This function requires that a quality-of-service (QoS) definition and algorithm be developed for delivering the right data at the right time. A hybrid architecture is proposed that combines software and hardware features to handle network traffic with diverse QoS requirements.

  5. Agent Architectures for Compliance

    NASA Astrophysics Data System (ADS)

    Burgemeestre, Brigitte; Hulstijn, Joris; Tan, Yao-Hua

    A Normative Multi-Agent System consists of autonomous agents who must comply with social norms. Different kinds of norms make different assumptions about the cognitive architecture of the agents. For example, a principle-based norm assumes that agents can reflect upon the consequences of their actions; a rule-based formulation only assumes that agents can avoid violations. In this paper we present several cognitive agent architectures for self-monitoring and compliance. We show how different assumptions about the cognitive architecture lead to different information needs when assessing compliance. The approach is validated with a case study of horizontal monitoring, an approach to corporate tax auditing recently introduced by the Dutch Customs and Tax Authority.
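The different information needs can be illustrated with two toy agent types: a rule-based agent needs only a list of forbidden actions, while a principle-based agent also needs a model that projects the consequences of its actions (illustrative classes, not the authors' formal model):

```python
class RuleBasedAgent:
    """Complies by avoiding explicitly forbidden actions."""
    def __init__(self, forbidden):
        self.forbidden = set(forbidden)

    def permits(self, action):
        return action not in self.forbidden

class PrincipleBasedAgent:
    """Complies by reflecting on whether an action's predicted
    consequences uphold a principle."""
    def __init__(self, predict, principle):
        self.predict = predict      # maps action -> consequences
        self.principle = principle  # maps consequences -> bool

    def permits(self, action):
        return self.principle(self.predict(action))

# An auditor assessing the first agent needs only its action log and
# rule set; assessing the second also requires its consequence model.
rb = RuleBasedAgent(forbidden={"underreport"})
pb = PrincipleBasedAgent(
    predict=lambda a: {"tax_paid": a != "underreport"},
    principle=lambda c: c["tax_paid"])
```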

  6. Software Architecture Design Reasoning

    NASA Astrophysics Data System (ADS)

    Tang, Antony; van Vliet, Hans

    Despite recent advancements in software architecture knowledge management and design rationale modeling, industrial practice is behind in adopting these methods. The lack of empirical proof and the lack of a practical process that can be easily adopted by practitioners are among the hindrances to adoption. In particular, a process to support systematic design reasoning is not available. To rectify this issue, we propose a design reasoning process to help architects cope with an architectural design environment where design concerns are cross-cutting and diversified. We use an industrial case study to validate that the design reasoning process can help improve the quality of software architecture design. The results indicate that associating design concerns and identifying design options are important steps in design reasoning.

  7. Advanced ground station architecture

    NASA Technical Reports Server (NTRS)

    Zillig, David; Benjamin, Ted

    1994-01-01

    This paper describes a new station architecture for NASA's Ground Network (GN). The architecture makes efficient use of emerging technologies to provide dramatic reductions in size, operational complexity, and operational and maintenance costs. The architecture, which is based on recent receiver work sponsored by the Office of Space Communications Advanced Systems Program, allows integration of both GN and Space Network (SN) modes of operation in the same electronics system. It is highly configurable through software and the use of charge-coupled device (CCD) technology to provide a wide range of operating modes. Moreover, it affords modularity of features which are optional depending on the application. The resulting system incorporates advanced RF, digital, and remote control technology capable of introducing significant operational, performance, and cost benefits to a variety of NASA communications and tracking applications.

  8. Parallel supercomputing with commodity components

    SciTech Connect

    Warren, M.S.; Goda, M.P.; Becker, D.J.

    1997-09-01

    We have implemented a parallel computer architecture based entirely upon commodity personal computer components. Using 16 Intel Pentium Pro microprocessors and switched fast ethernet as a communication fabric, we have obtained sustained performance on scientific applications in excess of one Gigaflop. During one production astrophysics treecode simulation, we performed 1.2 × 10^15 floating point operations (1.2 Petaflops) over a three week period, with one phase of that simulation running continuously for two weeks without interruption. We report on a variety of disk, memory and network benchmarks. We also present results from the NAS parallel benchmark suite, which indicate that this architecture is competitive with current commercial architectures. In addition, we describe some software written to support efficient message passing, as well as a Linux device driver interface to the Pentium hardware performance monitoring registers.
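The quoted figures are mutually consistent: 1.2 × 10^15 operations spread over three weeks averages to roughly two thirds of a Gigaflop per second, so sustained application rates "in excess of one Gigaflop" during active phases are plausible. A quick check of the arithmetic:

```python
total_flops = 1.2e15           # total operations over the whole run
seconds = 3 * 7 * 24 * 3600    # three weeks in seconds
avg_gflops = total_flops / seconds / 1e9
print(f"average rate: {avg_gflops:.2f} Gflop/s")  # ~0.66
```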

  9. Parallel supercomputing with commodity components

    NASA Technical Reports Server (NTRS)

    Warren, M. S.; Goda, M. P.; Becker, D. J.

    1997-01-01

    We have implemented a parallel computer architecture based entirely upon commodity personal computer components. Using 16 Intel Pentium Pro microprocessors and switched fast ethernet as a communication fabric, we have obtained sustained performance on scientific applications in excess of one Gigaflop. During one production astrophysics treecode simulation, we performed 1.2 × 10^15 floating point operations (1.2 Petaflops) over a three week period, with one phase of that simulation running continuously for two weeks without interruption. We report on a variety of disk, memory and network benchmarks. We also present results from the NAS parallel benchmark suite, which indicate that this architecture is competitive with current commercial architectures. In addition, we describe some software written to support efficient message passing, as well as a Linux device driver interface to the Pentium hardware performance monitoring registers.

  10. Standardizing the information architecture for spacecraft operations

    NASA Technical Reports Server (NTRS)

    Easton, C. R.

    1994-01-01

    This paper presents an information architecture developed for the Space Station Freedom as a model from which to derive an information architecture standard for advanced spacecraft. The information architecture provides a way of making information available across a program, and among programs, assuming that the information will be in a variety of local formats, structures and representations. It provides a format that can be expanded to define all of the physical and logical elements that make up a program, add definitions as required, and import definitions from prior programs to a new program. It allows a spacecraft and its control center to work in different representations and formats, with the potential for supporting existing spacecraft from new control centers. It supports a common view of data and control of all spacecraft, regardless of their own internal view of their data and control characteristics, and of their communications standards, protocols and formats. This information architecture is central to standardizing spacecraft operations, in that it provides a basis for information transfer and translation, such that diverse spacecraft can be monitored and controlled in a common way.

  11. Synergetics and architecture

    NASA Astrophysics Data System (ADS)

    Maslov, V. P.; Maslova, T. V.

    2008-03-01

    A series of phenomena pertaining to economics, quantum physics, language, literary criticism, and especially architecture is studied from the standpoint of synergetics (the study of self-organizing complex systems). It turns out that a whole series of concrete formulas describing these phenomena is identical across these different situations. This is the case for formulas relating to the Bose-Einstein distribution of particles and the distribution of words from a frequency dictionary. This also allows one to apply a "quantized" form of the Zipf law to the problem of the authorship of Quiet Flows the Don and to the "blending in" of new architectural structures in an existing environment.

  12. Information architecture. Volume 3: Guidance

    SciTech Connect

    1997-04-01

    The purpose of this document, as presented in Volume 1, The Foundations, is to assist the Department of Energy (DOE) in developing and promulgating information architecture guidance. This guidance is aimed at increasing the development of information architecture as a Department-wide management best practice. This document describes departmental information architecture principles and minimum design characteristics for systems and infrastructures within the DOE Information Architecture Conceptual Model, and establishes a Department-wide standards-based architecture program. The publication of this document fulfills the commitment to address guiding principles, promote standard architectural practices, and provide technical guidance. This document guides the transition from the baseline, or de facto, Departmental architecture through approved information management program plans and budgets to the future vision architecture. It also represents another major step toward establishing a well-organized, logical foundation for the DOE information architecture.

  13. Refinery burner simulation design architecture summary.

    SciTech Connect

    Pollock, Guylaine M.; McDonald, Michael James; Halbgewachs, Ronald D.

    2011-10-01

    This report describes the architectural design for a high fidelity simulation of a refinery and refinery burner, including demonstrations of impacts to the refinery if errors occur during the refinery process. The refinery burner model and simulation are a part of the capabilities within the Sandia National Laboratories Virtual Control System Environment (VCSE). Three components comprise the simulation: HMIs developed with commercial SCADA software, a PLC controller, and visualization software. All of these components run on different machines. This design, documented after the simulation development, incorporates aspects not traditionally seen in an architectural design, but that were utilized in this particular demonstration development. Key to the success of this model development and presented in this report are the concepts of the multiple aspects of model design and development that must be considered to capture the necessary model representation fidelity of the physical systems.

  14. No Common Opinion on the Common Core

    ERIC Educational Resources Information Center

    Henderson, Michael B.; Peterson, Paul E.; West, Martin R.

    2015-01-01

    According to the three authors of this article, the 2014 "EdNext" poll yields four especially important new findings: (1) Opinion with respect to the Common Core has yet to coalesce. The idea of a common set of standards across the country has wide appeal, and the Common Core itself still commands the support of a majority of the public.…

  15. Automated component creation for legacy C++ and fortran codes.

    SciTech Connect

    Sottile, M. J.; Rasmussen, C. E.

    2001-01-01

    A significant amount of work has been spent creating component models and programming environments, but little support exists for automation in the process of creating components from existing codes. To entice users to adopt the component-based paradigm over traditional programming models, integration of legacy codes must be as simple and fast as possible. We present a system for automating the IDL generation stage of component development based on source code analysis of legacy C, C++, and Fortran codes using the Program Database Toolkit. Together with IDL compilation tools such as Babel, we provide an alternative to hand-written IDL code for legacy applications and libraries. In addition to generating IDL, we propose an XML-based method for specifying metadata related to type mapping and wrapper generation that can be shared between our tools and IDL compilers. The component model of choice for this work is the Common Component Architecture (CCA) using the Scientific Interface Definition Language (SIDL), though the concepts presented can be applied to other models.
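The kind of automation described can be illustrated at toy scale: scan a declaration and emit an interface stub. The sketch below handles only a single flat C prototype and emits a SIDL-flavored line; the real pipeline uses PDT's full parsers and Babel's IDL tooling, so the regex, function name, and output format here are all hypothetical:

```python
import re

# Match a flat C prototype: return type, name, parameter list.
C_PROTO = re.compile(r"(\w+)\s+(\w+)\s*\(([^)]*)\)\s*;")

def to_interface(prototype):
    """Emit an IDL-style stub (with 'in' parameter modes) from a
    single C function prototype."""
    m = C_PROTO.match(prototype.strip())
    if not m:
        raise ValueError("unrecognized prototype")
    ret, name, args = m.groups()
    params = ", ".join(
        "in " + a.strip() for a in args.split(",") if a.strip())
    return f"{ret} {name}({params});"

stub = to_interface("double dot(double x, double y);")
```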

  16. Candida Biofilms: Development, Architecture, and Resistance

    PubMed Central

    Chandra, Jyotsna; Mukherjee, Pranab K.

    2015-01-01

    Intravascular device–related infections are often associated with biofilms (microbial communities encased within a polysaccharide-rich extracellular matrix) formed by pathogens on the surfaces of these devices. Candida species are the most common fungi isolated from catheter-, denture-, and voice prosthesis–associated infections and also are commonly isolated from contact lens–related infections (e.g., fungal keratitis). These biofilms exhibit decreased susceptibility to most antimicrobial agents, which contributes to the persistence of infection. Recent technological advances have facilitated the development of novel approaches to investigate the formation of biofilms and identify specific markers for biofilms. These studies have provided extensive knowledge of the effect of different variables, including growth time, nutrients, and physiological conditions, on biofilm formation, morphology, and architecture. In this article, we will focus on fungal biofilms (mainly Candida biofilms) and provide an update on the development, architecture, and resistance mechanisms of biofilms. PMID:26350306

  17. HADL: HUMS Architectural Description Language

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.; Adavi, V.; Agarwal, N.; Gullapalli, S.; Kumar, P.; Sundaram, P.

    2004-01-01

    Specification of architectures is an important prerequisite for the evaluation of architectures. With the growth of health usage and monitoring systems (HUMS) in commercial and military domains, the need for the design and evaluation of HUMS architectures has also been increasing. In this paper, we describe HADL, the HUMS Architectural Description Language, that we have designed for this purpose. In particular, we describe the features of the language, illustrate them with examples, and show how we use it in designing domain-specific HUMS architectures. A companion paper contains details on our design methodology for HUMS architectures.

  18. IDD Archival Hardware Architecture and Workflow

    SciTech Connect

    Mendonsa, D; Nekoogar, F; Martz, H

    2008-10-09

    This document describes the functionality of every component in the DHS/IDD archival and storage hardware system shown in Fig. 1. The document describes the step-by-step process of image data being received at LLNL, then being processed and made available to authorized personnel and collaborators. Throughout this document, references will be made to one of two figures: Fig. 1, describing the elements of the architecture, and Fig. 2, describing the workflow and how the project utilizes the available hardware.

  19. American School & University Architectural Portfolio 2000 Awards: Landscape Architecture.

    ERIC Educational Resources Information Center

    American School & University, 2000

    2000-01-01

    Presents photographs and basic information on architectural design, costs, square footage, and principal designers of the award-winning school landscaping projects that competed in the American School & University Architectural Portfolio 2000. (GR)

  20. The information architecture of behavior change websites.

    PubMed

    Danaher, Brian G; McKay, H Garth; Seeley, John R

    2005-01-01

    The extraordinary growth in Internet use offers researchers important new opportunities to identify and test new ways to deliver effective behavior change programs. The information architecture (IA), the structure of website information, is an important but often overlooked factor to consider when adapting behavioral strategies developed in office-based settings for Web delivery. Using examples and relevant perspectives from multiple disciplines, we describe a continuum of website IA designs ranging from a matrix design to the tunnel design. The free-form matrix IA design allows users free rein to use multiple hyperlinks to explore available content according to their idiosyncratic interests. The more directive tunnel IA design (commonly used in e-learning courses) guides users step-by-step through a series of Web pages that are arranged in a particular order to improve the chances of achieving a goal that is measurable and consistent. Other IA designs are also discussed, including hierarchical IA and hybrid IA designs. In the hierarchical IA design, program content is arranged in a top-down manner, which helps the user find content of interest. The more complex hybrid IA design incorporates some combination of components that use matrix, tunnel, and/or hierarchical IA designs. Each of these IA designs is discussed in terms of usability, participant engagement, and program tailoring, as well as how they might best be matched with different behavior change goals (using Web-based smoking cessation interventions as examples). Our presentation underscores the role of considering and clearly reporting the use of IA designs when creating effective Web-based interventions. We also encourage the adoption of a multidisciplinary perspective as we move towards a more mature view of Internet intervention research. PMID:15914459
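The matrix and tunnel designs described above can be contrasted as navigation policies over the same set of pages. The sketch below is a hypothetical illustration (page names and the `allowed_next` helper are invented, not from the article): a tunnel permits only the next page in a fixed sequence, while a matrix permits free movement.

```python
# Hypothetical sketch of tunnel vs. matrix IA designs as navigation rules.
# Page names are illustrative stand-ins for intervention content.

def allowed_next(design, pages, current):
    """Return the pages reachable from `current` under a given IA design."""
    if design == "tunnel":
        # Tunnel IA: users move step-by-step through a fixed sequence.
        i = pages.index(current)
        return [pages[i + 1]] if i + 1 < len(pages) else []
    if design == "matrix":
        # Matrix IA: free-form browsing; every other page is reachable.
        return [p for p in pages if p != current]
    raise ValueError(f"unknown design: {design}")

pages = ["intro", "set_quit_date", "coping_skills", "relapse_plan"]
print(allowed_next("tunnel", pages, "intro"))   # ['set_quit_date']
print(allowed_next("matrix", pages, "intro"))   # the other three pages
```

A hierarchical or hybrid design would correspond to a tree or a mixture of these two rules.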

  1. Tutorial on architectural acoustics

    NASA Astrophysics Data System (ADS)

    Shaw, Neil; Talaske, Rick; Bistafa, Sylvio

    2002-11-01

    This tutorial is intended to provide an overview of current knowledge and practice in architectural acoustics. Topics covered will include basic concepts and history, acoustics of small rooms (small rooms for speech such as classrooms and meeting rooms, music studios, small critical listening spaces such as home theatres) and the acoustics of large rooms (larger assembly halls, auditoria, and performance halls).

  2. 1989 Architectural Exhibition Winners.

    ERIC Educational Resources Information Center

    School Business Affairs, 1990

    1990-01-01

    Winners of the 1989 Architectural Exhibition sponsored annually by the ASBO International's School Facilities Research Committee include the Brevard Performing Arts Center (Melbourne, Florida), the Capital High School (Santa Fe, New Mexico), Gage Elementary School (Rochester, Minnesota), the Lakewood (Ohio) High School Natatorium, and three other…

  3. Emulating an MIMD architecture

    SciTech Connect

    Su Bogong; Grishman, R.

    1982-01-01

    As part of a research effort in parallel processor architecture and programming, the ultracomputer group at New York University has performed extensive simulation of parallel programs. To speed up these simulations, a parallel processor emulator, using the microprogrammable Puma computer system previously designed and built at NYU, has been developed. 8 references.

  4. System Building and Architecture.

    ERIC Educational Resources Information Center

    Robbie, Roderick G.

    The technical director of the Metropolitan Toronto School Boards Study of Educational Facilities (SEF) presents a description of the general theory and execution of the first SEF building system, and his views on the general principles of system building as they might affect architecture and the economy. (TC)

  5. Making Connections through Architecture.

    ERIC Educational Resources Information Center

    Hollingsworth, Patricia

    1993-01-01

    The Center for Arts and Sciences (Oklahoma) developed an interdisciplinary curriculum for disadvantaged gifted children on styles of architecture, called "Discovering Patterns in the Built Environment." This article describes the content and processes used in the curriculum, as well as other programs of the center, such as teacher workshops,…

  6. GNU debugger internal architecture

    SciTech Connect

    Miller, P.; Nessett, D.; Pizzi, R.

    1993-12-16

    This document describes the internal architecture and implementation of the GNU debugger, gdb. Topics include inferior process management, command execution, symbol table management, and remote debugging. Call graphs for specific functions are supplied. This document is not a complete description but offers a developer an overview as a starting point before modification.

  7. Test Architecture, Test Retrofit

    ERIC Educational Resources Information Center

    Fulcher, Glenn; Davidson, Fred

    2009-01-01

    Just like buildings, tests are designed and built for specific purposes, people, and uses. However, both buildings and tests grow and change over time as the needs of their users change. Sometimes, they are also both used for purposes other than those intended in the original designs. This paper explores architecture as a metaphor for language…

  8. INL Generic Robot Architecture

    Energy Science and Technology Software Center (ESTSC)

    2005-03-30

    The INL Generic Robot Architecture is a generic, extensible software framework that can be applied across a variety of different robot geometries, sensor suites and low-level proprietary control application programming interfaces (e.g. mobility, aria, aware, player, etc.).

  9. RootScape: a landmark-based system for rapid screening of root architecture in Arabidopsis.

    PubMed

    Ristova, Daniela; Rosas, Ulises; Krouk, Gabriel; Ruffel, Sandrine; Birnbaum, Kenneth D; Coruzzi, Gloria M

    2013-03-01

    The architecture of plant roots affects essential functions including nutrient and water uptake, soil anchorage, and symbiotic interactions. Root architecture comprises many features that arise from the growth of the primary and lateral roots. These root features are dictated by the genetic background but are also highly responsive to the environment. Thus, root system architecture (RSA) represents an important and complex trait that is highly variable, affected by genotype × environment interactions, and relevant to survival/performance. Quantification of RSA in Arabidopsis (Arabidopsis thaliana) using plate-based tissue culture is a very common and relatively rapid assay, but quantifying RSA represents an experimental bottleneck when it comes to medium- or high-throughput approaches used in mutant or genotype screens. Here, we present RootScape, a landmark-based allometric method for rapid phenotyping of RSA using Arabidopsis as a case study. Using the software AAMToolbox, we created a 20-point landmark model that captures RSA as one integrated trait and used this model to quantify changes in the RSA of Arabidopsis (Columbia) wild-type plants grown under different hormone treatments. Principal component analysis was used to compare RootScape with conventional methods designed to measure root architecture. This analysis showed that RootScape efficiently captured nearly all the variation in root architecture detected by measuring individual root traits and is 5 to 10 times faster than conventional scoring. We validated RootScape by quantifying the plasticity of RSA in several mutant lines affected in hormone signaling. The RootScape analysis recapitulated previous results that described complex phenotypes in the mutants and identified novel gene × environment interactions. PMID:23335624
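The landmark-based approach above reduces each root system to a fixed-length vector of coordinates and summarizes variation with principal component analysis. The sketch below is an illustration of that general idea only, not the AAMToolbox implementation; the landmark data here are random stand-ins.

```python
import numpy as np

# Illustrative PCA on landmark coordinates (not the AAMToolbox method):
# each plant is a flattened vector of 20 (x, y) landmarks, and PCA
# summarizes root-architecture variation across plants.

rng = np.random.default_rng(0)
n_plants, n_landmarks = 30, 20
X = rng.normal(size=(n_plants, n_landmarks * 2))  # one row per plant

Xc = X - X.mean(axis=0)                 # center the landmark vectors
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                      # principal-component scores
explained = S**2 / np.sum(S**2)         # variance fraction per component

print(scores.shape)                     # (30, 30)
```

In the real workflow, mutant or treatment groups would then be compared in this low-dimensional score space rather than trait by trait.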

  10. Development of a Conceptual Structure for Architectural Solar Energy Systems.

    ERIC Educational Resources Information Center

    Ringel, Robert F.

    Solar subsystems and components were identified and a conceptual structure was developed for architectural solar energy heating and cooling systems. Recent literature related to solar energy systems was reviewed and analyzed. Solar heating and cooling system, subsystem, and component data were compared for agreement and completeness. Significant…

  11. A high performance parallel computing architecture for robust image features

    NASA Astrophysics Data System (ADS)

    Zhou, Renyan; Liu, Leibo; Wei, Shaojun

    2014-03-01

    A design of parallel architecture for image feature detection and description is proposed in this article. The major component of this architecture is a 2D cellular network composed of simple reprogrammable processors, enabling the Hessian Blob Detector and Haar Response Calculation, which are the most computing-intensive stages of the Speeded Up Robust Features (SURF) algorithm. Combining this 2D cellular network and dedicated hardware for SURF descriptors, this architecture achieves real-time image feature detection with minimal software in the host processor. A prototype FPGA implementation of the proposed architecture achieves 1318.9 GOPS of general pixel processing at a 100 MHz clock and achieves up to 118 fps in VGA (640 × 480) image feature detection. The proposed architecture is stand-alone and scalable, so it is easy to migrate to a VLSI implementation.

  12. Updates to the NASA Space Telecommunications Radio System (STRS) Architecture

    NASA Technical Reports Server (NTRS)

    Kacpura, Thomas J.; Handler, Louis M.; Briones, Janette; Hall, Charles S.

    2008-01-01

    This paper describes an update of the Space Telecommunications Radio System (STRS) open architecture for NASA space-based radios. The STRS architecture has been defined as a framework for the design, development, operation, and upgrade of space-based software-defined radios, where processing resources are constrained. The architecture has been updated based upon reviews by NASA missions, radio providers, and component vendors. The STRS Standard prescribes the architectural relationship between the software elements used in software execution and defines the Application Programmer Interface (API) between the operating environment and the waveform application. Modeling tools have been adopted to present the architecture. The paper will present a description of the updated API, configuration files, and constraints. Minimum compliance is discussed for early implementations. The paper then closes with a summary of the changes made and discussion of the relevant alignment with the Object Management Group (OMG) SWRadio specification, and enhancements to the specialized signal processing abstraction.

  13. Commanding Constellations (Pipeline Architecture)

    NASA Technical Reports Server (NTRS)

    Ray, Tim; Condron, Jeff

    2003-01-01

    Providing ground command software for constellations of spacecraft is a challenging problem. Reliable command delivery requires a feedback loop; for a constellation there will likely be an independent feedback loop for each constellation member. Each command must be sent via the proper Ground Station, which may change from one contact to the next (and may be different for different members). Dynamic configuration of the ground command software is usually required (e.g. directives to configure each member's feedback loop and assign the appropriate Ground Station). For testing purposes, there must be a way to insert command data at any level in the protocol stack. The Pipeline architecture described in this paper can support all these capabilities with a sequence of software modules (the pipeline), and a single self-identifying message format (for all types of command data and configuration directives). The Pipeline architecture is quite simple, yet it can solve some complex problems. The resulting solutions are conceptually simple, and therefore, reliable. They are also modular, and therefore, easy to distribute and extend. We first used the Pipeline architecture to design a CCSDS (Consultative Committee for Space Data Systems) Ground Telecommand system (to command one spacecraft at a time with a fixed Ground Station interface). This pipeline was later extended to include gateways to any of several Ground Stations. The resulting pipeline was then extended to handle a small constellation of spacecraft. The use of the Pipeline architecture allowed us to easily handle the increasing complexity. This paper will describe the Pipeline architecture, show how it was used to solve each of the above commanding situations, and how it can easily be extended to handle larger constellations.
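The pipeline idea above, a sequence of modules consuming a single self-identifying message format, can be sketched in a few lines. The module and message names below are hypothetical illustrations, not the CCSDS system's actual interfaces.

```python
# Minimal sketch of the Pipeline architecture: each module either consumes
# a self-identifying message (e.g. a configuration directive) or transforms
# and forwards it downstream. Names are invented for illustration.

class ConfigModule:
    """Consumes 'configure' directives, e.g. assigning a Ground Station."""
    def __init__(self):
        self.station = None
    def handles(self, msg):
        return msg["type"] == "configure"
    def process(self, msg):
        self.station = msg["ground_station"]
        return None  # directive consumed; nothing passes downstream

class CommandModule:
    """Wraps command data into a frame for the next stage."""
    def handles(self, msg):
        return msg["type"] == "command"
    def process(self, msg):
        return {"type": "frame", "payload": msg["payload"]}

def run_pipeline(modules, msg):
    """Pass one message through the pipeline, module by module."""
    for m in modules:
        if msg is None:
            break
        if m.handles(msg):
            msg = m.process(msg)
    return msg

pipeline = [ConfigModule(), CommandModule()]
run_pipeline(pipeline, {"type": "configure", "ground_station": "GS-1"})
out = run_pipeline(pipeline, {"type": "command", "payload": b"NOOP"})
print(out)  # {'type': 'frame', 'payload': b'NOOP'}
```

Because every message carries its own type, the same entry point serves command data, configuration directives, and test injection at any stage, which is how the architecture scales from one spacecraft to a constellation.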

  14. Shaping plant architecture.

    PubMed

    Teichmann, Thomas; Muhr, Merlin

    2015-01-01

    Plants exhibit phenotypical plasticity. Their general body plan is genetically determined, but plant architecture and branching patterns are variable and can be adjusted to the prevailing environmental conditions. The modular design of the plant facilitates such morphological adaptations. The prerequisite for the formation of a branch is the initiation of an axillary meristem. Here, we review the current knowledge about this process. After its establishment, the meristem can develop into a bud which can either become dormant or grow out and form a branch. Many endogenous factors, such as photoassimilate availability, and exogenous factors like nutrient availability or shading, have to be integrated in the decision whether a branch is formed. The underlying regulatory network is complex and involves phytohormones and transcription factors. The hormone auxin is derived from the shoot apex and inhibits bud outgrowth indirectly in a process termed apical dominance. Strigolactones appear to modulate apical dominance by modification of auxin fluxes. Furthermore, the transcription factor BRANCHED1 plays a central role. The exact interplay of all these factors still remains obscure and there are alternative models. We discuss recent findings in the field along with the major models. Plant architecture is economically significant because it affects important traits of crop and ornamental plants, as well as trees cultivated in forestry or on short rotation coppices. As a consequence, plant architecture has been modified during plant domestication. Research revealed that only few key genes have been the target of selection during plant domestication and in breeding programs. Here, we discuss such findings on the basis of various examples. Architectural ideotypes that provide advantages for crop plant management and yield are described. 
We also outline the potential of breeding and biotechnological approaches to further modify and improve plant architecture for economic needs.

  15. Shaping plant architecture

    PubMed Central

    Teichmann, Thomas; Muhr, Merlin

    2015-01-01

    Plants exhibit phenotypical plasticity. Their general body plan is genetically determined, but plant architecture and branching patterns are variable and can be adjusted to the prevailing environmental conditions. The modular design of the plant facilitates such morphological adaptations. The prerequisite for the formation of a branch is the initiation of an axillary meristem. Here, we review the current knowledge about this process. After its establishment, the meristem can develop into a bud which can either become dormant or grow out and form a branch. Many endogenous factors, such as photoassimilate availability, and exogenous factors like nutrient availability or shading, have to be integrated in the decision whether a branch is formed. The underlying regulatory network is complex and involves phytohormones and transcription factors. The hormone auxin is derived from the shoot apex and inhibits bud outgrowth indirectly in a process termed apical dominance. Strigolactones appear to modulate apical dominance by modification of auxin fluxes. Furthermore, the transcription factor BRANCHED1 plays a central role. The exact interplay of all these factors still remains obscure and there are alternative models. We discuss recent findings in the field along with the major models. Plant architecture is economically significant because it affects important traits of crop and ornamental plants, as well as trees cultivated in forestry or on short rotation coppices. As a consequence, plant architecture has been modified during plant domestication. Research revealed that only few key genes have been the target of selection during plant domestication and in breeding programs. Here, we discuss such findings on the basis of various examples. Architectural ideotypes that provide advantages for crop plant management and yield are described. 
We also outline the potential of breeding and biotechnological approaches to further modify and improve plant architecture for economic needs.

  16. ACOUSTICS IN ARCHITECTURAL DESIGN, AN ANNOTATED BIBLIOGRAPHY ON ARCHITECTURAL ACOUSTICS.

    ERIC Educational Resources Information Center

    DOELLE, LESLIE L.

    The purpose of this annotated bibliography on architectural acoustics was: (1) to compile a classified bibliography, including most of those publications on architectural acoustics published in English, French, and German which can supply a useful and up-to-date source of information for those encountering any architectural-acoustic design…

  17. 11. Photocopy of architectural drawing (from National Archives Architectural and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. Photocopy of architectural drawing (from National Archives Architectural and Cartographic Branch Alexandria, Va.) 'Non-Com-Officers Qrs.' Quartermaster General's Office Standard Plan 82, sheet 1. Lithograph on linen architectural drawing. April 1893 3 ELEVATIONS, 3 PLANS AND A PARTIAL SECTION - Fort Myer, Non-Commissioned Officers Quarters, Washington Avenue between Johnson Lane & Custer Road, Arlington, Arlington County, VA

  18. 12. Photocopy of architectural drawing (from National Archives Architectural and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. Photocopy of architectural drawing (from National Archives Architectural and Cartographic Branch, Alexandria, Va.) 'Non-Com-Officers Qrs.' Quartermaster General's Office Standard Plan 82, sheet 2, April 1893. Lithograph on linen architectural drawing. DETAILS - Fort Myer, Non-Commissioned Officers Quarters, Washington Avenue between Johnson Lane & Custer Road, Arlington, Arlington County, VA

  19. Brain components

    MedlinePlus Videos and Cool Tools

    The brain is composed of more than a thousand billion neurons. Specific groups of them, working in concert, provide ... of information. The 3 major components of the brain are the cerebrum, cerebellum, and brain stem. The ...

  20. Moving the Hazard Prediction and Assessment Capability to a Distributed, Portable Architecture

    SciTech Connect

    Lee, RW

    2002-09-05

    The Hazard Prediction and Assessment Capability (HPAC) has been re-engineered from a Windows application with tight binding between computation and a graphical user interface (GUI) to a new distributed object architecture. The key goals of this new architecture are platform portability, extensibility, deployment flexibility, client-server operations, easy integration with other systems, and support for a new map-based GUI. Selection of Java as the development and runtime environment is the major factor in achieving each of these goals, platform portability in particular. Portability is further enforced by allowing only Java components in the client. Extensibility is achieved via Java's dynamic binding and class loading capabilities and a design-by-interface approach. HPAC supports deployment on a standalone host, as a heavy client in client-server mode with data stored on the client but calculations performed on the server host, and as a thin client with data and calculations on the server host. The principal architectural element supporting deployment flexibility is the use of Universal Resource Locators (URLs) for all file references. Java WebStart is used for thin client deployment. Although there were many choices for the object distribution mechanism, the Common Object Request Broker Architecture (CORBA) was chosen to support HPAC client-server operation. HPAC complies with version 2.0 of the CORBA standard and does not assume support for pass-by-value method arguments. Execution in standalone mode is expedited by having most server objects run in the same process as client objects, thereby bypassing CORBA object transport. HPAC provides four levels for access by other tools and systems, starting with a Windows library providing transport and dispersion (T&D) calculations and output generation, followed by detailed and more abstract sets of CORBA services, and reusable Java components.

  1. Common Control System Vulnerability

    SciTech Connect

    Trent Nelson

    2005-12-01

    The Control Systems Security Program and other programs within the Idaho National Laboratory have discovered a vulnerability common to control systems in all sectors that allows an attacker to penetrate most control systems, spoof the operator, and gain full control of targeted system elements. This vulnerability has been identified on several systems that have been evaluated at INL, and in each case a 100% success rate of completing the attack paths that lead to full system compromise was observed. Since these systems are employed in multiple critical infrastructure sectors, this vulnerability is deemed common to control systems in all sectors. Modern control systems architectures can be considered analogous to today's information networks, and as such are usually approached by attackers using a common attack methodology to penetrate deeper and deeper into the network. This approach often is composed of several phases, including gaining access to the control network, reconnaissance, profiling of vulnerabilities, launching attacks, escalating privilege, maintaining access, and obscuring or removing information that indicates that an intruder was on the system. With irrefutable proof that an external attack can lead to a compromise of a computing resource on the organization's business local area network (LAN), access to the control network is usually considered the first phase in the attack plan. Once the attacker gains access to the control network through direct connections and/or the business LAN, the second phase of reconnaissance begins with traffic analysis within the control domain. Thus, the communications between the workstations and the field device controllers can be monitored and evaluated, allowing an attacker to capture, analyze, and evaluate the commands sent among the control equipment. 
Through manipulation of the communication protocols of control systems (a process generally referred to as ''reverse engineering''), an attacker can then map out the

  2. An Experiment in Architectural Instruction.

    ERIC Educational Resources Information Center

    Dvorak, Robert W.

    1978-01-01

    Discusses the application of the PLATO IV computer-based educational system to a one-semester basic drawing course for freshman architecture, landscape architecture, and interior design students and relates student reactions to the experience. (RAO)

  3. Storage system architectures and their characteristics

    NASA Technical Reports Server (NTRS)

    Sarandrea, Bryan M.

    1993-01-01

    Not all users' storage requirements call for 20 MB/s data transfer rates, multi-tier file or data migration schemes, or even automated retrieval of data. The number of available storage solutions reflects the broad range of user requirements. It is foolish to think that any one solution can address the complete range of requirements. For users with simple off-line storage requirements, the cost and complexity of high-end solutions would provide no advantage over a more simple solution. The correct answer is to match the requirements of a particular storage need to the various attributes of the available solutions. The goal of this paper is to introduce basic concepts of archiving and storage management in combination with the most common architectures and to provide some insight into how these concepts and architectures address various storage problems. The intent is to provide potential consumers of storage technology with a framework within which to begin the hunt for a solution which meets their particular needs. This paper is not intended to be an exhaustive study or to address all possible solutions or new technologies, but is intended to be a more practical treatment of today's storage system alternatives. Since most commercial storage systems today are built on Open Systems concepts, the majority of these solutions are hosted on the UNIX operating system. For this reason, some of the architectural issues discussed focus around specific UNIX architectural concepts. However, most of the architectures are operating system independent and the conclusions are applicable to such architectures on any operating system.

  4. Controlling Material Reactivity Using Architecture.

    PubMed

    Sullivan, Kyle T; Zhu, Cheng; Duoss, Eric B; Gash, Alexander E; Kolesky, David B; Kuntz, Joshua D; Lewis, Jennifer A; Spadaccini, Christopher M

    2016-03-01

    3D-printing methods are used to generate reactive material architectures. Several geometric parameters are observed to influence the resultant flame propagation velocity, indicating that the architecture can be utilized to control reactivity. Two different architectures, channels and hurdles, are generated, and thin films of thermite are deposited onto the surface. The architecture offers an additional route to control, at will, the energy release rate in reactive composite materials. PMID:26669517

  5. Contrasting the Genetic Architecture of 30 Complex Traits from Summary Association Data.

    PubMed

    Shi, Huwenbo; Kichaev, Gleb; Pasaniuc, Bogdan

    2016-07-01

    Variance-component methods that estimate the aggregate contribution of large sets of variants to the heritability of complex traits have yielded important insights into the genetic architecture of common diseases. Here, we introduce methods that estimate the total trait variance explained by the typed variants at a single locus in the genome (local SNP heritability) from genome-wide association study (GWAS) summary data while accounting for linkage disequilibrium among variants. We applied our estimator to ultra-large-scale GWAS summary data of 30 common traits and diseases to gain insights into their local genetic architecture. First, we found that common SNPs have a high contribution to the heritability of all studied traits. Second, we identified traits for which the majority of the SNP heritability can be confined to a small percentage of the genome. Third, we identified GWAS risk loci where the entire locus explains significantly more variance in the trait than the GWAS reported variants. Finally, we identified loci that explain a significant amount of heritability across multiple traits. PMID:27346688

  6. The common ancestry of life

    PubMed Central

    2010-01-01

    Background It is common belief that all cellular life forms on earth have a common origin. This view is supported by the universality of the genetic code and the universal conservation of multiple genes, particularly those that encode key components of the translation system. A remarkable recent study claims to provide a formal, homology independent test of the Universal Common Ancestry hypothesis by comparing the ability of a common-ancestry model and a multiple-ancestry model to predict sequences of universally conserved proteins. Results We devised a computational experiment on a concatenated alignment of universally conserved proteins which shows that the purported demonstration of the universal common ancestry is a trivial consequence of significant sequence similarity between the analyzed proteins. The nature and origin of this similarity are irrelevant for the prediction of "common ancestry" by the model-comparison approach. Thus, homology (common origin) of the compared proteins remains an inference from sequence similarity rather than an independent property demonstrated by the likelihood analysis. Conclusion A formal demonstration of the Universal Common Ancestry hypothesis has not been achieved and is unlikely to be feasible in principle. Nevertheless, the evidence in support of this hypothesis provided by comparative genomics is overwhelming. Reviewers: This article was reviewed by William Martin, Ivan Iossifov (nominated by Andrey Rzhetsky) and Arcady Mushegian. For the complete reviews, see the Reviewers' Report section. PMID:21087490

  7. ROADM architectures and technologies for agile optical networks

    NASA Astrophysics Data System (ADS)

    Eldada, Louay A.

    2007-02-01

    We review the different optoelectronic component and module technologies that have been developed for use in ROADM subsystems, and describe their principles of operation, designs, features, advantages, and challenges. We also describe the various needs for reconfigurable optical add/drop switching in agile optical networks. For each network need, we present the different ROADM subsystem architecture options with their pros and cons, and describe the optoelectronic technologies supporting each architecture.

  8. Requirements for a need-to-know (NTK) architecture

    SciTech Connect

    Nuclear Information Working Group; Computer Security Working Group

    1996-05-01

    The purpose of this document is to present requirements for a network architecture which can be used between sites within the DOE complex to transfer classified and sensitive unclassified information requiring need-to-know separation. The network will not be multilevel; all users will have a Q clearance. The architecture includes hardware and software of the network components and computer resources connected to the network, the computer security features implemented, and the operation procedures needed to implement the network.

  9. Simulation system architecture design for generic communications link

    NASA Technical Reports Server (NTRS)

    Tsang, Chit-Sang; Ratliff, Jim

    1986-01-01

    This paper addresses a computer simulation system architecture design for generic digital communications systems. It addresses the issues of an overall system architecture in order to achieve a user-friendly, efficient, and yet easily implementable simulation system. The system block diagram and its individual functional components are described in detail. Software implementation is discussed with the VAX/VMS operating system used as a target environment.

  10. A multi-agent architecture for geosimulation of moving agents

    NASA Astrophysics Data System (ADS)

    Vahidnia, Mohammad H.; Alesheikh, Ali A.; Alavipanah, Seyed Kazem

    2015-10-01

    In this paper, a novel architecture is proposed in which an axiomatic derivation system in the form of first-order logic facilitates declarative explanation and spatial reasoning. Simulation of environmental perception and interaction between autonomous agents is designed with a geographic belief-desire-intention and a request-inform-query model. The architecture has a complementary quantitative component that supports collaborative planning based on the concept of equilibrium and game theory. This new architecture presents a departure from current best practices in geographic agent-based modelling. Implementation tasks are discussed in some detail, as well as scenarios for fleet management and disaster management.
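
A single belief-desire-intention deliberation step can be sketched very compactly. The toy below is our own illustration of the generic BDI idea only, not the paper's geographic BDI or request-inform-query model; all names and rules are invented.

```python
# One BDI deliberation step: pick the first desire whose precondition
# holds in the agent's current beliefs, and commit to its action.
def bdi_step(beliefs, desires):
    for goal, precondition, action in desires:
        if precondition(beliefs):
            return goal, action        # adopted intention
    return None, "wait"

# Hypothetical moving agent: beliefs are a dict, desires are prioritized.
beliefs = {"fuel": 0.2, "at_depot": False}
desires = [
    ("refuel",  lambda b: b["fuel"] < 0.3,  "goto_depot"),
    ("deliver", lambda b: b["fuel"] >= 0.3, "goto_customer"),
]
print(bdi_step(beliefs, desires))   # ('refuel', 'goto_depot')
```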

  11. Cognitive Architectures for Multimedia Learning

    ERIC Educational Resources Information Center

    Reed, Stephen K.

    2006-01-01

    This article provides a tutorial overview of cognitive architectures that can form a theoretical foundation for designing multimedia instruction. Cognitive architectures include a description of memory stores, memory codes, and cognitive operations. Architectures that are relevant to multimedia learning include Paivio's dual coding theory,…

  12. Generalized architecture for DOA estimation for wideband/narrowband sources

    NASA Astrophysics Data System (ADS)

    Tabar, R.; Jamali, Mohsin M.; Kwatra, S. C.; Djouadi, A. H.

    1993-10-01

    High-resolution direction-of-arrival (DOA) estimation algorithms are studied in order to develop an architecture for real-time applications. The method for DOA estimation of wideband sources proposed by Buckley and Griffiths and the MUSIC algorithm for narrowband sources proposed by Schmidt have been selected for hardware implementation. These algorithms have been simplified and generalized into one common programmable algorithm, which is then parallelized and executed in a pipelined fashion. A parallel architecture has been designed for this generalized algorithm.
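
The narrowband MUSIC algorithm named above can be sketched as follows. This is the generic textbook formulation for a half-wavelength uniform linear array, not the paper's pipelined hardware version; array size, angles, and noise level are illustrative.

```python
import numpy as np

def music_doa(X, n_sources, n_grid=361):
    """MUSIC DOA estimate for a half-wavelength ULA.

    X: (n_sensors, n_snapshots) complex snapshot matrix.
    Returns the angles (degrees) of the n_sources largest pseudospectrum peaks.
    """
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    _, V = np.linalg.eigh(R)                   # eigenvalues ascending
    En = V[:, :M - n_sources]                  # noise-subspace eigenvectors
    angles = np.linspace(-90, 90, n_grid)
    m = np.arange(M)
    P = np.empty(n_grid)
    for i, th in enumerate(np.radians(angles)):
        a = np.exp(1j * np.pi * m * np.sin(th))          # steering vector
        P[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    peaks = [i for i in range(1, n_grid - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
    peaks.sort(key=lambda i: P[i], reverse=True)
    return sorted(angles[peaks[:n_sources]])

# Two uncorrelated sources at -20 and +30 degrees, 8-element ULA.
rng = np.random.default_rng(0)
M, N = 8, 2000
doas = np.radians([-20, 30])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(doas)))
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
print(music_doa(X, 2))   # peaks near -20 and +30 degrees
```

The eigendecomposition and the pseudospectrum scan are the two stages that a pipelined hardware architecture would parallelize.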

  13. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete-Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of subconvolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than by the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
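
The DFT-IDFT overlap-and-save method at the core of these architectures can be sketched in a few lines. The version below is a serial software illustration with no subfilter parallelization; the FFT size and filter are illustrative.

```python
import numpy as np

def overlap_save(x, h, nfft=256):
    """Linear convolution of x with FIR h via DFT/IDFT overlap-and-save."""
    m = len(h)
    step = nfft - (m - 1)                  # new output samples per block
    H = np.fft.fft(h, nfft)                # frequency-domain filter
    x_pad = np.concatenate([np.zeros(m - 1), x, np.zeros(step)])
    out = []
    for k in range(0, len(x), step):
        block = x_pad[k:k + nfft]
        y = np.fft.ifft(np.fft.fft(block, nfft) * H)
        out.append(np.real(y[m - 1:]))     # first m-1 samples are circularly wrapped
    return np.concatenate(out)[:len(x)]

x = np.random.default_rng(1).standard_normal(1000)
h = np.ones(32) / 32                       # simple moving-average FIR
ref = np.convolve(x, h)[:len(x)]           # direct convolution reference
print(np.allclose(overlap_save(x, h), ref))   # True
```

Note that `nfft` is chosen for the desired block (processing) rate, independently of the filter order, which is the property the report exploits.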

  14. Open architecture CNC system

    SciTech Connect

    Tal, J.; Lopez, A.; Edwards, J.M.

    1995-04-01

    In this paper, an alternative solution to the traditional CNC machine tool controller has been introduced. Software and hardware modules have been described and their incorporation in a CNC control system has been outlined. This type of CNC machine tool controller demonstrates that technology is accessible and can be readily implemented into an open architecture machine tool controller. The benefit to the user is greater controller flexibility, while being economically achievable. PC-based motion as well as non-motion features will provide flexibility through a Windows environment. Upgrading this type of controller system through software revisions will keep the machine tool in a competitive state with minimal effort. Software and hardware modules are mass produced, permitting competitive procurement and incorporation. Open architecture CNC systems provide diagnostics, thus enhancing maintainability and machine tool up-time. A major concern of traditional CNC systems has been operator training time. Training time can be greatly minimized by making use of Windows environment features.

  15. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of the MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Consequently, verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.
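
One consistency rule of the kind such an approach automates can be sketched as a check that every sequence-diagram message names an operation of the receiver's class. The diagram encoding and all names below are our own invention, not the paper's rule set.

```python
# Hypothetical flattened diagrams: a class diagram as {class: operations},
# and sequence-diagram messages as (receiver class, operation) pairs.
classes = {"Order": {"create", "cancel"}, "Invoice": {"issue"}}
messages = [("Order", "create"), ("Invoice", "issue"), ("Invoice", "void")]

def inconsistent(classes, messages):
    """Return the messages that name no operation of the receiver's class."""
    return [(c, op) for c, op in messages if op not in classes.get(c, set())]

print(inconsistent(classes, messages))   # [('Invoice', 'void')]
```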

  16. Instrumented Architectural Simulation System

    NASA Technical Reports Server (NTRS)

    Delagi, B. A.; Saraiya, N.; Nishimura, S.; Byrd, G.

    1987-01-01

    Simulation of systems at an architectural level can offer an effective way to study critical design choices if (1) the performance of the simulator is adequate to examine designs executing significant code bodies, not just toy problems or small application fragments, (2) the details of the simulation include the critical details of the design, (3) the view of the design presented by the simulator instrumentation leads to useful insights on the problems with the design, and (4) there is enough flexibility in the simulation system so that the asking of unplanned questions is not suppressed by the weight of the mechanics involved in making changes either in the design or its measurement. A simulation system with these goals is described together with the approach to its implementation. Its application to the study of a particular class of multiprocessor hardware system architectures is illustrated.

  17. Generic robot architecture

    SciTech Connect

    Bruemmer, David J; Few, Douglas A

    2010-09-21

    The present invention provides methods, computer readable media, and apparatuses for a generic robot architecture providing a framework that is easily portable to a variety of robot platforms and is configured to provide hardware abstractions, abstractions for generic robot attributes, environment abstractions, and robot behaviors. The generic robot architecture includes a hardware abstraction level and a robot abstraction level. The hardware abstraction level is configured for developing hardware abstractions that define, monitor, and control hardware modules available on a robot platform. The robot abstraction level is configured for defining robot attributes and provides a software framework for building robot behaviors from the robot attributes. Each of the robot attributes includes hardware information from at least one hardware abstraction. In addition, each robot attribute is configured to substantially isolate the robot behaviors from the at least one hardware abstraction.
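
The two-level pattern described in the abstract, hardware abstractions below and robot attributes with behaviors above, can be sketched as follows. The class and method names are illustrative only, not the patent's actual interfaces.

```python
from abc import ABC, abstractmethod

class RangeSensor(ABC):
    """Hardware abstraction: defines and monitors one hardware module."""
    @abstractmethod
    def distance_m(self) -> float: ...

class FakeSonar(RangeSensor):
    """Stand-in driver; a real platform would wrap its own sonar here."""
    def __init__(self, reading: float):
        self._reading = reading
    def distance_m(self) -> float:
        return self._reading

class ObstacleAttribute:
    """Robot abstraction: an attribute built on a hardware abstraction,
    isolating behaviors from the underlying device."""
    def __init__(self, sensor: RangeSensor, threshold_m: float = 0.5):
        self.sensor, self.threshold = sensor, threshold_m
    def blocked(self) -> bool:
        return self.sensor.distance_m() < self.threshold

def avoid_behavior(attr: ObstacleAttribute) -> str:
    """Behavior uses only the attribute, never the hardware directly."""
    return "turn" if attr.blocked() else "forward"

print(avoid_behavior(ObstacleAttribute(FakeSonar(0.3))))   # turn
print(avoid_behavior(ObstacleAttribute(FakeSonar(2.0))))   # forward
```

Porting to a new platform then means writing a new `RangeSensor` subclass; attributes and behaviors are untouched, which is the portability claim of the architecture.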

  18. Systems Architecture for a Nationwide Healthcare System.

    PubMed

    Abin, Jorge; Nemeth, Horacio; Friedmann, Ignacio

    2015-01-01

    From a national level to give Internet technology support, the Nationwide Integrated Healthcare System in Uruguay requires a model of Information Systems Architecture. This system has multiple healthcare providers (public and private), and a strong component of supplementary services. Thus, the data processing system should have an architecture that considers this fact, while integrating the central services provided by the Ministry of Public Health. The national electronic health record, as well as other related data processing systems, should be based on this architecture. The architecture model described here conceptualizes a federated framework of electronic health record systems, according to the IHE affinity model, HL7 standards, local standards on interoperability and security, as well as technical advice provided by AGESIC. It is the outcome of the research done by AGESIC and Systems Integration Laboratory (LINS) on the development and use of the e-Government Platform since 2008, as well as the research done by the team Salud.uy since 2013. PMID:26262000

  19. Cell wall peptidoglycan architecture in Bacillus subtilis

    PubMed Central

    Hayhurst, Emma J.; Kailas, Lekshmi; Hobbs, Jamie K.; Foster, Simon J.

    2008-01-01

    The bacterial cell wall is essential for viability and shape determination. Cell wall structural dynamics allowing growth and division, while maintaining integrity is a basic problem governing the life of bacteria. The polymer peptidoglycan is the main structural component for most bacteria and is made up of glycan strands that are cross-linked by peptide side chains. Despite study and speculation over many years, peptidoglycan architecture has remained largely elusive. Here, we show that the model rod-shaped bacterium Bacillus subtilis has glycan strands up to 5 μm, longer than the cell itself and 50 times longer than previously proposed. Atomic force microscopy revealed the glycan strands to be part of a peptidoglycan architecture allowing cell growth and division. The inner surface of the cell wall has a regular macrostructure with ≈50 nm-wide peptidoglycan cables [average 53 ± 12 nm (n = 91)] running basically across the short axis of the cell. Cross striations with an average periodicity of 25 ± 9 nm (n = 96) along each cable are also present. The fundamental cabling architecture is also maintained during septal development as part of cell division. We propose a coiled-coil model for peptidoglycan architecture encompassing our data and recent evidence concerning the biosynthetic machinery for this essential polymer. PMID:18784364

  20. Cell wall peptidoglycan architecture in Bacillus subtilis.

    PubMed

    Hayhurst, Emma J; Kailas, Lekshmi; Hobbs, Jamie K; Foster, Simon J

    2008-09-23

    The bacterial cell wall is essential for viability and shape determination. Cell wall structural dynamics allowing growth and division, while maintaining integrity is a basic problem governing the life of bacteria. The polymer peptidoglycan is the main structural component for most bacteria and is made up of glycan strands that are cross-linked by peptide side chains. Despite study and speculation over many years, peptidoglycan architecture has remained largely elusive. Here, we show that the model rod-shaped bacterium Bacillus subtilis has glycan strands up to 5 microm, longer than the cell itself and 50 times longer than previously proposed. Atomic force microscopy revealed the glycan strands to be part of a peptidoglycan architecture allowing cell growth and division. The inner surface of the cell wall has a regular macrostructure with approximately 50 nm-wide peptidoglycan cables [average 53 +/- 12 nm (n = 91)] running basically across the short axis of the cell. Cross striations with an average periodicity of 25 +/- 9 nm (n = 96) along each cable are also present. The fundamental cabling architecture is also maintained during septal development as part of cell division. We propose a coiled-coil model for peptidoglycan architecture encompassing our data and recent evidence concerning the biosynthetic machinery for this essential polymer. PMID:18784364

  1. Aerobot Autonomy Architecture

    NASA Technical Reports Server (NTRS)

    Elfes, Alberto; Hall, Jeffery L.; Kulczycki, Eric A.; Cameron, Jonathan M.; Morfopoulos, Arin C.; Clouse, Daniel S.; Montgomery, James F.; Ansar, Adnan I.; Machuzak, Richard J.

    2009-01-01

    An architecture for autonomous operation of an aerobot (i.e., a robotic blimp) to be used in scientific exploration of planets and moons in the Solar system with an atmosphere (such as Titan and Venus) is undergoing development. This architecture is also applicable to autonomous airships that could be flown in the terrestrial atmosphere for scientific exploration, military reconnaissance and surveillance, and as radio-communication relay stations in disaster areas. The architecture was conceived to satisfy requirements to perform the following functions: a) Vehicle safing, that is, ensuring the integrity of the aerobot during its entire mission, including during extended communication blackouts. b) Accurate and robust autonomous flight control during operation in diverse modes, including launch, deployment of scientific instruments, long traverses, hovering or station-keeping, and maneuvers for touch-and-go surface sampling. c) Mapping and self-localization in the absence of a global positioning system. d) Advanced recognition of hazards and targets in conjunction with tracking of, and visual servoing toward, targets, all to enable the aerobot to detect and avoid atmospheric and topographic hazards and to identify, home in on, and hover over predefined terrain features or other targets of scientific interest. The architecture is an integrated combination of systems for accurate and robust vehicle and flight trajectory control; estimation of the state of the aerobot; perception-based detection and avoidance of hazards; monitoring of the integrity and functionality ("health") of the aerobot; reflexive safing actions; multi-modal localization and mapping; autonomous planning and execution of scientific observations; and long-range planning and monitoring of the mission of the aerobot. The prototype JPL aerobot (see figure) has been tested extensively in various areas in the California Mojave desert.

  2. Architectural Methodology Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    The establishment of conventions between two communicating entities in the end systems is essential for communications. Examples of the kind of decisions that need to be made in establishing a protocol convention include the nature of the data representation, the format and the speed of the data representation over the communications path, and the sequence of control messages (if any) which are sent. One of the main functions of a protocol is to establish a standard path between the communicating entities. This is necessary to create a virtual communications medium with certain desirable characteristics. In essence, it is the function of the protocol to transform the characteristics of the physical communications environment into a more useful virtual communications model. The final function of a protocol is to establish standard data elements for communications over the path; that is, the protocol serves to create a virtual data element for exchange. Other systems may be constructed in which the transferred element is a program or a job. Finally, there are special purpose applications in which the element to be transferred may be a complex structure such as all or part of a graphic display. NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to describe the methodologies used in developing a protocol architecture for an in-space Internet node. The node would support NASA's four mission areas: Earth Science; Space Science; Human Exploration and Development of Space (HEDS); Aerospace Technology. This report presents the methodology for developing the protocol architecture. The methodology addresses the architecture for a computer communications environment. It does not address an analog voice architecture.

  3. Information systems definition architecture

    SciTech Connect

    Calapristi, A.J.

    1996-06-20

    The Tank Waste Remediation System (TWRS) Information Systems Definition architecture evaluated Information Management (IM) processes in several key organizations. The intent of the study is to identify improvements in TWRS IM processes that will enable better support of the TWRS mission and accommodate changes in the TWRS business environment. The ultimate goals of the study are to reduce IM costs, manage the configuration of TWRS IM elements, and improve IM-related process performance.

  4. Avionics Architecture Modelling Language

    NASA Astrophysics Data System (ADS)

    Alana, Elena; Naranjo, Hector; Valencia, Raul; Medina, Alberto; Honvault, Christophe; Rugina, Ana; Panunzia, Marco; Dellandrea, Brice; Garcia, Gerald

    2014-08-01

    This paper presents the ESA AAML (Avionics Architecture Modelling Language) study, which aimed at advancing the avionics engineering practices towards a model-based approach by (i) identifying and prioritising the avionics-relevant analyses, (ii) specifying the modelling language features necessary to support the identified analyses, and (iii) recommending/prototyping software tooling to demonstrate the automation of the selected analyses based on a modelling language and compliant with the defined specification.

  5. Migraine and Common Morbidities

    MedlinePlus

    For many patients, migraine is ...

  6. Common Cause Failure Modeling

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Britton, Paul; Ring, Robert; Novack, Steven

    2015-01-01

    Space Launch System (SLS) Agenda: Objective; Key Definitions; Calculating Common Cause; Examples; Defense against Common Cause; Impact of varied Common Cause Failure (CCF) and abortability; Response Surface for various CCF Beta; Takeaways.
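
A standard way to quantify common cause failure is the beta-factor model, in which a fraction beta of each component's failure probability is shared by all redundant channels. The sketch below uses that generic model with illustrative numbers, not SLS values.

```python
def two_channel_failure(p, beta):
    """P(both channels of a redundant pair fail), beta-factor model.

    Each channel fails independently with probability (1 - beta) * p;
    a common cause fails both together with probability beta * p.
    """
    p_ind = (1 - beta) * p
    p_ccf = beta * p
    return p_ccf + (1 - p_ccf) * p_ind ** 2

p = 1e-3   # hypothetical single-channel failure probability
print(f"beta=0    -> {two_channel_failure(p, 0.0):.2e}")   # 1.00e-06
print(f"beta=0.05 -> {two_channel_failure(p, 0.05):.2e}")  # 5.09e-05
```

Even a small beta dominates the pair's failure probability, which is why varying the CCF beta (as in the agenda above) matters so much for redundancy credit.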

  7. DAsHER CD: Developing a Data-Oriented Human-Centric Enterprise Architecture for EarthCube

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Yu, M.; Sun, M.; Qin, H.; Robinson, E.

    2015-12-01

    One of the biggest challenges facing Earth scientists is resource discovery, access, and sharing in a desired fashion. EarthCube is targeted to enable geoscientists to address these challenges by fostering community-governed efforts that develop a common cyberinfrastructure for the purpose of collecting, accessing, analyzing, sharing and visualizing all forms of data and related resources, through the use of advanced technological and computational capabilities. Here we design an Enterprise Architecture (EA) for EarthCube to facilitate the knowledge management, communication and human collaboration in pursuit of the unprecedented data sharing across the geosciences. The design results will provide EarthCube a reference framework for developing geoscience cyberinfrastructure in collaboration with different stakeholders, and for identifying topics which should invoke high interest in the community. The development of this EarthCube EA framework leverages popular frameworks, such as Zachman, Gartner, DoDAF, and FEAF. The science driver of this design is the needs of the EarthCube community, including the analyzed user requirements from EarthCube End User Workshop reports and EarthCube working group roadmaps, and feedback or comments from scientists obtained by organizing workshops. The final product of this Enterprise Architecture is a four-volume reference document: 1) Volume one is this document and comprises an executive summary of the EarthCube architecture, serving as an overview in the initial phases of architecture development; 2) Volume two is the major body of the design product. It outlines all the architectural design components or viewpoints; 3) Volume three provides a taxonomy of the EarthCube enterprise augmented with semantic relations; 4) Volume four describes an example of utilizing this architecture for a geoscience project.

  8. Modular robotic architecture

    NASA Astrophysics Data System (ADS)

    Smurlo, Richard P.; Laird, Robin T.

    1991-03-01

    The development of control architectures for mobile systems is typically a task undertaken with each new application. These architectures address different operational needs and tend to be difficult to adapt to more than the problem at hand. The development of a flexible and extendible control system with evolutionary growth potential for use on mobile robots will help alleviate these problems and, if made widely available, will promote standardization and compatibility among systems throughout the industry. The Modular Robotic Architecture (MRA) is a generic control system that meets the above needs by providing developers with a standard set of software and hardware tools that can be used to design modular robots (MODBOTs) with nearly unlimited growth potential. The MODBOT itself is a generic creature that must be customized by the developer for a particular application. The MRA facilitates customization of the MODBOT by providing sensor, actuator, and processing modules that can be configured in almost any manner as demanded by the application. The Mobile Security Robot (MOSER) is an instance of a MODBOT that is being developed using the MRA. [Figure 1. Remote platform module configuration of the Mobile Security Robot (MOSER).]

  9. Quantifying Loopy Network Architectures

    PubMed Central

    Katifori, Eleni; Magnasco, Marcelo O.

    2012-01-01

    Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs. PMID:22701593
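
One of the tree metrics mentioned, the Strahler order underlying the bifurcation ratios, is computed by a simple recursion on the binary trees that the hierarchical loop decomposition produces. The tree encoding below is our own choice for illustration.

```python
def strahler(t):
    """Strahler order of a binary tree: None is a leaf, internal nodes
    are (left, right) tuples. Two equal-order children raise the order
    by one; otherwise the larger child order propagates up."""
    if t is None:
        return 1
    a, b = strahler(t[0]), strahler(t[1])
    return a + 1 if a == b else max(a, b)

leafpair = (None, None)                                   # two leaves joined: order 2
balanced = ((leafpair, leafpair), (leafpair, leafpair))   # fully nested hierarchy
vine = (None, (None, (None, None)))                       # side-branching chain
print(strahler(balanced), strahler(vine))                 # 4 2
```

High orders indicate deep hierarchical nesting of loops, low orders a chain-like (non-hierarchical) organization, which is the distinction the paper's asymmetry and bifurcation-ratio metrics exploit.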

  10. Robust Software Architecture for Robots

    NASA Technical Reports Server (NTRS)

    Aghazanian, Hrand; Baumgartner, Eric; Garrett, Michael

    2009-01-01

    Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and software that embodies the architecture. The architecture was conceived in the spirit of current practice in designing modular, hard, realtime aerospace systems. The architecture facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.

  11. Architecture of Chinese Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Cui, Chen-Zhou; Zhao, Yong-Heng

    2004-06-01

    Virtual Observatory (VO) is brought forward under the background of progresses of astronomical technologies and information technologies. VO architecture design embodies the combination of above two technologies. As an introduction of VO, principle and workflow of Virtual Observatory are given firstly. Then the latest progress on VO architecture is introduced. Based on the Grid technology, layered architecture model and service-oriented architecture model are given for Chinese Virtual Observatory. In the last part of the paper, some problems on architecture design are discussed in detail.

  12. Novel Payload Architectures for LISA

    NASA Astrophysics Data System (ADS)

    Johann, Ulrich A.; Gath, Peter F.; Holota, Wolfgang; Schulte, Hans Reiner; Weise, Dennis

    2006-11-01

    As part of the current LISA Mission Formulation Study, and based on prior internal investigations, Astrium Germany has defined and preliminarily assessed novel payload architectures, potentially reducing overall complexity and improving budgets and costs. A promising concept is characterized by a single active inertial sensor attached to a single optical bench and serving both adjacent interferometer arms via two rigidly connected off-axis telescopes. The in-plane triangular constellation ``breathing angle'' compensation is accomplished by common telescope in-field-of-view pointing actuation of the transmit/received beam line of sight. A dedicated actuation mechanism located on the optical bench is required in addition to the on-bench actuators for differential pointing of the transmit and receive direction perpendicular to the constellation plane. Both actuators operate in a sinusoidal yearly period. A technical challenge is the actuation mechanism pointing jitter and the monitoring and calibration of the laser phase walk which occurs while changing the optical path inside the optical assembly during re-pointing. Calibration or monitoring of instrument-internal phase effects, e.g. by a laser metrology truss derived from the existing interferometry, is required. The architecture exploits in full the two-step interferometry (strap-down) concept, separating functionally the inter-spacecraft and intra-spacecraft interferometry (reference mass laser metrology degrees-of-freedom sensing). The single test mass is maintained as cubic, but in free-fall in the lateral degrees of freedom within the constellation plane. Also the option of a completely free spherical test mass with full laser interferometer readout has been conceptually investigated. The spherical test mass would rotate slowly, and would be allowed to tumble. Imperfections in roundness and density would be calibrated from differential wave front sensing in a tetrahedral arrangement, supported by added attitude

  13. HYDRA : High-speed simulation architecture for precision spacecraft formation simulation

    NASA Technical Reports Server (NTRS)

    Martin, Bryan J.; Sohl, Garett

    2003-01-01

    HYDRA, the Hierarchical Distributed Reconfigurable Architecture, is a scalable simulation architecture that provides flexibility and ease of use while taking advantage of modern computation and communication hardware. It also provides the ability to implement distributed (workstation-based) simulations and high-fidelity real-time simulation from a common core. Originally designed to serve as a research platform for examining fundamental challenges in formation-flying simulation for future space missions, it is also finding use in other missions and applications, all of which can take advantage of the underlying object-oriented structure to easily produce distributed simulations. Hydra automates the process of connecting disparate simulation components (Hydra Clients) through a client-server architecture that uses high-level descriptions of the data associated with each client to find and forge desirable connections (Hydra Services) at run time. Services communicate through the use of Connectors, which abstract messaging to provide single-interface access to any desired communication protocol, from shared-memory message passing to TCP/IP to ACE and CORBA. Hydra shares many features with the HLA, while providing more flexibility in connectivity services and behavior overriding.
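
The Connector idea described in the abstract can be sketched as a single send/receive interface behind which the transport is chosen at run time. The sketch below is our own minimal illustration using an in-process queue, not Hydra's actual API.

```python
import queue

class Connector:
    """Single messaging interface; subclasses supply the transport."""
    def send(self, msg):
        raise NotImplementedError
    def recv(self):
        raise NotImplementedError

class SharedMemoryConnector(Connector):
    """In-process transport; a TCP/IP or CORBA connector would expose
    exactly the same two methods, so services never see the difference."""
    def __init__(self):
        self._q = queue.Queue()
    def send(self, msg):
        self._q.put(msg)
    def recv(self):
        return self._q.get()

link = SharedMemoryConnector()          # transport selected at run time
link.send({"state": [0.0, 1.0]})
print(link.recv())                      # {'state': [0.0, 1.0]}
```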

  14. Describing the genetic architecture of epilepsy through heritability analysis.

    PubMed

    Speed, Doug; O'Brien, Terence J; Palotie, Aarno; Shkura, Kirill; Marson, Anthony G; Balding, David J; Johnson, Michael R

    2014-10-01

    Epilepsy is a disease with substantial missing heritability; despite its high genetic component, genetic association studies have had limited success detecting common variants which influence susceptibility. In this paper, we reassess the role of common variants on epilepsy using extensions of heritability analysis. Our data set consists of 1258 UK patients with epilepsy, of which 958 have focal epilepsy, and 5129 population control subjects, with genotypes recorded for over 4 million common single nucleotide polymorphisms. Firstly, we show that on the liability scale, common variants collectively explain at least 26% (standard deviation 5%) of phenotypic variation for all epilepsy and 27% (standard deviation 5%) for focal epilepsy. Secondly we provide a new method for estimating the number of causal variants for complex traits; when applied to epilepsy, our most optimistic estimate suggests that at least 400 variants influence disease susceptibility, with potentially many thousands. Thirdly, we use bivariate analysis to assess how similar the genetic architecture of focal epilepsy is to that of non-focal epilepsy; we demonstrate both significant differences (P = 0.004) and significant similarities (P = 0.01) between the two subtypes, indicating that although the clinical definition of focal epilepsy does identify a genetically distinct epilepsy subtype, there is also scope to improve the classification of epilepsy by incorporating genotypic information. Lastly, we investigate the potential value in using genetic data to diagnose epilepsy following a single epileptic seizure; we find that a prediction model explaining 10% of phenotypic variation could have clinical utility for deciding which single-seizure individuals are likely to benefit from immediate anti-epileptic drug therapy. PMID:25063994

  15. New Teacher Induction Programs in Georgia: Common Components and Perceptions

    ERIC Educational Resources Information Center

    McDaniel, Andrea Marshall

    2012-01-01

    With increasing demands on teachers, retaining new teachers has become more difficult in recent decades. New teacher induction programs appear to increase retention rates significantly among new teachers. Many states, including Georgia, have implemented induction programs to support and retain beginning teachers. In response to the Race to the Top…

  16. System Architectural Considerations on Reliable Guidance, Navigation, and Control (GN and C) for Constellation Program (CxP) Spacecraft

    NASA Technical Reports Server (NTRS)

    Dennehy, Cornelius J.

    2010-01-01

    This final report summarizes the results of a comparative assessment of the fault tolerance and reliability of different Guidance, Navigation and Control (GN&C) architectural approaches. This study was proactively performed by a combined Massachusetts Institute of Technology (MIT) and Draper Laboratory team as a GN&C "Discipline-Advancing" activity sponsored by the NASA Engineering and Safety Center (NESC). This systematic comparative assessment of GN&C system architectural approaches was undertaken as a fundamental step towards understanding the opportunities for, and limitations of, architecting highly reliable and fault tolerant GN&C systems composed of common avionic components. The primary goal of this study was to obtain architectural 'rules of thumb' that could positively influence future designs in the direction of an optimized (i.e., most reliable and cost-efficient) GN&C system. A secondary goal was to demonstrate the application and the utility of a systematic modeling approach that maps the entire possible architecture solution space.
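
The reliability gains (and limits) of architecting a GN&C string from redundant common components can be illustrated with the standard k-of-n formula, assuming independent failures with no common cause. The component reliability below is illustrative, not a value from the study.

```python
from math import comb

def k_of_n(r, k, n):
    """Probability that at least k of n independent components, each with
    reliability r, are working: the k-of-n redundancy formula."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

r = 0.99   # hypothetical single-component reliability
print(f"single : {r:.6f}")
print(f"2-of-3 : {k_of_n(r, 2, 3):.6f}")   # triplex with majority voting
print(f"1-of-2 : {k_of_n(r, 1, 2):.6f}")   # dual redundant
```

Any common-cause coupling between the channels caps these gains, which is why the fault-tolerance comparison across architectural approaches matters.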

  17. Capital Architecture: Situating symbolism parallel to architectural methods and technology

    NASA Astrophysics Data System (ADS)

    Daoud, Bassam

    Capital Architecture is a symbol of a nation's global presence and the cultural and social focal point of its inhabitants. Since the advent of High-Modernism in Western cities, and subsequently decolonised capitals, civic architecture no longer seems to be strictly grounded in the philosophy that national buildings shape the legacy of government and the way a nation is regarded through its built environment. Amidst an exceedingly globalized architectural practice and with the growing concern of key heritage foundations over the shortcomings of international modernism in representing its immediate socio-cultural context, the contextualization of public architecture within its sociological, cultural and economic framework in capital cities became the key denominator of this thesis. Civic architecture in capital cities is essential to confront the challenges of symbolizing a nation and demonstrating the legitimacy of the government. In today's dominantly secular Western societies, governmental architecture, especially where the seat of political power lies, is the ultimate form of architectural expression in conveying a sense of identity and underlining a nation's status. Departing from these convictions, this thesis investigates the embodied symbolic power, the representative capacity, and the inherent permanence in contemporary architecture, and in its modes of production. Through a vast study on Modern architectural ideals and heritage -- in parallel to methodologies -- the thesis stimulates the future of large scale governmental building practices and aims to identify and index the key constituents that may respond to the lack of representation in civic architecture in capital cities.

  18. Common Career Technical Core: Common Standards, Common Vision for CTE

    ERIC Educational Resources Information Center

    Green, Kimberly

    2012-01-01

    This article provides an overview of the National Association of State Directors of Career Technical Education Consortium's (NASDCTEc) Common Career Technical Core (CCTC), a state-led initiative that was created to ensure that career and technical education (CTE) programs are consistent and high quality across the United States. Forty-two states,…

  19. Components in the Pipeline

    SciTech Connect

    Gorton, Ian; Wynne, Adam S.; Liu, Yan; Yin, Jian

    2011-02-24

    Scientists commonly describe their data processing systems metaphorically as software pipelines. These pipelines input one or more data sources and apply a sequence of processing steps to transform the data and create useful results. While conceptually simple, pipelines often adopt complex topologies and must meet stringent quality of service requirements that place stress on the software infrastructure used to construct the pipeline. In this paper we describe the MeDICi Integration Framework, which is a component-based framework for constructing complex software pipelines. The framework supports composing pipelines from distributed heterogeneous software components and provides mechanisms for controlling qualities of service to meet demanding performance, reliability and communication requirements.
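    The compositional idea behind such frameworks, a pipeline assembled from interchangeable processing components, can be sketched in a few lines (illustrative names only, not the MeDICi Integration Framework API):

```python
# A minimal component-pipeline sketch: each stage is a callable component,
# and the pipeline applies them in sequence to transform the input data.

class Pipeline:
    def __init__(self):
        self.stages = []

    def add(self, component):
        self.stages.append(component)
        return self  # allow chained composition

    def run(self, data):
        for stage in self.stages:
            data = stage(data)
        return data

# Example: a three-stage processing pipeline.
p = Pipeline().add(str.strip).add(str.upper).add(lambda s: s.split())
print(p.run("  raw sensor record  "))  # -> ['RAW', 'SENSOR', 'RECORD']
```

    A real framework adds what this sketch omits: distributed components, heterogeneous languages, and quality-of-service control over each hop.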

  20. Component separations.

    PubMed

    Heller, Lior; McNichols, Colton H; Ramirez, Oscar M

    2012-02-01

    Component separation is a technique used to provide adequate coverage for midline abdominal wall defects such as a large ventral hernia. This surgical technique is based on subcutaneous lateral dissection, fasciotomy lateral to the rectus abdominis muscle, and dissection on the plane between external and internal oblique muscles with medial advancement of the block that includes the rectus muscle and its fascia. This release allows for medial advancement of the fascia and closure of defects up to 20 cm wide in the midline area. Since its original description, the component separation technique has undergone multiple modifications with the ultimate goal of decreasing the morbidity associated with the traditional procedure. The extensive subcutaneous lateral dissection has been associated with ischemia of the midline skin edges, wound dehiscence, infection, and seroma. Although the current trend is to proceed with minimally invasive component separation and to reinforce the fascia with mesh, the basic principles of the technique as described by Ramirez et al in 1990 have not changed over the years. Surgeons who deal with the management of abdominal wall defects are highly encouraged to include this technique in their collection of treatment options. PMID:23372455

  1. Hyperfrequency components

    NASA Astrophysics Data System (ADS)

    1994-09-01

    The document collects 19 papers (11 on technologies, 8 on applications) by 26 authors and coauthors. Technological topics include: evolution from conventional HEMTs to double-heterojunction and planar types of pseudomorphic HEMTs; MMIC R&D and production aspects for very-low-noise, low-power, and very-low-noise, high-power applications; hyperfrequency CAD tools; parametric measurements of hyperfrequency components on plug-in cards for design and in-process testing uses; design of Class B power amplifiers and millimetric-wave, bigrid-transistor mixers, exemplifying combined use of three major types of physical simulation in electrical modeling of microwave components; FETs for power amplification at up to 110 GHz; and production, characterization, and nonlinear applications of resonant tunnel diodes. Application topics include: development of active modules for major European programs; tubes versus solid-state components in hyperfrequency applications; status and potentialities of national and international cooperative R&D on MMICs and CAD of hyperfrequency circuitry; attainable performance levels in multifunction MMIC applications; the state of the art of MESFET power amplifiers (Bands S, C, X, and Ku); creation of a hyperfrequency function library of parametrizable reference cells or macrocells; and design of a single-stage, low-noise, band-W amplifier toward development of a three-stage amplifier.

  2. A Software Architecture for High Level Applications

    SciTech Connect

    Shen, G.

    2009-05-04

    A modular software platform for high level applications is under development at the National Synchrotron Light Source II project. This platform is based on a client-server architecture, and the components of high level applications on this platform will be modular, distributed, and therefore reusable. An online model server is indispensable for model-based control. Different accelerator facilities have different requirements for online simulation. To accommodate various accelerator simulators, a set of narrow and general application programming interfaces has been developed based on Tracy-3 and Elegant. This paper describes the system architecture for the modular high level applications, the design of the narrow and general application programming interfaces for an online model server, and the prototype of the online model server.

  3. Healthy Eating Design Guidelines for School Architecture

    PubMed Central

    Huang, Terry T-K; Sorensen, Dina; Davis, Steven; Frerichs, Leah; Brittin, Jeri; Celentano, Joseph; Callahan, Kelly

    2013-01-01

    We developed a new tool, Healthy Eating Design Guidelines for School Architecture, to provide practitioners in architecture and public health with a practical set of spatially organized and theory-based strategies for making school environments more conducive to learning about and practicing healthy eating by optimizing physical resources and learning spaces. The design guidelines, developed through multidisciplinary collaboration, cover 10 domains of the school food environment (eg, cafeteria, kitchen, garden) and 5 core healthy eating design principles. A school redesign project in Dillwyn, Virginia, used the tool to improve the schools’ ability to adopt a healthy nutrition curriculum and promote healthy eating. The new tool, now in a pilot version, is expected to evolve as its components are tested and evaluated through public health and design research. PMID:23449281

  4. A resource management architecture for metacomputing systems.

    SciTech Connect

    Czajkowski, K.; Foster, I.; Karonis, N.; Kesselman, C.; Martin, S.; Smith, W.; Tuecke, S.

    1999-08-24

    Metacomputing systems are intended to support remote and/or concurrent use of geographically distributed computational resources. Resource management in such systems is complicated by five concerns that do not typically arise in other situations: site autonomy and heterogeneous substrates at the resources, and application requirements for policy extensibility, co-allocation, and online control. We describe a resource management architecture that addresses these concerns. This architecture distributes the resource management problem among distinct local manager, resource broker, and resource co-allocator components, and defines an extensible resource specification language to exchange information about requirements. We describe how these techniques have been implemented in the context of the Globus metacomputing toolkit and used to implement a variety of different resource management strategies. We report on our experiences applying our techniques in a large testbed, GUSTO, incorporating 15 sites, 330 computers, and 3600 processors.
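    The brokering step, matching an application's resource specification against what each site advertises, can be illustrated with a toy matcher (the dictionaries and field names are illustrative, not the actual Globus RSL syntax):

```python
# Toy resource-broker sketch: return the advertised resources that satisfy
# every minimum requirement in a job's resource specification.

resources = [
    {"site": "anl", "cpus": 64, "memory_gb": 128},
    {"site": "isi", "cpus": 16, "memory_gb": 32},
]

def broker(spec, pool):
    """Select resources meeting every (key, minimum) requirement in `spec`."""
    return [r for r in pool
            if all(r.get(key, 0) >= minimum for key, minimum in spec.items())]

print(broker({"cpus": 32, "memory_gb": 64}, resources))  # matches only the 64-CPU site
```

    A co-allocator would extend this by reserving several matching resources simultaneously; an extensible specification language lets new requirement keys be added without changing the broker.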

  5. The flight telerobotic servicer: From functional architecture to computer architecture

    NASA Technical Reports Server (NTRS)

    Lumia, Ronald; Fiala, John

    1989-01-01

    After a brief tutorial on the NASA/National Bureau of Standards Standard Reference Model for Telerobot Control System Architecture (NASREM) functional architecture, the approach to its implementation is shown. First, interfaces must be defined which are capable of supporting the known algorithms. This is illustrated by considering the interfaces required for the SERVO level of the NASREM functional architecture. After interface definition, the specific computer architecture for the implementation must be determined. This choice is obviously technology dependent. An example illustrating one possible mapping of the NASREM functional architecture to a particular set of computers which implements it is shown. The result of choosing the NASREM functional architecture is that it provides a technology independent paradigm which can be mapped into a technology dependent implementation capable of evolving with technology in the laboratory and in space.

  6. Satellite ATM Networks: Architectures and Guidelines Developed

    NASA Technical Reports Server (NTRS)

    vonDeak, Thomas C.; Yegendu, Ferit

    1999-01-01

    An important element of satellite-supported asynchronous transfer mode (ATM) networking will involve support for the routing and rerouting of active connections. Work published under the auspices of the Telecommunications Industry Association (http://www.tiaonline.org) describes basic architectures and routing protocol issues for satellite ATM (SATATM) networks. The architectures and issues identified will serve as a basis for further development of technical specifications for these SATATM networks. Three ATM network architectures for bent pipe satellites and three ATM network architectures for satellites with onboard ATM switches were developed. The architectures differ from one another in terms of required level of mobility, supported data rates, supported terrestrial interfaces, and onboard processing and switching requirements. The documentation addresses low-, middle-, and geosynchronous-Earth-orbit satellite configurations. The satellite environment may require real-time routing to support the mobility of end devices and nodes of the ATM network itself. This requires the network to be able to reroute active circuits in real time. In addition to supporting mobility, rerouting can also be used to (1) optimize network routing, (2) respond to changing quality-of-service requirements, and (3) provide a fault tolerance mechanism. Traffic management and control functions are necessary in ATM to ensure that the quality-of-service requirements associated with each connection are not violated and also to provide flow and congestion control functions. Functions related to traffic management were identified and described. Most of these traffic management functions will be supported by on-ground ATM switches, but in a hybrid terrestrial-satellite ATM network, some of the traffic management functions may have to be supported by the onboard satellite ATM switch.
Future work is planned to examine the tradeoffs of placing traffic management functions onboard a satellite as

  7. An open architecture for medical image workstation

    NASA Astrophysics Data System (ADS)

    Liang, Liang; Hu, Zhiqiang; Wang, Xiangyun

    2005-04-01

    To deal with the difficulties of integrating various medical image viewing and processing technologies with a variety of clinical and departmental information systems and, at the same time, to overcome the performance constraints in transferring and processing large-scale and ever-increasing image data in the healthcare enterprise, we design and implement a flexible, usable and high-performance architecture for medical image workstations. This architecture is not developed for radiology only, but for any workstation in any application environment that may need medical image retrieving, viewing, and post-processing. This architecture contains an infrastructure named Memory PACS and different kinds of image applications built on it. The Memory PACS is in charge of image data caching, pre-fetching and management. It provides image applications with high speed image data access and a very reliable DICOM network I/O. In dealing with the image applications, we use dynamic component technology to separate the performance-constrained modules from the flexibility-constrained modules so that different image viewing or processing technologies can be developed and maintained independently. We also develop a weakly coupled collaboration service, through which these image applications can communicate with each other or with third party applications. We applied this architecture in developing our product line and it works well. In our clinical sites, this architecture is applied not only in the Radiology Department, but also in Ultrasonic, Surgery, Clinics, and Consultation Center. Given that each concerned department has its particular requirements and business routines, along with the fact that they all have different image processing technologies and image display devices, our workstations are still able to maintain high performance and high usability.
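    The Memory PACS idea of caching plus pre-fetching can be sketched as an LRU image cache that anticipates sequential viewing of a series (the interfaces below are hypothetical, not the system's actual API):

```python
from collections import OrderedDict

class ImageCache:
    """In-memory image cache with LRU eviction and sequential pre-fetch."""

    def __init__(self, loader, capacity=100, prefetch=2):
        self.loader = loader          # fetches one image by index (e.g. over DICOM)
        self.capacity = capacity
        self.prefetch = prefetch
        self.cache = OrderedDict()    # index -> image, in LRU order

    def _store(self, idx):
        if idx not in self.cache:
            self.cache[idx] = self.loader(idx)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict least recently used

    def get(self, idx):
        self._store(idx)
        self.cache.move_to_end(idx)              # mark as recently used
        for n in range(idx + 1, idx + 1 + self.prefetch):
            self._store(n)                       # anticipate sequential viewing
        return self.cache[idx]
```

    Requesting image 3 here also warms images 4 and 5, so a viewer paging through a series rarely waits on the network.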

  8. Architectural Implications for Spatial Object Association Algorithms

    SciTech Connect

    Kumar, V S; Kurc, T; Saltz, J; Abdulla, G; Kohn, S R; Matarazzo, C

    2009-01-29

    Spatial object association, also referred to as cross-match of spatial datasets, is the problem of identifying and comparing objects in two or more datasets based on their positions in a common spatial coordinate system. In this work, we evaluate two crossmatch algorithms that are used for astronomical sky surveys, on the following database system architecture configurations: (1) Netezza Performance Server®, a parallel database system with active disk style processing capabilities, (2) MySQL Cluster, a high-throughput network database system, and (3) a hybrid configuration consisting of a collection of independent database system instances with data replication support. Our evaluation provides insights about how architectural characteristics of these systems affect the performance of the spatial crossmatch algorithms. We conducted our study using real use-case scenarios borrowed from a large-scale astronomy application known as the Large Synoptic Survey Telescope (LSST).
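    The core cross-match computation being benchmarked can be sketched with a simple grid index (planar coordinates for brevity; real sky surveys use spherical geometry and push this work into the database):

```python
# Minimal positional cross-match: bucket one catalogue on a grid whose cell
# size equals the match radius, then search only the neighbouring cells.

from collections import defaultdict
from math import hypot

def crossmatch(cat_a, cat_b, radius):
    grid = defaultdict(list)
    for x, y in cat_b:
        grid[(int(x // radius), int(y // radius))].append((x, y))
    matches = []
    for x, y in cat_a:
        cx, cy = int(x // radius), int(y // radius)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for bx, by in grid[(cx + dx, cy + dy)]:
                    if hypot(x - bx, y - by) <= radius:
                        matches.append(((x, y), (bx, by)))
    return matches
```

    The grid turns an all-pairs comparison into a near-linear scan, which is why the underlying storage architecture (active disks, clustered memory, replication) dominates performance at survey scale.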

  9. Architectural Implications for Spatial Object Association Algorithms*

    PubMed Central

    Kumar, Vijay S.; Kurc, Tahsin; Saltz, Joel; Abdulla, Ghaleb; Kohn, Scott R.; Matarazzo, Celeste

    2013-01-01

    Spatial object association, also referred to as crossmatch of spatial datasets, is the problem of identifying and comparing objects in two or more datasets based on their positions in a common spatial coordinate system. In this work, we evaluate two crossmatch algorithms that are used for astronomical sky surveys, on the following database system architecture configurations: (1) Netezza Performance Server®, a parallel database system with active disk style processing capabilities, (2) MySQL Cluster, a high-throughput network database system, and (3) a hybrid configuration consisting of a collection of independent database system instances with data replication support. Our evaluation provides insights about how architectural characteristics of these systems affect the performance of the spatial crossmatch algorithms. We conducted our study using real use-case scenarios borrowed from a large-scale astronomy application known as the Large Synoptic Survey Telescope (LSST). PMID:25692244

  10. Computer graphics in architecture and engineering

    NASA Technical Reports Server (NTRS)

    Greenberg, D. P.

    1975-01-01

    The present status of the application of computer graphics to the building profession or architecture and its relationship to other scientific and technical areas were discussed. It was explained that, due to the fragmented nature of architecture and building activities (in contrast to the aerospace industry), a comprehensive, economic utilization of computer graphics in this area is not practical and its true potential cannot now be realized due to the present inability of architects and structural, mechanical, and site engineers to rely on a common data base. Future emphasis will therefore have to be placed on a vertical integration of the construction process and effective use of a three-dimensional data base, rather than on waiting for any technological breakthrough in interactive computing.

  11. Mind and Language Architecture

    PubMed Central

    Logan, Robert K

    2010-01-01

    A distinction is made between the brain and the mind. The architecture of the mind and language is then described within a neo-dualistic framework. A model for the origin of language based on emergence theory is presented. The complexity of hominid existence due to tool making, the control of fire and the social cooperation that fire required gave rise to a new level of order in mental activity and triggered the simultaneous emergence of language and conceptual thought. The mind is shown to have emerged as a bifurcation of the brain with the emergence of language. The role of language in the evolution of human culture is also described. PMID:20922045

  12. Architecture, constraints, and behavior

    PubMed Central

    Doyle, John C.; Csete, Marie

    2011-01-01

    This paper aims to bridge progress in neuroscience involving sophisticated quantitative analysis of behavior, including the use of robust control, with other relevant conceptual and theoretical frameworks from systems engineering, systems biology, and mathematics. Familiar and accessible case studies are used to illustrate concepts of robustness, organization, and architecture (modularity and protocols) that are central to understanding complex networks. These essential organizational features are hidden during normal function of a system but are fundamental for understanding the nature, design, and function of complex biologic and technologic systems. PMID:21788505

  13. Evolution of genome architecture.

    PubMed

    Koonin, Eugene V

    2009-02-01

    Charles Darwin believed that all traits of organisms have been honed to near perfection by natural selection. The empirical basis underlying Darwin's conclusions consisted of numerous observations made by him and other naturalists on the exquisite adaptations of animals and plants to their natural habitats and on the impressive results of artificial selection. Darwin fully appreciated the importance of heredity but was unaware of the nature and, in fact, the very existence of genomes. A century and a half after the publication of the "Origin", we have the opportunity to draw conclusions from the comparisons of hundreds of genome sequences from all walks of life. These comparisons suggest that the dominant mode of genome evolution is quite different from that of the phenotypic evolution. The genomes of vertebrates, those purported paragons of biological perfection, turned out to be veritable junkyards of selfish genetic elements where only a small fraction of the genetic material is dedicated to encoding biologically relevant information. In sharp contrast, genomes of microbes and viruses are incomparably more compact, with most of the genetic material assigned to distinct biological functions. However, even in these genomes, the specific genome organization (gene order) is poorly conserved. The results of comparative genomics lead to the conclusion that the genome architecture is not a straightforward result of continuous adaptation but rather is determined by the balance between the selection pressure, that is itself dependent on the effective population size and mutation rate, the level of recombination, and the activity of selfish elements. Although genes and, in many cases, multigene regions of genomes possess elaborate architectures that ensure regulation of expression, these arrangements are evolutionarily volatile and typically change substantially even on short evolutionary scales when gene sequences diverge minimally. Thus, the observed genome

  14. Architecture for Teraflop Visualization

    SciTech Connect

    Breckenridge, A.R.; Haynes, R.A.

    1999-04-09

    Sandia Laboratories' computational scientists are addressing a very important question: How do we derive insight from human judgment combined with computer-generated information? The answer inevitably leads to using scientific visualization. Going one technology leap further is teraflop visualization, where the computing model and interactive graphics form an integral whole to provide computing for insight. In order to implement our teraflop visualization architecture, all hardware installed or software coded will be based on open modules and dynamic extensibility principles. We illustrate these concepts with examples in our three main research areas: (1) authoring content (the computer), (2) enhancing precision and resolution (the human), and (3) adding behaviors (the physics).

  15. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    The contents of this book are as follows: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; RELACS - A recursive layout computing system; and Parallel linear conflict-free subtree access.

  16. Etruscan Divination and Architecture

    NASA Astrophysics Data System (ADS)

    Magli, Giulio

    The Etruscan religion was characterized by divination methods, aimed at interpreting the will of the gods. These methods were revealed by the gods themselves and written in the books of the Etrusca Disciplina. The books are lost, but parts of them are preserved in the accounts of later Latin sources. According to such traditions divination was tightly connected with the Etruscan cosmovision of a Pantheon distributed in equally spaced, specific sectors of the celestial realm. We explore here the possible reflections of such issues in the Etruscan architectural remains.

  17. TROPIX Power System Architecture

    NASA Technical Reports Server (NTRS)

    Manner, David B.; Hickman, J. Mark

    1995-01-01

    This document contains results obtained in the process of performing a power system definition study of the TROPIX power management and distribution system (PMAD). Requirements derived from the PMAD's interaction with other spacecraft systems are discussed first. Since the design is dependent on the performance of the photovoltaics, there is a comprehensive discussion of the appropriate models for cells and arrays. A trade study of the array operating voltage and its effect on array bus mass is also presented. A system architecture is developed which makes use of a combination of high efficiency switching power convertors and analog regulators. Mass and volume estimates are presented for all subsystems.

  18. Architecture for robot intelligence

    NASA Technical Reports Server (NTRS)

    Peters, II, Richard Alan (Inventor)

    2004-01-01

    An architecture for robot intelligence enables a robot to learn new behaviors and create new behavior sequences autonomously and interact with a dynamically changing environment. Sensory information is mapped onto a Sensory Ego-Sphere (SES) that rapidly identifies important changes in the environment and functions much like short term memory. Behaviors are stored in a DBAM that creates an active map from the robot's current state to a goal state and functions much like long term memory. A dream state converts recent activities stored in the SES and creates or modifies behaviors in the DBAM.

  19. Molecular architecture requirements for polymer-grafted lignin superplasticizers.

    PubMed

    Gupta, Chetali; Sverdlove, Madeline J; Washburn, Newell R

    2015-04-01

    Superplasticizers are a class of anionic polymer dispersants used to inhibit aggregation in hydraulic cement, lowering the yield stress of cement pastes to improve workability and reduce water requirements. The plant-derived biopolymer lignin is commonly used as a low-cost/low-performance plasticizer, but attempts to improve its effects on cement rheology through copolymerization with synthetic monomers have not led to significant improvements. Here we demonstrate that kraft lignin can form the basis for high-performance superplasticizers in hydraulic cement, but the molecular architecture must be based on a lignin core with a synthetic-polymer corona that can be produced via controlled radical polymerization. Using slump tests of ordinary Portland cement pastes, we show that polyacrylamide-grafted lignin prepared via reversible addition-fragmentation chain transfer polymerization can reduce the yield stress of cement paste to similar levels as a leading commercial polycarboxylate ether superplasticizer at concentrations ten-fold lower, although the lignin material produced via controlled radical polymerization does not appear to reduce the dynamic viscosity of cement paste as effectively as the polycarboxylate superplasticizer, despite having a similar affinity for the individual mineral components of ordinary Portland cement. In contrast, polyacrylamide copolymerized with a methacrylated kraft lignin via conventional free radical polymerization having a similar overall composition did not reduce the yield stress or the viscosity of cement pastes. While further work is required to elucidate the mechanism of this effect, these results indicate that controlling the architecture of polymer-grafted lignin can significantly enhance its performance as a superplasticizer for cement. PMID:25693832

  20. BADD phase II: DDS information management architecture

    NASA Astrophysics Data System (ADS)

    Stephenson, Thomas P.; DeCleene, Brian T.; Speckert, Glen; Voorhees, Harry L.

    1997-06-01

    The DARPA Battlefield Awareness and Data Dissemination (BADD) Phase II Program will provide the next generation multimedia information management architecture to support the warfighter. One goal of this architecture is proactive dissemination of information to the warfighter through strategies such as multicast and 'smart push and pull' designed to minimize latency and make maximum use of available communications bandwidth. Another goal is to support integration of information from widely distributed legacy repositories. This will enable the next generation of battlefield awareness applications to form a common operational view of the battlefield to aid joint service and/or multi-national peacekeeping forces. This paper discusses the approach we are taking to realize such an architecture for BADD. Our architecture and its implementation, known as the Distributed Dissemination Services (DDS), are based on two key concepts: a global database schema and an intelligent, proactive caching scheme. A global schema provides a common logical view of the information space in which the warfighter operates. This schema (or subsets of it) is shared by all warfighters through a distributed object database providing local access to all relevant metadata. This approach provides scalability to a large number of warfighters, and it supports tethered as well as autonomous operations. By utilizing DDS information integration services that provide transparent access to legacy databases, related information from multiple 'stovepipe' systems is now available to battlefield awareness applications. The second key concept embedded in our architecture is an intelligent, hierarchical caching system supported by proactive dissemination management services which push both lightweight and heavyweight data such as imagery and video to warfighters based on their information profiles. The goal of this approach is to transparently and proactively stage data which is likely to be requested by
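    The 'smart push' notion, pushing data to warfighters based on their information profiles, can be sketched as predicate-based matching (the names and interfaces are illustrative; the abstract does not describe DDS at this level of detail):

```python
# Profile-driven dissemination sketch: each subscriber registers a predicate
# over item metadata, and publish() returns who should receive a given item.

class Disseminator:
    def __init__(self):
        self.profiles = {}   # subscriber -> predicate over item metadata

    def subscribe(self, subscriber, predicate):
        self.profiles[subscriber] = predicate

    def publish(self, item):
        """Return the subscribers whose profile matches this item's metadata."""
        return [s for s, match in self.profiles.items() if match(item)]

d = Disseminator()
d.subscribe("unit-a", lambda item: item["type"] == "imagery")
d.subscribe("unit-b", lambda item: item["region"] == "north")
print(d.publish({"type": "imagery", "region": "south"}))  # -> ['unit-a']
```

    A production system would evaluate such profiles at hierarchical caches near the subscribers, so heavyweight data like imagery and video is staged before it is requested.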

  1. Application performance evaluation of the HTMT architecture.

    SciTech Connect

    Hereld, M.; Judson, I. R.; Stevens, R.

    2004-02-23

    In this report we summarize findings from a study of the predicted performance of a suite of application codes taken from the research environment and analyzed against a modeling framework for the HTMT architecture. We find that the inward bandwidth of the data vortex may be a limiting factor for some applications. We also find that available memory in the cryogenic layer is a constraining factor in the partitioning of applications into parcels. In several examples the architecture may be inadequately exploited; in particular, applications typically did not capitalize well on the available computational power or data organizational capability in the PIM layers. The application suite provided significant examples of wide excursions from the accepted (if simplified) program execution model--in particular, by requiring complex in-SPELL synchronization between parcels. The availability of the HTMT-C emulation environment did not contribute significantly to the ability to analyze applications, because of the large gap between the available hardware descriptions and parameters in the modeling framework and the types of data that could be collected via HTMT-C emulation runs. Detailed analysis of application performance, and indeed further credible development of the HTMT-inspired program execution model and system architecture, requires development of much better tools. Chief among them are cycle-accurate simulation tools for computational, network, and memory components. Additionally, there is a critical need for a whole system simulation tool to allow detailed programming exercises and performance tests to be developed. We address three issues in this report: (1) The landscape for applications of petaflops computing; (2) The performance of applications on the HTMT architecture; and (3) The effectiveness of HTMT-C as a tool for studying and developing the HTMT architecture. We set the scene with observations about the course of application development as petaflops

  2. Components for solar energy

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A requirement for the direct technological utilization of solar energy is a device for capturing and absorbing the available sunlight. These devices are commonly termed collectors. Because of the highly variable nature of sunlight, a facility for storing the collected energy is often essential. A device for direct conversion of light into electricity, which depends for operation on incident sunlight, is the photovoltaic cell. These components for solar energy systems are considered.

  3. A Ground Systems Architecture Transition for a Distributed Operations System

    NASA Technical Reports Server (NTRS)

    Sellers, Donna; Pitts, Lee; Bryant, Barry

    2003-01-01

    The Marshall Space Flight Center (MSFC) Ground Systems Department (GSD) recently undertook an architecture change in the product line that serves the ISS program. As a result, the architecture tradeoffs between data system product lines that serve remote users versus those that serve control center flight control teams were explored extensively. This paper describes the resulting architecture that will be used in the International Space Station (ISS) payloads program, and the resulting functional breakdown of the products that support this architecture. It also describes the lessons learned from the path that was followed, as a migration of products caused the need to reevaluate the allocation of functions across the architecture. The result is a set of innovative ground system solutions that is scalable so it can support facilities of wide-ranging sizes, from a small site up to large control centers. Effective use of system automation, custom components, design optimization for data management, data storage, data transmissions, and advanced local and wide area networking architectures, plus the effective use of Commercial-Off-The-Shelf (COTS) products, provides flexible Remote Ground System options that can be tailored to the needs of each user. This paper offers a description of the efficiency and effectiveness of the Ground Systems architectural options that have been implemented, and includes successful implementation examples and lessons learned.

  4. Framework for the Parametric System Modeling of Space Exploration Architectures

    NASA Technical Reports Server (NTRS)

    Komar, David R.; Hoffman, Jim; Olds, Aaron D.; Seal, Mike D., II

    2008-01-01

    This paper presents a methodology for performing architecture definition and assessment prior to, or during, program formulation that utilizes a centralized, integrated architecture modeling framework operated by a small, core team of general space architects. This framework, known as the Exploration Architecture Model for IN-space and Earth-to-orbit (EXAMINE), enables: 1) a significantly larger fraction of an architecture trade space to be assessed in a given study timeframe; and 2) the complex element-to-element and element-to-system relationships to be quantitatively explored earlier in the design process. Discussion of the methodology advantages and disadvantages with respect to the distributed study team approach typically used within NASA to perform architecture studies is presented along with an overview of EXAMINE's functional components and tools. An example Mars transportation system architecture model is used to demonstrate EXAMINE's capabilities in this paper. However, the framework is generally applicable for exploration architecture modeling with destinations to any celestial body in the solar system.
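Frameworks like EXAMINE work by sweeping element-level parameters and propagating them up to system-level figures of merit. As a minimal sketch of that kind of parametric trade-space evaluation (the rocket-equation sizing model and all numbers below are generic illustrations, not EXAMINE's actual models), an engine-Isp trade might look like:

```python
import math

def propellant_mass(dry_mass_kg, delta_v_ms, isp_s, g0=9.80665):
    """Size propellant via the ideal rocket equation:
    m_prop = m_dry * (exp(dv / (Isp * g0)) - 1)."""
    mass_ratio = math.exp(delta_v_ms / (isp_s * g0))
    return dry_mass_kg * (mass_ratio - 1.0)

def sweep(dry_mass_kg, delta_v_ms, isp_options):
    """Evaluate one architecture parameter (engine Isp) across a trade space,
    returning the system-level figure of merit (propellant mass) for each."""
    return {isp: propellant_mass(dry_mass_kg, delta_v_ms, isp) for isp in isp_options}

# Hypothetical Mars-transfer element: 10 t dry mass, 3.8 km/s delta-v budget.
trades = sweep(dry_mass_kg=10_000, delta_v_ms=3_800, isp_options=[320, 380, 450])
```

In a centralized framework the same sweep would be repeated over many coupled element parameters at once, which is what makes a larger fraction of the trade space reachable in a fixed study timeframe.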

  5. Architectures for intelligent machines

    NASA Technical Reports Server (NTRS)

    Saridis, George N.

    1991-01-01

    The theory of intelligent machines has recently been reformulated to incorporate new architectures that use neural and Petri nets. The analytic functions of an intelligent machine are implemented by intelligent controls, using entropy as a measure. The resulting hierarchical control structure is based on the principle of increasing precision with decreasing intelligence. Each of the three levels of the intelligent control uses a different architecture in order to satisfy the requirements of the principle: the organization level is modeled after a Boltzmann machine for abstract reasoning, task planning, and decision making; the coordination level is composed of a number of Petri net transducers supervised, for command exchange, by a dispatcher, which also serves as an interface to the organization level; the execution level includes the sensory, navigation-planning, and control hardware, which interacts one-to-one with the appropriate coordinators, while a VME bus provides a channel for database exchange among the several devices. This system is currently implemented on a robotic transporter, designed for space construction at the CIRSSE laboratories at the Rensselaer Polytechnic Institute. The progress of its development is reported.
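The principle of increasing precision with decreasing intelligence uses entropy as the common measure across levels: the organization level reasons over many abstract alternatives (high uncertainty), while the execution level is nearly deterministic. A minimal sketch of that ordering (the action distributions below are invented purely for illustration):

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)); higher H means more uncertainty in the choice."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical distributions over candidate actions at each level of the
# hierarchy: abstract planning spreads probability widely, execution does not.
levels = {
    "organization": [0.25, 0.25, 0.25, 0.25],  # four equally plausible plans
    "coordination": [0.7, 0.2, 0.1],           # a preferred command sequence
    "execution": [0.99, 0.01],                 # nearly deterministic actuation
}
entropies = {name: shannon_entropy(p) for name, p in levels.items()}
```

Under this toy model, entropy (intelligence demanded) decreases monotonically down the hierarchy while precision increases, which is the ordering the architecture is built to satisfy.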

  6. Autonomous droplet architectures.

    PubMed

    Jones, Gareth; King, Philip H; Morgan, Hywel; de Planque, Maurits R R; Zauner, Klaus-Peter

    2015-01-01

    The quintessential living element of all organisms is the cell-a fluid-filled compartment enclosed, but not isolated, by a layer of amphiphilic molecules that self-assemble at its boundary. Cells of different composition can aggregate and communicate through the exchange of molecules across their boundaries. The astounding success of this architecture is readily apparent throughout the biological world. Inspired by the versatility of nature's architecture, we investigate aggregates of membrane-enclosed droplets as a design concept for robotics. This will require droplets capable of sensing, information processing, and actuation. It will also require the integration of functionally specialized droplets into an interconnected functional unit. Based on results from the literature and from our own laboratory, we argue the viability of this approach. Sensing and information processing in droplets have been the subject of several recent studies, on which we draw. Integrating droplets into coherently acting units and the aspect of controlled actuation for locomotion have received less attention. This article describes experiments that address both of these challenges. Using lipid-coated droplets of Belousov-Zhabotinsky reaction medium in oil, we show here that such droplets can be integrated and that chemically driven mechanical motion can be achieved. PMID:25622015

  7. Modularity and mental architecture.

    PubMed

    Robbins, Philip

    2013-11-01

    Debates about the modularity of cognitive architecture have been ongoing for at least the past three decades, since the publication of Fodor's landmark book The Modularity of Mind. According to Fodor, modularity is essentially tied to informational encapsulation, and as such is only found in the relatively low-level cognitive systems responsible for perception and language. According to Fodor's critics in the evolutionary psychology camp, modularity simply reflects the fine-grained functional specialization dictated by natural selection, and it characterizes virtually all aspects of cognitive architecture, including high-level systems for judgment, decision making, and reasoning. Though both of these perspectives on modularity have garnered support, the current state of evidence and argument suggests that a broader skepticism about modularity may be warranted. WIREs Cogn Sci 2013, 4:641-649. doi: 10.1002/wcs.1255 CONFLICT OF INTEREST: The author has declared no conflicts of interest for this article. For further resources related to this article, please visit the WIREs website. PMID:26304269

  8. Protocol Architecture Model Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to examine protocols and architectures for an In-Space Internet Node. CNS has developed a methodology for network reference models to support NASA's four mission areas: Earth Science, Space Science, Human Exploration and Development of Space (HEDS), and Aerospace Technology. This report applies the methodology to three space Internet-based communications scenarios for future missions. CNS has conceptualized, designed, and developed space Internet-based communications protocols and architectures for each of the independent scenarios. The scenarios are: Scenario 1: Unicast communications between a Low-Earth-Orbit (LEO) spacecraft in-space Internet node and a ground terminal Internet node via a Tracking and Data Relay Satellite (TDRS) transfer; Scenario 2: Unicast communications between a Low-Earth-Orbit (LEO) International Space Station and a ground terminal Internet node via a TDRS transfer; Scenario 3: Multicast Communications (or "Multicasting"), 1 Spacecraft to N Ground Receivers, N Ground Transmitters to 1 Ground Receiver via a Spacecraft.

  9. Evolution of a common controller

    NASA Astrophysics Data System (ADS)

    Powell, D.; Barbour, D.; Gilbreath, G.

    2012-06-01

    Precedent has shown common controllers must strike a balance between the desire for an integrated user interface design by human factors engineers and support of project-specific data requirements. A common user interface requires the project-specific data to conform to an internal representation, but project-specific customization is impeded by the implicit rules introduced by the internal data representation. Space and Naval Warfare Systems Center Pacific (SSC Pacific) developed the latest version of the Multi-robot Operator Control Unit (MOCU) to address interoperability, standardization, and customization issues by using a modular, extensible, and flexible architecture built upon a shared-world model. MOCU version 3 provides an open and extensible operator-control interface that allows additional functionality to be seamlessly added with software modules while providing the means to fully integrate the information into a layered, game-like user interface. MOCU's design allows it to completely decouple the human interface from the core management modules, while still enabling modules to render overlapping regions of the screen without interference or a priori knowledge of other display elements, thus allowing more flexibility in project-specific customization.

  10. Rutgers CAM2000 chip architecture

    NASA Technical Reports Server (NTRS)

    Smith, Donald E.; Hall, J. Storrs; Miyake, Keith

    1993-01-01

    This report describes the architecture and instruction set of the Rutgers CAM2000 memory chip. The CAM2000 combines features of Associative Processing (AP), Content Addressable Memory (CAM), and Dynamic Random Access Memory (DRAM) in a single chip package that is not only DRAM compatible but capable of applying simple massively parallel operations to memory. This document reflects the current status of the CAM2000 architecture and is continually updated to reflect the current state of the architecture and instruction set.

  11. Demand Activated Manufacturing Architecture

    SciTech Connect

    Bender, T.R.; Zimmerman, J.J.

    2001-02-07

    Honeywell Federal Manufacturing & Technologies (FM&T) engineers John Zimmerman and Tom Bender directed separate projects within this CRADA. This Project Accomplishments Summary contains their reports independently. Zimmerman: In 1998 Honeywell FM&T partnered with the Demand Activated Manufacturing Architecture (DAMA) Cooperative Business Management Program to pilot the Supply Chain Integration Planning Prototype (SCIP). At the time, FM&T was developing an enterprise-wide supply chain management prototype called the Integrated Programmatic Scheduling System (IPSS) to improve the DOE's Nuclear Weapons Complex (NWC) supply chain. In the CRADA partnership, FM&T provided the IPSS technical and business infrastructure as a test bed for SCIP technology, and this would provide FM&T the opportunity to evaluate SCIP as the central schedule engine and decision support tool for IPSS. FM&T agreed to do the bulk of the work for piloting SCIP. In support of that aim, DAMA needed specific DOE Defense Programs opportunities to prove the value of its supply chain architecture and tools. In this partnership, FM&T teamed with Sandia National Labs (SNL), Division 6534, the other DAMA partner and developer of SCIP. FM&T tested SCIP in 1998 and 1999. Testing ended in 1999 when DAMA CRADA funding for FM&T ceased. Before entering the partnership, FM&T discovered that the DAMA SCIP technology had an array of applications in strategic, tactical, and operational planning and scheduling. At the time, FM&T planned to improve its supply chain performance by modernizing the NWC-wide planning and scheduling business processes and tools. The modernization took the form of a distributed client-server planning and scheduling system (IPSS) for planners and schedulers to use throughout the NWC on desktops through an off-the-shelf WEB browser. 
The planning and scheduling process within the NWC then, and today, is a labor-intensive paper-based method that plans and schedules more than 8,000 shipped parts

  12. Design and Analysis of Architectures for Structural Health Monitoring Systems

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi; Sixto, S. L. (Technical Monitor)

    2002-01-01

    During the two-year project period, we have worked on several aspects of Health Usage and Monitoring Systems for structural health monitoring. In particular, we have made contributions in the following areas. 1. Reference HUMS architecture: We developed a high-level architecture for health monitoring and usage systems (HUMS). The proposed reference architecture is shown. It is compatible with the Generic Open Architecture (GOA) proposed as a standard for avionics systems. 2. HUMS kernel: One of the critical layers of the HUMS reference architecture is the HUMS kernel. We developed a detailed design of a kernel to implement the high-level architecture. 3. Prototype implementation of HUMS kernel: We have implemented a preliminary version of the HUMS kernel on a Unix platform. We have implemented both a centralized system version and a distributed version. 4. SCRAMNet and HUMS: SCRAMNet (Shared Common Random Access Memory Network) is a system that is found to be suitable to implement HUMS. For this reason, we have conducted a simulation study to determine its stability in handling the input data rates in HUMS. 5. Architectural specification.

  13. A Flexible, High Performance Service-Oriented Architecture for Detecting Cyber Attacks

    SciTech Connect

    Wynne, Adam S.; Gorton, Ian; Almquist, Justin P.; Chatterton, Jack; Thurman, David A.

    2008-02-01

    The next generation of intrusion detection and cyber defense technologies must be highly flexible so that deployed solutions can be quickly modified to detect new attack scenarios. They must also be able to provide the performance necessary to monitor traffic from high speed networks, and scale to enterprise-wide deployments. In this paper we describe our experiences in creating a production application for cyber situational awareness. The application exploits the capabilities of several independently developed components and integrates them using SIFT (Scalable Information Fusion and Triage), a service-oriented architecture (SOA) designed for creating domain-independent, enterprise-scale analytical applications. SIFT exploits a common design pattern for composing analytical components, and extends an existing messaging platform with scaling capabilities. We describe the design of the application, and provide a performance analysis that demonstrates the capabilities of the SIFT platform. The paper concludes by discussing the lessons we have learned from this project, and outlines the architecture of MeDICI, the next generation of our enterprise analytics platform.
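The abstract's "common design pattern for composing analytical components" can be pictured as a message-driven pipeline in which each independently developed component consumes and emits messages. A minimal sketch of that composition idea (the stage names and triage rules here are invented, not SIFT's actual components):

```python
def make_pipeline(*stages):
    """Compose analytic components: each stage maps one message to zero or
    more output messages, so stages can filter, enrich, or fan out."""
    def run(messages):
        for stage in stages:
            messages = [out for msg in messages for out in stage(msg)]
        return messages
    return run

# Hypothetical stages: a triage filter that drops small flows, and an
# enricher that attaches a severity label for downstream consumers.
def triage(msg):
    return [msg] if msg.get("bytes", 0) > 1000 else []

def enrich(msg):
    return [{**msg, "severity": "high" if msg["bytes"] > 10_000 else "low"}]

pipeline = make_pipeline(triage, enrich)
alerts = pipeline([{"bytes": 120}, {"bytes": 5_000}, {"bytes": 50_000}])
```

In a real SOA deployment each stage would sit behind a message broker rather than an in-process function call, which is what lets the platform scale stages independently.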

  14. Formalism Challenges of the Cougaar Model Driven Architecture

    NASA Technical Reports Server (NTRS)

    Bohner, Shawn A.; George, Boby; Gracanin, Denis; Hinchey, Michael G.

    2004-01-01

    The Cognitive Agent Architecture (Cougaar) is one of the most sophisticated distributed agent architectures developed today. As part of its research and evolution, Cougaar is being studied for application to large, logistics-based applications for the Department of Defense (DoD). Anticipating future complex applications of Cougaar, we are investigating the Model Driven Architecture (MDA) approach to understand how effective it would be for increasing productivity in Cougaar-based development efforts. Recognizing the sophistication of the Cougaar development environment and the limitations of transformation technologies for agents, we have systematically developed an approach that combines component assembly in the large and transformation in the small. This paper describes some of the key elements that went into the Cougaar Model Driven Architecture approach and the characteristics that drove the approach.

  15. Supramolecular transformations within discrete coordination-driven supramolecular architectures.

    PubMed

    Wang, Wei; Wang, Yu-Xuan; Yang, Hai-Bo

    2016-05-01

    In this review, a comprehensive summary of supramolecular transformations within discrete coordination-driven supramolecular architectures, including helices, metallacycles, metallacages, etc., is presented. Recent investigations have demonstrated that coordination-driven self-assembled architectures provide an ideal platform to study supramolecular transformations mainly due to the relatively rigid yet dynamic nature of the coordination bonds. Various stimuli have been extensively employed to trigger the transformation processes of metallosupramolecular architectures, such as solvents, concentration, anions, guests, change in component fractions or chemical compositions, light, and post-modification reactions, which allowed for the formation of new structures with specific properties and functions. Thus, it is believed that supramolecular transformations could serve as another highly efficient approach for generating diverse metallosupramolecular architectures. Classified by the aforementioned various stimuli used to induce the interconversion processes, the emphasis in this review will be on the transformation conditions, structural changes, mechanisms, and the output of specific properties and functions upon induction of structural transformations. PMID:27009833

  16. An architecture for integrating distributed and cooperating knowledge-based Air Force decision aids

    NASA Technical Reports Server (NTRS)

    Nugent, Richard O.; Tucker, Richard W.

    1988-01-01

    MITRE has been developing a Knowledge-Based Battle Management Testbed for evaluating the viability of integrating independently-developed knowledge-based decision aids in the Air Force tactical domain. The primary goal for the testbed architecture is to permit a new system to be added to a testbed with little change to the system's software. Each system that connects to the testbed network declares that it can provide a number of services to other systems. When a system wants to use another system's service, it does not address the server system by name, but instead transmits a request to the testbed network asking for a particular service to be performed. A key component of the testbed architecture is a common database which uses a relational database management system (RDBMS). The RDBMS provides a database update notification service to requesting systems. Normally, each system is expected to monitor data relations of interest to it. Alternatively, a system may broadcast an announcement message to inform other systems that an event of potential interest has occurred. Current research is aimed at dealing with issues resulting from integration efforts, such as dealing with potential mismatches of each system's assumptions about the common database, decentralizing network control, and coordinating multiple agents.
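The testbed's key idea is that a client asks the network for a service rather than addressing a server by name, and that systems declare the services they can provide. A toy sketch of that capability-based dispatch (all system and service names below are hypothetical, and the real testbed brokered requests over a network rather than in-process):

```python
class ServiceBus:
    """Minimal broker: systems declare services; requests are routed by
    service name, never by server identity."""

    def __init__(self):
        self._providers = {}

    def declare(self, system, service, handler):
        """A connecting system announces a service it can perform."""
        self._providers.setdefault(service, []).append((system, handler))

    def request(self, service, payload):
        """Route a request to some provider of the service; the caller
        never learns (or cares) which system answered."""
        if service not in self._providers:
            raise LookupError(f"no provider for {service!r}")
        _system, handler = self._providers[service][0]
        return handler(payload)

bus = ServiceBus()
bus.declare("threat-assessor", "assess", lambda p: {"threat": p["speed"] > 500})
result = bus.request("assess", {"speed": 600})
```

Adding a new decision aid then only requires declaring its services to the bus, which is the "little change to the system's software" property the architecture is after.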

  17. Architectures for reasoning in parallel

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.

    1989-01-01

    The research conducted has dealt with rule-based expert systems. The algorithms that may lead to effective parallelization of them were investigated. Both the forward and backward chained control paradigms were investigated in the course of this work. The best computer architecture for the developed and investigated algorithms has been researched. Two experimental vehicles were developed to facilitate this research. They are Backpac, a parallel backward chained rule-based reasoning system and Datapac, a parallel forward chained rule-based reasoning system. Both systems have been written in Multilisp, a version of Lisp which contains the parallel construct, future. Applying the future function to a function causes the function to become a task parallel to the spawning task. Additionally, Backpac and Datapac have been run on several disparate parallel processors. The machines are an Encore Multimax with 10 processors, the Concert Multiprocessor with 64 processors, and a 32 processor BBN GP1000. Both the Concert and the GP1000 are switch-based machines. The Multimax has all its processors hung off a common bus. All are shared memory machines, but have different schemes for sharing the memory and different locales for the shared memory. The main results of the investigations come from experiments on the 10 processor Encore and the Concert with partitions of 32 or less processors. Additionally, experiments have been run with a stripped down version of EMYCIN.
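Multilisp's `future` wraps an expression in a task that runs in parallel with the spawning task, yielding a placeholder that blocks only when its value is demanded. Python's `concurrent.futures` offers an analogous construct; the sketch below illustrates the mechanism only (the rule-firing function is a stand-in, not Backpac/Datapac code):

```python
from concurrent.futures import ThreadPoolExecutor

def fire_rule(rule_id, fact):
    """Stand-in for evaluating one rule against working memory."""
    return (rule_id, fact * 2)

# In Multilisp, (future (fire-rule r f)) spawns a task parallel to the
# spawning task; submit() plays the same role here, returning a Future
# placeholder immediately instead of the computed value.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(fire_rule, r, f) for r, f in [(1, 10), (2, 20), (3, 30)]]
    # Touching a future forces its value, blocking if the task is unfinished.
    results = [fut.result() for fut in futures]
```

This spawn-then-force pattern is what allows many rule firings to proceed concurrently on shared-memory machines like the Multimax or GP1000 described above.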

  18. Software synthesis using generic architectures

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay

    1993-01-01

    A framework for synthesizing software systems based on abstracting software system designs and the design process is described. The result of such an abstraction process is a generic architecture and the process knowledge for customizing the architecture. The customization process knowledge is used to assist a designer in customizing the architecture as opposed to completely automating the design of systems. Our approach using an implemented example of a generic tracking architecture which was customized in two different domains is illustrated. How the designs produced using KASE compare to the original designs of the two systems, and current work and plans for extending KASE to other application areas are described.

  19. A Design for Standards-based Knowledge Components.

    ERIC Educational Resources Information Center

    Anderson, Thor A.; Merrill, M. David

    2000-01-01

    Describes ongoing work in designing modular software components based on open standards and a specific instructional design theory: instructional transaction theory. Focuses on applied technological solutions to overcome identified limitations of current authoring environments, including proprietary architectures, instructional design theory…

  20. Common Interventional Radiology Procedures

    MedlinePlus

    ... of common interventional techniques is below. Common Interventional Radiology Procedures Angiography An X-ray exam of the ... into the vertebra. ...

  1. How Common Is the Common Core?

    ERIC Educational Resources Information Center

    Thomas, Amande; Edson, Alden J.

    2014-01-01

    Since the introduction of the Common Core State Standards for Mathematics (CCSSM) in 2010, stakeholders in adopting states have engaged in a variety of activities to understand CCSSM standards and transition from previous state standards. These efforts include research, professional development, assessment and modification of curriculum resources,…

  2. Technology advances and market forces: Their impact on high performance architectures

    NASA Technical Reports Server (NTRS)

    Best, D. R.

    1978-01-01

    Reasonable projections into future supercomputer architectures and technology require an analysis of the computer industry market environment, the current capabilities and trends within the component industry, and the research activities on computer architecture in the industrial and academic communities. Management, programmer, architect, and user must cooperate to increase the efficiency of supercomputer development efforts. Care must be taken to match the funding, compiler, architecture and application with greater attention to testability, maintainability, reliability, and usability than supercomputer development programs of the past.

  3. Commonality analysis as a knowledge acquisition problem

    NASA Technical Reports Server (NTRS)

    Yeager, Dorian P.

    1987-01-01

    Commonality analysis is a systematic attempt to reduce costs in a large scale engineering project by discontinuing development of certain components during the design phase. Each discontinued component is replaced by another component that has sufficient functionality to be considered an appropriate substitute. The replacement strategy is driven by economic considerations. The System Commonality Analysis Tool (SCAT) is based on an oversimplified model of the problem and incorporates no knowledge acquisition component. In fact, the process of arriving at a compromise between functionality and economy is quite complex, with many opportunities for the application of expert knowledge. Such knowledge is of two types: general knowledge expressible as heuristics or mathematical laws potentially applicable to any set of components, and specific knowledge about the way in which elements of a given set of components interrelate. Examples of both types of knowledge are presented, and a framework is proposed for integrating the knowledge into a more general and useable tool.
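The replacement strategy described above can be framed as an optimization: for each required component, pick the cheapest candidate whose functionality covers the requirement, and discontinue the rest. A toy model of that economic substitution (component names, functions, and costs are invented; real commonality analysis must also weigh the expert knowledge the abstract describes):

```python
def commonality_pass(components, requirements):
    """For each requirement, substitute the cheapest component whose set of
    functions covers it; everything unselected can be discontinued."""
    selected = {}
    for name, needed in requirements.items():
        candidates = [
            (spec["cost"], cname)
            for cname, spec in components.items()
            if needed <= spec["functions"]  # subset test: sufficient functionality
        ]
        if not candidates:
            raise ValueError(f"no substitute covers {name!r}")
        selected[name] = min(candidates)[1]  # economic tie-breaker: lowest cost
    return selected

components = {
    "valve-A": {"functions": {"flow", "shutoff"}, "cost": 120},
    "valve-B": {"functions": {"flow", "shutoff", "throttle"}, "cost": 150},
}
choices = commonality_pass(
    components,
    {"fuel-line": {"flow", "shutoff"}, "oxidizer-line": {"flow", "throttle"}},
)
```

The oversimplification the abstract criticizes is visible here: cost and a subset test stand in for the heuristics and component-interrelation knowledge that a knowledge acquisition component would have to capture.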

  4. 9. Photocopy of architectural drawing (from National Archives Architectural and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. Photocopy of architectural drawing (from National Archives Architectural and Cartographic Branch, Alexandria, Va.) Annotated lithograph on paper. Standard plan used for construction of Commissary Sergeants Quarters, 1876. PLAN, FRONT AND SIDE ELEVATIONS, SECTION - Fort Myer, Commissary Sergeant's Quarters, Washington Avenue between Johnson Lane & Custer Road, Arlington, Arlington County, VA

  5. The New Common School.

    ERIC Educational Resources Information Center

    Glenn, Charles L.

    1987-01-01

    Horace Mann's goal of creating a common school that brings our society's children together in mutual respect and common learning need not be frustrated by residential segregation and geographical separation of the haves and have-nots. Massachusetts' new common school vision boasts a Metro Program for minority students, 80 magnet schools, and…

  6. The Common Core.

    ERIC Educational Resources Information Center

    Boyer, Ernest L.

    Current curricula in institutions of higher education are criticized in this speech for their lack of a common core of education. Several possibilities for developing such a common core include education centered around our common heritage and the challenges of the present. It is suggested that all students must be introduced to the events,…

  7. Knowledge representation for commonality

    NASA Technical Reports Server (NTRS)

    Yeager, Dorian P.

    1990-01-01

    Domain-specific knowledge necessary for commonality analysis falls into two general classes: commonality constraints and costing information. Notations for encoding such knowledge should be powerful and flexible and should appeal to the domain expert. The notations employed by the Commonality Analysis Problem Solver (CAPS) analysis tool are described. Examples are given to illustrate the main concepts.

  8. The Architecture of Exoplanets

    NASA Astrophysics Data System (ADS)

    Hatzes, Artie P.

    2016-05-01

    Prior to the discovery of exoplanets our expectations of their architecture were largely driven by the properties of our solar system. We expected giant planets to lie in the outer regions and rocky planets in the inner regions. Planets should probably only occupy orbital distances 0.3-30 AU from the star. Planetary orbits should be circular, prograde and in the same plane. The reality of exoplanets has shattered these expectations. Jupiter-mass, Neptune-mass, Superearths, and even Earth-mass planets can orbit within 0.05 AU of the stars, sometimes with orbital periods of less than one day. Exoplanetary orbits can be eccentric, misaligned, and even in retrograde orbits. Radial velocity surveys gave the first hints that the occurrence rate increases with decreasing mass. This was put on a firm statistical basis with the Kepler mission that clearly demonstrated that there were more Neptune- and Superearth-sized planets than Jupiter-sized planets. These are often in multiple, densely packed systems where the planets all orbit within 0.3 AU of the star, a result also suggested by radial velocity surveys. Exoplanets also exhibit diversity along the main sequence. Massive stars tend to have a higher frequency of planets (≈20-25%) that tend to be more massive (M ≈ 5-10 M_Jup). Giant planets around low mass stars are rare, but these stars show an abundance of small (Neptune and Superearth) planets in multiple systems. Planet formation is also not restricted to single stars, as the Kepler mission has discovered several circumbinary planets. Although we have learned much about the architecture of planets over the past 20 years, we know little about the census of small planets at relatively large (a > 1 AU) orbital distances. We have yet to find a planetary system that is analogous to our own solar system. The question of how unique are the properties of our own solar system remains unanswered. Advancements in the detection methods of small planets over a wide range

  9. A study of the selection of microcomputer architectures to automate planetary spacecraft power systems

    NASA Technical Reports Server (NTRS)

    Nauda, A.

    1982-01-01

    Performance and reliability models of alternate microcomputer architectures as a methodology for optimizing system design were examined. A methodology for selecting an optimum microcomputer architecture for autonomous operation of planetary spacecraft power systems was developed. Various microcomputer system architectures are analyzed to determine their application to spacecraft power systems. It is suggested that no standardization formula or common set of guidelines exists which provides an optimum configuration for a given set of specifications.

  10. Re-engineering Nascom's network management architecture

    NASA Astrophysics Data System (ADS)

    Drake, Brian C.; Messent, David

    1994-11-01

    The development of Nascom systems for ground communications began in 1958 with Project Vanguard. The low-speed systems (rates less than 9.6 Kbs) were developed following existing standards; but, there were no comparable standards for high-speed systems. As a result, these systems were developed using custom protocols and custom hardware. Technology has made enormous strides since the ground support systems were implemented. Standards for computer equipment, software, and high-speed communications exist and the performance of current workstations exceeds that of the mainframes used in the development of the ground systems. Nascom is in the process of upgrading its ground support systems and providing additional services. The Message Switching System (MSS), Communications Address Processor (CAP), and Multiplexer/Demultiplexer (MDM) Automated Control System (MACS) are all examples of Nascom systems developed using standards such as, X-windows, Motif, and Simple Network Management Protocol (SNMP). Also, the Earth Observing System (EOS) Communications (Ecom) project is stressing standards as an integral part of its network. The move towards standards has produced a reduction in development, maintenance, and interoperability costs, while providing operational quality improvement. The Facility and Resource Manager (FARM) project has been established to integrate the Nascom networks and systems into a common network management architecture. The maximization of standards and implementation of computer automation in the architecture will lead to continued cost reductions and increased operational efficiency. The first step has been to derive overall Nascom requirements and identify the functionality common to all the current management systems. The identification of these common functions will enable the reuse of processes in the management architecture and promote increased use of automation throughout the Nascom network. 
The MSS, CAP, MACS, and Ecom projects have indicated

  11. Re-engineering Nascom's network management architecture

    NASA Technical Reports Server (NTRS)

    Drake, Brian C.; Messent, David

    1994-01-01

    The development of Nascom systems for ground communications began in 1958 with Project Vanguard. The low-speed systems (rates less than 9.6 kbps) were developed following existing standards, but there were no comparable standards for high-speed systems. As a result, these systems were developed using custom protocols and custom hardware. Technology has made enormous strides since the ground support systems were implemented. Standards for computer equipment, software, and high-speed communications now exist, and the performance of current workstations exceeds that of the mainframes used in the development of the ground systems. Nascom is in the process of upgrading its ground support systems and providing additional services. The Message Switching System (MSS), Communications Address Processor (CAP), and Multiplexer/Demultiplexer (MDM) Automated Control System (MACS) are all examples of Nascom systems developed using standards such as X Windows, Motif, and the Simple Network Management Protocol (SNMP). The Earth Observing System (EOS) Communications (Ecom) project is also stressing standards as an integral part of its network. The move towards standards has reduced development, maintenance, and interoperability costs while improving operational quality. The Facility and Resource Manager (FARM) project has been established to integrate the Nascom networks and systems into a common network management architecture. Maximizing the use of standards and implementing computer automation in the architecture will lead to continued cost reductions and increased operational efficiency. The first step has been to derive overall Nascom requirements and identify the functionality common to all the current management systems. The identification of these common functions will enable the reuse of processes in the management architecture and promote increased use of automation throughout the Nascom network. 
The MSS, CAP, MACS, and Ecom projects have indicated

  12. Embedded instrumentation architecture

    DOEpatents

    Boyd, Gerald M.; Farrow, Jeffrey

    2015-09-29

    The various technologies presented herein relate to generating copies of an incoming signal, wherein each copy of the signal can undergo different processing to facilitate control of bandwidth demands during communication of one or more signals relating to the incoming signal. A signal sharing component can be utilized to share copies of the incoming signal between a plurality of circuits/components which can include a first A/D converter, a second A/D converter, and a comparator component. The first A/D converter can operate at a low sampling rate and accordingly generates, and continuously transmits, a signal having a low bandwidth requirement. The second A/D converter can operate at a high sampling rate and hence generates a signal having a high bandwidth requirement. Transmission of a signal from the second A/D converter can be controlled by a signaling event (e.g., a signal pulse) being determined to have occurred by the comparator component.
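
The dual-path acquisition scheme described above can be sketched as follows. This is an illustrative simulation only, not the patented implementation; the function name, threshold, and stride are all hypothetical stand-ins for the comparator setting and the two A/D sampling rates.

```python
def process_signal(samples, threshold=0.8, low_rate_stride=16):
    """Split one incoming signal into a low-bandwidth stream plus
    event-gated high-rate bursts (illustrative stand-in for the
    signal sharing component, two A/D converters, and comparator)."""
    low_rate_stream = samples[::low_rate_stride]   # first A/D: coarse, always sent
    high_rate_bursts = []
    if any(abs(s) > threshold for s in samples):   # comparator: signaling event?
        high_rate_bursts = samples[:]              # second A/D: full-rate copy sent
    return low_rate_stream, high_rate_bursts

# With no event, only the low-bandwidth stream is produced; a pulse above
# the comparator threshold gates the high-bandwidth path through.
quiet = [0.1, 0.0, -0.1] * 16
low, high = process_signal(quiet)
assert high == []

pulse = quiet[:20] + [0.95] + quiet[20:]
low, high = process_signal(pulse)
assert high
```

The point of the split is the bandwidth control the abstract describes: the continuous stream stays cheap, and the expensive full-rate data is transmitted only around detected events.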

  13. Space Telecommunications Radio System (STRS) Architecture Standard. Release 1.02.1

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Kacpura, Thomas J.; Handler, Louis M.; Hall, C. Steve; Mortensen, Dale J.; Johnson, Sandra K.; Briones, Janette C.; Nappier, Jennifer M.; Downey, Joseph A.; Lux, James P.

    2012-01-01

    This document contains the NASA architecture standard for software defined radios used in space- and ground-based platforms to enable commonality among radio developments to enhance capability and services while reducing mission and programmatic risk. Transceivers (or transponders) with functionality primarily defined in software (e.g., firmware) have the ability to change their functional behavior through software alone. This radio architecture standard offers value by employing common waveform software interfaces, method of instantiation, operation, and testing among different compliant hardware and software products. These common interfaces within the architecture abstract application software from the underlying hardware to enable technology insertion independently at either the software or hardware layer.

  14. Vacuum Brazing of Accelerator Components

    NASA Astrophysics Data System (ADS)

    Singh, Rajvir; Pant, K. K.; Lal, Shankar; Yadav, D. P.; Garg, S. R.; Raghuvanshi, V. K.; Mundra, G.

    2012-11-01

    Materials commonly used for accelerator components must be vacuum compatible and thermally conductive; stainless steel, aluminum, and copper are the most common among them. Stainless steel is a poor heat conductor and is rarely used where good thermal conductivity is required. Aluminum, copper, and their alloys meet both requirements and are frequently used for this purpose. Fabricating accelerator components from aluminum and its alloys by welding is now common practice, and the use of copper and its various grades is mandatory in the RF devices required for accelerators. Beam line and front-end components of the accelerators are fabricated from stainless steel and OFHC copper. Joining copper components by welding is very difficult and in most cases impossible; fabrication and joining in such cases are possible using brazing, especially under vacuum or an inert gas atmosphere. Several accelerator components have been vacuum brazed for the Indus projects at the Raja Ramanna Centre for Advanced Technology (RRCAT), Indore, using the vacuum brazing facility available at RRCAT. This paper presents details of the development of these high-value, strategic components and assemblies, including the basics of vacuum brazing, details of the vacuum brazing facility, joint design, fixturing of the jobs, selection of filler alloys, optimization of brazing parameters to obtain high-quality brazed joints, and a brief description of vacuum brazed accelerator components.

  15. Multiprocessor architectural study

    NASA Technical Reports Server (NTRS)

    Kosmala, A. L.; Stanten, S. F.; Vandever, W. H.

    1972-01-01

    An architectural design study was made of a multiprocessor computing system intended to meet functional and performance specifications appropriate to a manned space station application. Intermetrics' previous experience and accumulated knowledge of the multiprocessor field were used to generate a baseline philosophy for the design of a future SUMC* multiprocessor. Interrupts are defined, and the crucial questions of interrupt structure, such as processor selection and response time, are discussed. Memory hierarchy and performance are discussed extensively, with particular attention to the design approach that associates a cache memory with each processor. The ability of an individual processor to approach its theoretical maximum performance is then analyzed in terms of hit ratio. Memory management is envisioned as a virtual memory system implemented through either segmentation or paging. Addressing is discussed in terms of the various register designs adopted by current computers and those of advanced design.

  16. Functional Biomimetic Architectures

    NASA Astrophysics Data System (ADS)

    Levine, Paul M.

    N-substituted glycine oligomers, or 'peptoids,' are a class of sequence-specific foldamers composed of tertiary amide linkages, engendering proteolytic stability and enhanced cellular permeability. Peptoids are notable for their facile synthesis, sequence diversity, and ability to fold into distinct secondary structures. In an effort to establish new functional peptoid architectures, we utilize the copper-catalyzed azide-alkyne [3+2] cycloaddition (CuAAC) reaction to generate peptidomimetic assemblies bearing bioactive ligands that specifically target and modulate Androgen Receptor (AR) activity, a major therapeutic target for prostate cancer. Additionally, we explore chemical ligation protocols to generate semi-synthetic hybrid biomacromolecules capable of exhibiting novel structures and functions not accessible to fully biosynthesized proteins.

  17. Naval open systems architecture

    NASA Astrophysics Data System (ADS)

    Guertin, Nick; Womble, Brian; Haskell, Virginia

    2013-05-01

    For the past 8 years, the Navy has been working on transforming the acquisition practices of the Navy and Marine Corps toward Open Systems Architectures to open up our business, gain competitive advantage, improve warfighter performance, speed innovation to the fleet, and deliver superior capability to the warfighter within a shrinking budget. Why should Industry care? They should care because we in Government want the best Industry has to offer. Industry is in the business of pushing technology to greater and greater capabilities through innovation. Examples of innovations are on full display at this conference, such as exploring the impact of difficult environmental conditions on technical performance. Industry is creating the tools which will continue to give the Navy and Marine Corps important tactical advantages over our adversaries.

  18. Planning in subsumption architectures

    NASA Technical Reports Server (NTRS)

    Chalfant, Eugene C.

    1994-01-01

    A subsumption planner using a parallel distributed computational paradigm based on the subsumption architecture for control of real-world capable robots is described. Virtual sensor state space is used as a planning tool to visualize the robot's anticipated effect on its environment. Decision sequences are generated based on the environmental situation expected at the time the robot must commit to a decision. Between decision points, the robot performs in a preprogrammed manner. A rudimentary, domain-specific partial world model contains enough information to extrapolate the end results of the rote behavior between decision points. A collective network of predictors operates in parallel with the reactive network, forming a recurrent network which generates plans as a hierarchy. Details of a plan segment are generated only when its execution is imminent. The use of the subsumption planner is demonstrated by a simple maze navigation problem.

  19. Power Systems Control Architecture

    SciTech Connect

    James Davidson

    2005-01-01

    A diagram provided in the report depicts the complexity of the power systems control architecture used by the national power structure. It shows the structural hierarchy and the relationship of each system to the other systems interconnected with it. Each of these levels provides a different focus for vulnerability testing and has its own weaknesses. In evaluating each level, the prime concern is what vulnerabilities provide a path into the system, either to cause the system to malfunction or to take control of a field device. An additional vulnerability to consider is whether the system can be compromised in such a manner that the attacker can obtain critical information about the system and the portion of the national power structure that it controls.

  20. MSAT network architecture

    NASA Technical Reports Server (NTRS)

    Davies, N. G.; Skerry, B.

    1990-01-01

    The Mobile Satellite (MSAT) communications system will support mobile voice and data services using circuit switched and packet switched facilities with interconnection to the public switched telephone network and private networks. Control of the satellite network will reside in a Network Control System (NCS) which is being designed to be extremely flexible to provide for the operation of the system initially with one multi-beam satellite, but with capability to add additional satellites which may have other beam configurations. The architecture of the NCS is described. The signalling system must be capable of supporting the protocols for the assignment of circuits for mobile public telephone and private network calls as well as identifying packet data networks. The structure of a straw-man signalling system is discussed.

  1. Open Architecture Standard for NASA's Software-Defined Space Telecommunications Radio Systems

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Johnson, Sandra K.; Kacpura, Thomas J.; Hall, Charles S.; Smith, Carl R.; Liebetreu, John

    2008-01-01

    NASA is developing an architecture standard for software-defined radios used in space- and ground-based platforms to enable commonality among radio developments to enhance capability and services while reducing mission and programmatic risk. Transceivers (or transponders) with functionality primarily defined in software (e.g., firmware) have the ability to change their functional behavior through software alone. This radio architecture standard offers value by employing common waveform software interfaces, method of instantiation, operation, and testing among different compliant hardware and software products. These common interfaces within the architecture abstract application software from the underlying hardware to enable technology insertion independently at either the software or hardware layer. This paper presents the initial Space Telecommunications Radio System (STRS) Architecture for NASA missions to provide the desired software abstraction and flexibility while minimizing the resources necessary to support the architecture.
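
The layering the abstract describes, in which common interfaces abstract waveform application software from the underlying radio hardware, can be sketched as below. This is an illustrative sketch only, not the actual STRS API; every class and method name here is hypothetical.

```python
from abc import ABC, abstractmethod

class RadioPlatform(ABC):
    """Hardware abstraction layer: each vendor implements this interface
    (names are hypothetical, standing in for the standard's common interfaces)."""
    @abstractmethod
    def set_frequency(self, hz: float) -> None: ...
    @abstractmethod
    def transmit(self, samples: list) -> None: ...

class Waveform:
    """Application layer: written once against the abstract interface,
    so it ports across compliant hardware without modification."""
    def __init__(self, platform: RadioPlatform):
        self.platform = platform

    def send(self, samples: list, hz: float) -> None:
        self.platform.set_frequency(hz)
        self.platform.transmit(samples)

class LoopbackPlatform(RadioPlatform):
    """A stand-in platform, showing technology insertion at the hardware
    layer with no change to the waveform code."""
    def __init__(self):
        self.freq, self.sent = None, []
    def set_frequency(self, hz): self.freq = hz
    def transmit(self, samples): self.sent.extend(samples)

platform = LoopbackPlatform()
Waveform(platform).send([1, 0, 1], hz=437e6)
```

Swapping `LoopbackPlatform` for any other `RadioPlatform` implementation is the "technology insertion independently at either the software or hardware layer" that the standard aims to enable.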

  2. Architectures of fiber optic network in telecommunications

    NASA Astrophysics Data System (ADS)

    Vasile, Irina B.; Vasile, Alexandru; Filip, Luminita E.

    2005-08-01

    Telecommunications operators have targeted their efforts towards realizing broadband fiber optic applications in the access network. Thus, a new concept for implementing fiber optic transmission systems, named FITL (Fiber In The Loop), has appeared. Fiber optic transmission systems have been used extensively for transport and interconnection within the public telecommunication network, as well as for providing access to the telecommunication systems of large corporations. Still, the segment of residential users and small corporations has not benefited from this technology on a large scale. For the purpose of defining fiber optic applications, several types of architectures were conceived: bus, ring, star, and tree. Tree-like networks use passive splitters (hence the name PON, Passive Optical Network), which significantly reduce the cost of fiber optic access by sharing the cost of the optoelectronic components. That is why passive fiber optic architectures (PONs) represent a viable solution for realizing access at the user's loop. The main types of fiber optic architectures included in this work are: FTTC (Fiber To The Curb), FTTB (Fiber To The Building), and FTTH (Fiber To The Home).

  3. Survey of software and hardware VLC architectures

    NASA Astrophysics Data System (ADS)

    Fogg, Chad E.

    1994-05-01

    In implementations of hybrid compression algorithms, the Huffman or modified Huffman codec can consume a significant portion of silicon real estate or CPU cycles. To reduce this cost, several schemes have been published that take advantage of one or more inherent properties of variable-length code tables. This paper examines some of these properties and their corresponding architectural components, which can be pieced together to form custom hybrids suited to specific applications. Hardware architectural classifications include: serial and parallel trees, content-addressable memory, programmable logic arrays, and parallel comparator schemes that resemble flash A/D architectures. Assessment criteria include: bit rate vs. symbol rate performance, clock cycle ratios, latencies, pre-buffering and post-buffering, codebook and source channel statistical dependencies, encoder and decoder circuitry sharing, pre-processing of codebooks, critical path, register use in software, breakdown between memory and logical operators, custom vs. standard cells, and code word order. Finally, the performance and size of current industrial implementations for specific applications (JPEG, MPEG) are summarized.
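
One inherent property such decoders exploit is that a prefix-free codebook allows one symbol to be decoded per lookup: a table indexed by the next max-length bits returns the symbol and its true code length, trading memory for speed. The sketch below illustrates the idea with a made-up four-symbol codebook (it is not from the paper).

```python
CODEBOOK = {"0": "A", "10": "B", "110": "C", "111": "D"}  # prefix-free
MAX_LEN = max(len(c) for c in CODEBOOK)

# Precompute: every MAX_LEN-bit pattern -> (symbol, actual code length)
LUT = {}
for code, sym in CODEBOOK.items():
    pad = MAX_LEN - len(code)
    for i in range(2 ** pad):
        suffix = format(i, f"0{pad}b") if pad else ""
        LUT[code + suffix] = (sym, len(code))

def decode(bits):
    """Decode a bitstring one LUT access per symbol."""
    out, pos = [], 0
    while pos < len(bits):
        window = bits[pos:pos + MAX_LEN].ljust(MAX_LEN, "0")
        sym, length = LUT[window]
        out.append(sym)
        pos += length          # advance by the real code length, not MAX_LEN
    return "".join(out)

# "0" + "10" + "110" + "0" + "111" encodes A, B, C, A, D:
assert decode("0101100111") == "ABCAD"
```

A hardware tree decoder walks one bit per cycle, while this table form decodes one symbol per access at the cost of a 2^MAX_LEN-entry memory, which is exactly the kind of bit-rate-vs-area trade-off the survey's assessment criteria capture.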

  4. A Distributed Prognostic Health Management Architecture

    NASA Technical Reports Server (NTRS)

    Bhaskar, Saha; Saha, Sankalita; Goebel, Kai

    2009-01-01

    This paper introduces a generic distributed prognostic health management (PHM) architecture with specific application to the electrical power systems domain. Current state-of-the-art PHM systems are mostly centralized in nature, where all the processing is reliant on a single processor. This can lead to loss of functionality in case of a crash of the central processor or monitor. Furthermore, with increases in the volume of sensor data as well as the complexity of algorithms, traditional centralized systems become unsuitable for successful deployment, and efficient distributed architectures are required. A distributed architecture, though, is not effective unless there is an algorithmic framework to take advantage of its unique abilities. The health management paradigm envisaged here incorporates a heterogeneous set of system components monitored by a varied suite of sensors and a particle filtering (PF) framework that has the power and the flexibility to adapt to the different diagnostic and prognostic needs. Both the diagnostic and prognostic tasks are formulated as a particle filtering problem in order to explicitly represent and manage uncertainties; however, the complexity of the prognostic routine typically exceeds the computational power of a single computational element (CE). Individual CEs run diagnostic routines until the system variable being monitored crosses a nominal threshold, at which point they coordinate with other networked CEs to run the prognostic routine in a distributed fashion. Implementation results from a network of distributed embedded devices monitoring a prototypical aircraft electrical power system are presented, where the CEs are Sun Microsystems Small Programmable Object Technology (SPOT) devices.
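
The escalation logic described above can be sketched as follows. This is illustrative only, not the paper's implementation: a simple moving average stands in for the particle-filter state estimate, and the threshold value is invented.

```python
from collections import deque

NOMINAL_THRESHOLD = 5.0   # hypothetical nominal limit for the monitored variable

def monitor(measurements, window=4):
    """Cheap local diagnostic loop run by each computational element (CE):
    escalate to the expensive, coordinated prognostic routine only once the
    smoothed estimate crosses the nominal threshold."""
    recent = deque(maxlen=window)
    for m in measurements:
        recent.append(m)
        estimate = sum(recent) / len(recent)   # stand-in for the PF estimate
        if estimate > NOMINAL_THRESHOLD:
            return "prognostics"   # coordinate with peer CEs from here on
    return "diagnostics"           # variable stayed nominal: cheap mode only

assert monitor([1.0, 1.2, 0.9, 1.1]) == "diagnostics"
assert monitor([1.0, 3.0, 6.0, 8.0, 9.0]) == "prognostics"
```

The design point is that the costly distributed routine runs only on demand, which keeps each CE within its computational budget during nominal operation.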

  5. Crystal Structure of a Group I Energy Coupling Factor Vitamin Transporter S Component in Complex with Its Cognate Substrate.

    PubMed

    Josts, Inokentijs; Almeida Hernandez, Yasser; Andreeva, Antonina; Tidow, Henning

    2016-07-21

    Energy coupling factor (ECF) transporters are responsible for the uptake of essential scarce nutrients in prokaryotes. This ATP-binding cassette transporter family comprises two subgroups that share a common architecture, forming a tripartite membrane protein complex consisting of a translocation component, an ATP-hydrolyzing module, and a substrate-capture (S) component. Here, we present the crystal structure of YkoE from Bacillus subtilis, the S component of the previously uncharacterized group I ECF transporter YkoEDC. Structural and biochemical analyses revealed the constituent residues of the thiamine-binding pocket as well as an unexpected mode of vitamin recognition. In addition, our experimental and bioinformatics data demonstrate major differences between YkoE and group II ECF transporters and indicate how group I vitamin transporter S components have diverged from other group I and group II ECF transporters. PMID:27447050

  6. SpaceWire Architectures: Present and Future

    NASA Technical Reports Server (NTRS)

    Rakow, Glen Parker

    2006-01-01

    A viewgraph presentation on current and future SpaceWire architectures is shown. The topics include: 1) Current SpaceWire Architectures: Swift Data Flow; 2) Current SpaceWire Architectures: LRO Data Flow; 3) Current SpaceWire Architectures: JWST Data Flow; 4) Current SpaceWire Architectures; 5) Traditional Systems; 6) Future Systems; 7) Advantages; and 8) System Engineer Toolkit.

  7. Integrated Sensor Architecture (ISA) for Live Virtual Constructive (LVC) environments

    NASA Astrophysics Data System (ADS)

    Moulton, Christine L.; Harkrider, Susan; Harrell, John; Hepp, Jared

    2014-06-01

    The Integrated Sensor Architecture (ISA) is an interoperability solution that allows for the sharing of information between sensors and systems in a dynamic tactical environment. The ISA created a Service Oriented Architecture (SOA) that identifies common standards and protocols which support a net-centric system of systems integration. Utilizing a common language, these systems are able to connect, publish their needs and capabilities, and interact with other systems even on disadvantaged networks. Within the ISA project, three levels of interoperability were defined and implemented and these levels were tested at many events. Extensible data models and capabilities that are scalable across multi-echelons are supported, as well as dynamic discovery of capabilities and sensor management. The ISA has been tested and integrated with multiple sensors, platforms, and over a variety of hardware architectures in operational environments.

  8. Information architecture: Profile of adopted standards

    SciTech Connect

    1997-09-01

    The Department of Energy (DOE), like other Federal agencies, is under increasing pressure to use information technology to improve efficiency in mission accomplishment as well as delivery of services to the public. Because users and systems have become interdependent, DOE has enterprise-wide needs for common application architectures, communication networks, databases, security, and management capabilities. Users need open systems that provide interoperability of products and portability of people, data, and applications distributed throughout heterogeneous computing environments. The level of interoperability necessary requires the adoption of DOE-wide standards, protocols, and best practices. The Department has developed an information architecture and a related standards adoption and retirement process to assist users in developing strategies and plans for acquiring information technology products and services based upon open systems standards that support application software interoperability, portability, and scalability. This set of Departmental Information Architecture standards represents guidance for achieving higher degrees of interoperability within the greater DOE community, business partners, and stakeholders. While these standards are not mandatory, due consideration of their application in contractual matters and their use in technology implementations Department-wide are goals of the Chief Information Officer.

  9. Optimal expression evaluation for data parallel architectures

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    A data parallel machine represents an array or other composite data structure by allocating one processor (at least conceptually) per data item. A pointwise operation can be performed between two such arrays in unit time, provided their corresponding elements are allocated in the same processors. If the arrays are not aligned in this fashion, the cost of moving one or both of them is part of the cost of the operation. The choice of where to perform the operation then affects this cost. If an expression with several operands is to be evaluated, there may be many choices of where to perform the intermediate operations. An efficient algorithm is given to find the minimum-cost way to evaluate an expression, for several different data parallel architectures. This algorithm applies to any architecture in which the metric describing the cost of moving an array is robust. This encompasses most of the common data parallel communication architectures, including meshes of arbitrary dimension and hypercubes. Remarks are made on several variations of the problem, some of which are solved and some of which remain open.
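
The problem the abstract describes, choosing where to evaluate each intermediate operation so that total data-movement cost is minimized, can be sketched as a bottom-up dynamic program over the expression tree. This is a hedged illustration, not the paper's algorithm: the cost of moving an array between processor locations is taken here as Manhattan distance on a small 2-D mesh (one example of a robust metric), and the tree encoding is invented for the sketch.

```python
def move_cost(a, b):
    """Cost of moving an array between mesh locations a and b
    (Manhattan distance, a simple robust metric)."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def min_eval_cost(expr, locations):
    """expr is ('leaf', home_location) or ('op', left, right).
    Returns {loc: minimum cost of making the result available at loc}."""
    if expr[0] == "leaf":
        home = expr[1]
        return {loc: move_cost(home, loc) for loc in locations}
    _, left, right = expr
    lcost = min_eval_cost(left, locations)
    rcost = min_eval_cost(right, locations)
    # The op may be evaluated at any site s (operands moved there),
    # with the result then shipped on to loc.
    return {
        loc: min(lcost[s] + rcost[s] + move_cost(s, loc) for s in locations)
        for loc in locations
    }

# Three operand arrays homed at different mesh points, on a 2x2 mesh:
locs = [(x, y) for x in range(2) for y in range(2)]
tree = ("op", ("leaf", (0, 0)), ("op", ("leaf", (0, 1)), ("leaf", (1, 1))))
costs = min_eval_cost(tree, locs)
best = min(costs, key=costs.get)   # cheapest place to leave the final result
```

Each subexpression's cost table is computed once, so the work is linear in the tree size times quadratic in the number of candidate locations; restricting candidate sites is one of the refinements a practical algorithm would need.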

  10. Software Architecture for Autonomous Spacecraft

    NASA Technical Reports Server (NTRS)

    Shih, Jimmy S.

    1997-01-01

    The thesis objective is to design an autonomous spacecraft architecture to perform both deliberative and reactive behaviors. The Autonomous Small Planet In-Situ Reaction to Events (ASPIRE) project uses the architecture to integrate several autonomous technologies for a comet orbiter mission.

  11. Dynamic Weather Routes Architecture Overview

    NASA Technical Reports Server (NTRS)

    Eslami, Hassan; Eshow, Michelle

    2014-01-01

    This overview presents the high-level software architecture of Dynamic Weather Routes (DWR), based on the CTAS software framework and the Direct-To automation tool. The document also covers external and internal data flows, required datasets, changes to the Direct-To software for DWR, collection of software statistics, and the code structure.

  12. Perspectives on Architecture and Children.

    ERIC Educational Resources Information Center

    Taylor, Anne

    1989-01-01

    Describes a new system for teaching architectural education known as Architectural Design Education. States that this system, developed by Anne Taylor and George Vlastos, introduces students to the problem solving process, integrates creative activities with traditional disciplines, and enhances students' and teachers' ability to relate to their…

  13. Dataflow architecture for machine control

    SciTech Connect

    Lent, B.

    1989-01-01

    The author describes how to implement the latest control strategies using state-of-the-art control technology and computing principles. The book provides all the basic definitions, taxonomy, and analysis of currently used architectures, including microprocessor communication schemes, and describes in detail the analysis and implementation of the selected OR dataflow-driven architecture in a grinding machine control system.

  14. Interior Design in Architectural Education

    ERIC Educational Resources Information Center

    Gurel, Meltem O.; Potthoff, Joy K.

    2006-01-01

    The domain of interiors constitutes a point of tension between practicing architects and interior designers. Design of interior spaces is a significant part of architectural profession. Yet, to what extent does architectural education keep pace with changing demands in rendering topics that are identified as pertinent to the design of interiors?…

  15. Systems Architecture for Fully Autonomous Space Missions

    NASA Technical Reports Server (NTRS)

    Esper, Jamie; Schnurr, R.; VanSteenberg, M.; Brumfield, Mark (Technical Monitor)

    2002-01-01

    The NASA Goddard Space Flight Center is working to develop a revolutionary new system architecture concept in support of fully autonomous missions. As part of GSFC's contribution to the New Millennium Program (NMP) Space Technology 7 Autonomy and on-Board Processing (ST7-A) Concept Definition Study, the system incorporates the latest commercial Internet and software development ideas and extends them into NASA ground and space segment architectures. The unique challenges facing the exploration of remote and inaccessible locales, and the need to incorporate the corresponding autonomy technologies at reasonable cost, necessitate the rethinking of traditional mission architectures. A measure of the resiliency of this architecture in its application to a broad range of future autonomy missions will be its effectiveness in leveraging commercial tools developed for the personal computer and Internet markets. Specialized test stations and supporting software become things of the past as spacecraft take advantage of the extensive tools and research investments of billion-dollar commercial ventures. The projected improvements of the Internet and supporting infrastructure go hand in hand with market pressures that provide continuity in research. By taking advantage of consumer-oriented methods and processes, space-flight missions will continue to leverage investments tailored to provide better services at reduced cost. The application of ground and space segment architectures each based on Local Area Networks (LAN), the use of personal computer-based operating systems, and the execution of activities and operations through a Wide Area Network (Internet) enable a revolution in spacecraft mission formulation, implementation, and flight operations. Hardware and software design, development, integration, test, and flight operations are all tied closely to a common thread that enables smooth transitioning between program phases. The application of commercial software

  16. Towards the Architecture of an Instructional Multimedia Database.

    ERIC Educational Resources Information Center

    Verhagen, Plin W.; Bestebreurtje, R.

    1994-01-01

    Discussion of multimedia databases in education focuses on the development of an adaptable database in The Netherlands that uses optical storage media to hold the audiovisual components. Highlights include types of applications; types of users; accessibility; adaptation; an object-oriented approach; levels of the database architecture; and…

  17. Computer Architecture. (Latest Citations from the Aerospace Database)

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The bibliography contains citations concerning research and development in the field of computer architecture. Design of computer systems, microcomputer components, and digital networks are among the topics discussed. Multimicroprocessor system performance, software development, and aerospace avionics applications are also included. (Contains 50-250 citations and includes a subject term index and title list.)

  18. Information Architecture without Internal Theory: An Inductive Design Process.

    ERIC Educational Resources Information Center

    Haverty, Marsha

    2002-01-01

    Suggests that information architecture design is primarily an inductive process, partly because it lacks internal theory and partly because it is an activity that supports emergent phenomena (user experiences) from basic design components. Suggests a resemblance to Constructive Induction, a design process that locates the best representational…

  19. The Contribution of Visualization to Learning Computer Architecture

    ERIC Educational Resources Information Center

    Yehezkel, Cecile; Ben-Ari, Mordechai; Dreyfus, Tommy

    2007-01-01

    This paper describes a visualization environment and associated learning activities designed to improve learning of computer architecture. The environment, EasyCPU, displays a model of the components of a computer and the dynamic processes involved in program execution. We present the results of a research program that analysed the contribution of…

  20. Architecture Governance: The Importance of Architecture Governance for Achieving Operationally Responsive Ground Systems

    NASA Technical Reports Server (NTRS)

    Kolar, Mike; Estefan, Jeff; Giovannoni, Brian; Barkley, Erik

    2011-01-01

    Topics covered: (1) Why Governance and Why Now? (2) Characteristics of Architecture Governance (3) Strategic Elements (3a) Architectural Principles (3b) Architecture Board (3c) Architecture Compliance (4) Architecture Governance Infusion Process. Governance is concerned with decision making (i.e., setting directions, establishing standards and principles, and prioritizing investments). Architecture governance is the practice and orientation by which enterprise architectures and other architectures are managed and controlled at an enterprise-wide level.