Sample records for common component architecture

  1. Emergence of a Common Modeling Architecture for Earth System Science (Invited)

    NASA Astrophysics Data System (ADS)

    Deluca, C.

    2010-12-01

    Common modeling architecture can be viewed as a natural outcome of common modeling infrastructure. The development of model utility and coupling packages (ESMF, MCT, OpenMI, etc.) over the last decade represents the realization of a community vision for common model infrastructure. The adoption of these packages has led to increased technical communication among modeling centers and newly coupled modeling systems. However, adoption has also exposed aspects of interoperability that must be addressed before easy exchange of model components among different groups can be achieved. These aspects include common physical architecture (how a model is divided into components) and model metadata and usage conventions. The National Unified Operational Prediction Capability (NUOPC), an operational weather prediction consortium, is collaborating with weather and climate researchers to define a common model architecture that encompasses these advanced aspects of interoperability and looks to future needs. The nature and structure of the emergent common modeling architecture will be discussed along with its implications for future model development.

  2. Security Aspects of an Enterprise-Wide Network Architecture.

    ERIC Educational Resources Information Center

    Loew, Robert; Stengel, Ingo; Bleimann, Udo; McDonald, Aidan

    1999-01-01

    Presents an overview of two projects that concern local area networks and the common point between networks as they relate to network security. Discusses security architectures based on firewall components, packet filters, application gateways, security-management components, an intranet solution, user registration by Web form, and requests for…

  3. A component-based problem list subsystem for the HOLON testbed. Health Object Library Online.

    PubMed Central

    Law, V.; Goldberg, H. S.; Jones, P.; Safran, C.

    1998-01-01

    One of the deliverables of the HOLON (Health Object Library Online) project is the specification of a reference architecture for clinical information systems that facilitates the development of a variety of discrete, reusable software components. One of the challenges facing the HOLON consortium is determining what kinds of components can be made available in a library for developers of clinical information systems. To further explore the use of component architectures in the development of reusable clinical subsystems, we have incorporated ongoing work in the development of enterprise terminology services into a Problem List subsystem for the HOLON testbed. We have successfully implemented a set of components using CORBA (Common Object Request Broker Architecture) and Java distributed object technologies that provide a functional problem list application and UMLS-based "Problem Picker." Through this development, we have overcome a variety of obstacles characteristic of rapidly emerging technologies, and have identified architectural issues necessary to scale these components for use and reuse within an enterprise clinical information system. PMID:9929252

  4. A component-based problem list subsystem for the HOLON testbed. Health Object Library Online.

    PubMed

    Law, V; Goldberg, H S; Jones, P; Safran, C

    1998-01-01

    One of the deliverables of the HOLON (Health Object Library Online) project is the specification of a reference architecture for clinical information systems that facilitates the development of a variety of discrete, reusable software components. One of the challenges facing the HOLON consortium is determining what kinds of components can be made available in a library for developers of clinical information systems. To further explore the use of component architectures in the development of reusable clinical subsystems, we have incorporated ongoing work in the development of enterprise terminology services into a Problem List subsystem for the HOLON testbed. We have successfully implemented a set of components using CORBA (Common Object Request Broker Architecture) and Java distributed object technologies that provide a functional problem list application and UMLS-based "Problem Picker." Through this development, we have overcome a variety of obstacles characteristic of rapidly emerging technologies, and have identified architectural issues necessary to scale these components for use and reuse within an enterprise clinical information system.

  5. Hardware Architecture Study for NASA's Space Software Defined Radios

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Scardelletti, Maximilian C.; Mortensen, Dale J.; Kacpura, Thomas J.; Andro, Monty; Smith, Carl; Liebetreu, John

    2008-01-01

    This study defines a hardware architecture approach for software defined radios to enable commonality among NASA space missions. The architecture accommodates a range of reconfigurable processing technologies including general purpose processors, digital signal processors, field programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs), in addition to flexible and tunable radio frequency (RF) front-ends, to satisfy varying mission requirements. The hardware architecture consists of modules, radio functions, and interfaces. The modules are a logical division of common radio functions that comprise a typical communication radio. This paper describes the architecture details, module definitions, and the typical functions on each module as well as the module interfaces. Trade-offs between a component-based, custom architecture and a functional-based, open architecture are described. The architecture does not specify the internal physical implementation within each module, nor does it mandate the standards or ratings of the hardware used to construct the radios.

  6. Space Telecommunications Radio Systems (STRS) Hardware Architecture Standard: Release 1.0 Hardware Section

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Kacpura, Thomas J.; Smith, Carl R.; Liebetreu, John; Hill, Gary; Mortensen, Dale J.; Andro, Monty; Scardelletti, Maximilian C.; Farrington, Allen

    2008-01-01

    This report defines a hardware architecture approach for software-defined radios to enable commonality among NASA space missions. The architecture accommodates a range of reconfigurable processing technologies including general-purpose processors, digital signal processors, field programmable gate arrays, and application-specific integrated circuits (ASICs) in addition to flexible and tunable radiofrequency front ends to satisfy varying mission requirements. The hardware architecture consists of modules, radio functions, and interfaces. The modules are a logical division of common radio functions that compose a typical communication radio. This report describes the architecture details, the module definitions, the typical functions on each module, and the module interfaces. Tradeoffs between component-based, custom architecture and a functional-based, open architecture are described. The architecture does not specify a physical implementation internally on each module, nor does the architecture mandate the standards or ratings of the hardware used to construct the radios.

  7. Communication architecture for AAL. Supporting patient care by health care providers in AAL-enhanced living quarters.

    PubMed

    Nitzsche, T; Thiele, S; Häber, A; Winter, A

    2014-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Using Data from Ambient Assisted Living and Smart Homes in Electronic Health Records". Concepts of Ambient Assisted Living (AAL) support long-term health monitoring and further medical and other services for multi-morbid patients with chronic diseases. Many AAL and telemedical applications exist in Germany, but synergy effects through common agreements on essential application components and standards have not been achieved. It is therefore necessary to define a communication architecture based on common definitions of communication scenarios, application components, and communication standards. Developing such an architecture requires several steps: to obtain a reference model for the problem area, different AAL and telemedicine projects were compared and the relevant data elements were generalized. The derived reference model defines standardized communication links. As a result, the authors present an approach towards a reference architecture for AAL communication. The focus of the architecture lies on the communication layer. The necessary application components are identified, and communication based on standards and their extensions is highlighted. The architecture enables the exchange of patient-specific events (supported by an event classification model) and of raw and aggregated data from the personal home area, via a telemedicine center, to health care providers.

  8. A conceptual model for megaprogramming

    NASA Technical Reports Server (NTRS)

    Tracz, Will

    1990-01-01

    Megaprogramming is component-based software engineering and life-cycle management. Megaprogramming and its relationship to other research initiatives (common prototyping system/common prototyping language, domain-specific software architectures, and software understanding) are analyzed. The desirable attributes of megaprogramming software components are identified, and a software development model and resulting prototype megaprogramming system (library interconnection language extended by annotated Ada) are described.

  9. Teaching Software Componentization: A Bar Chart Java Bean

    ERIC Educational Resources Information Center

    Mitri, Michel

    2010-01-01

    In the current object-oriented paradigm, software construction increasingly involves creating and utilizing "software components". These components can serve a variety of functions, from common algorithmic processes to database connectivity to graphical interfaces. The advantage of component architectures is that programmers can use pre-existing…

  10. A Reference Architecture for Space Information Management

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Crichton, Daniel J.; Hughes, J. Steven; Ramirez, Paul M.; Berrios, Daniel C.

    2006-01-01

    We describe a reference architecture for space information management systems that elegantly overcomes the rigid design of common information systems in many domains. The reference architecture consists of a set of flexible, reusable, independent models and software components that function in unison, but remain separately managed entities. The main guiding principle of the reference architecture is to separate the various models of information (e.g., data, metadata, etc.) from implemented system code, allowing each to evolve independently. System modularity, systems interoperability, and dynamic evolution of information system components are the primary benefits of the design of the architecture. The architecture requires the use of information models that are substantially more advanced than those used by the vast majority of information systems. These models are more expressive and can be more easily modularized, distributed, and maintained than simpler models, e.g., configuration files and data dictionaries. Our current work focuses on formalizing the architecture within a CCSDS Green Book and evaluating the architecture within the context of the C3I initiative.

  11. Architecture-Led Safety Analysis of the Joint Multi-Role (JMR) Joint Common Architecture (JCA) Demonstration System

    DTIC Science & Technology

    2015-12-01

    relevant system components (i.e., their component type declarations) have been annotated with EMV2 error source or propagation declarations and hazard...contributors. They are recorded as EMV2 annotations for each of the ASSA. Figure 40 shows a sampling of potential hazard contributors by the functional...2012] Leveson, N., Engineering a Safer World. MIT Press. 2012. [Parnas 1991] Parnas, D. & Madey, J. Functional Documentation for Computer Systems

  12. CERN's Common Unix and X Terminal Environment

    NASA Astrophysics Data System (ADS)

    Cass, Tony

    The Desktop Infrastructure Group of CERN's Computing and Networks Division has developed a Common Unix and X Terminal Environment to ease the migration to Unix-based interactive computing. The CUTE architecture relies on a distributed filesystem—currently Transarc's AFS—to enable essentially interchangeable client workstations to access both "home directory" and program files transparently. Additionally, we provide a suite of programs to configure workstations for CUTE and to ensure continued compatibility. This paper describes the different components and the development of the CUTE architecture.

  13. A Multi-Purpose Modular Electronics Integration Node for Exploration Extravehicular Activity

    NASA Technical Reports Server (NTRS)

    Hodgson, Edward; Papale, William; Wichowski, Robert; Rosenbush, David; Hawes, Kevin; Stankiewicz, Tom

    2013-01-01

    As NASA works to develop an effective integrated portable life support system design for exploration extravehicular activity (EVA), alternatives to the current system's electrical power and control architecture are needed to support new requirements for flexibility, maintainability, reliability, and reduced mass and volume. Experience with the current Extravehicular Mobility Unit (EMU) has demonstrated that the current architecture, based on a central power supply, monitoring, and control unit with dedicated analog wiring harness connections to active components in the system, has a significant impact on system packaging and seriously constrains design flexibility in adapting to component obsolescence and changing system needs over time. An alternative architecture based on the use of a digital data bus offers possible wiring harness and system power savings, but risks significant penalties in component complexity and cost. A hybrid architecture that relies on a set of electronic and power interface nodes serving functional models within the Portable Life Support System (PLSS) is proposed to minimize both packaging and component-level penalties. A common interface node hardware design can further reduce penalties by reducing the nonrecurring development costs, making miniaturization more practical, maximizing opportunities for maturation and reliability growth, providing enhanced fault tolerance, and providing stable design interfaces for system components and a central control. Adaptation to varying specific module requirements can be achieved with modest changes in firmware code within the module. A preliminary design effort has developed a common set of hardware interface requirements and functional capabilities for such a node based on anticipated modules comprising an exploration PLSS, and a prototype node has been designed, assembled, programmed, and tested. One instance of such a node has been adapted to support testing the swingbed carbon dioxide and humidity control element in NASA's advanced PLSS 2.0 test article. This paper will describe the common interface node design concept, results of the prototype development and test effort, and plans for use in NASA PLSS 2.0 integrated tests.

  14. Modeling and Analysis of Mixed Synchronous/Asynchronous Systems

    NASA Technical Reports Server (NTRS)

    Driscoll, Kevin R.; Madl, Gabor; Hall, Brendan

    2012-01-01

    Practical safety-critical distributed systems must integrate safety-critical and non-critical data in a common platform. Safety-critical systems almost always consist of isochronous components that have synchronous or asynchronous interfaces with other components. Many of these systems also support a mix of synchronous and asynchronous interfaces. This report presents a study on the modeling and analysis of asynchronous, synchronous, and mixed synchronous/asynchronous systems. We build on the SAE Architecture Analysis and Design Language (AADL) to capture architectures for analysis. We present preliminary work targeted at capturing mixed low- and high-criticality data, as well as real-time properties, in a common Model of Computation (MoC). An abstract, but representative, test specimen system was created as the system to be modeled.

  15. Information architecture for a planetary 'exploration web'

    NASA Technical Reports Server (NTRS)

    Lamarra, N.; McVittie, T.

    2002-01-01

    'Web services' is a common way of deploying distributed applications whose software components and data sources may be in different locations, formats, languages, etc. Although such collaboration is not utilized significantly in planetary exploration, we believe there is significant benefit in developing an architecture in which missions could leverage each other's capabilities. We believe that an incremental deployment of such an architecture could significantly contribute to the evolution of increasingly capable, efficient, and even autonomous remote exploration.

  16. The Software Architecture of Global Climate Models

    NASA Astrophysics Data System (ADS)

    Alexander, K. A.; Easterbrook, S. M.

    2011-12-01

    It has become common to compare and contrast the output of multiple global climate models (GCMs), such as in the Climate Model Intercomparison Project Phase 5 (CMIP5). However, intercomparisons of the software architecture of GCMs are almost nonexistent. In this qualitative study of seven GCMs from Canada, the United States, and Europe, we attempt to fill this gap in research. We describe the various representations of the climate system as computer programs, and account for architectural differences between models. Most GCMs now practice component-based software engineering, where Earth system components (such as the atmosphere or land surface) are present as highly encapsulated sub-models. This architecture facilitates a mix-and-match approach to climate modelling that allows for convenient sharing of model components between institutions, but it also leads to difficulty when choosing where to draw the lines between systems that are not encapsulated in the real world, such as sea ice. We also examine different styles of couplers in GCMs, which manage interaction and data flow between components. Finally, we pay particular attention to the varying levels of complexity in GCMs, both between and within models. Many GCMs have some components that are significantly more complex than others, a phenomenon which can be explained by the respective institution's research goals as well as the origin of the model components. In conclusion, although some features of software architecture have been adopted by every GCM we examined, other features show a wide range of different design choices and strategies. These architectural differences may provide new insights into variability and spread between models.

  17. Statistics of Shared Components in Complex Component Systems

    NASA Astrophysics Data System (ADS)

    Mazzolini, Andrea; Gherardi, Marco; Caselle, Michele; Cosentino Lagomarsino, Marco; Osella, Matteo

    2018-04-01

    Many complex systems are modular. Such systems can be represented as "component systems," i.e., sets of elementary components, such as LEGO bricks in LEGO sets. The bricks found in a LEGO set reflect a target architecture, which can be built following a set-specific list of instructions. In other component systems, instead, the underlying functional design and constraints are not obvious a priori, and their detection is often a challenge of both scientific and practical importance, requiring a clear understanding of component statistics. Importantly, some quantitative invariants appear to be common to many component systems, most notably a common broad distribution of component abundances, which often resembles the well-known Zipf's law. Such "laws" affect in a general and nontrivial way the component statistics, potentially hindering the identification of system-specific functional constraints or generative processes. Here, we specifically focus on the statistics of shared components, i.e., the distribution of the number of components shared by different system realizations, such as the common bricks found in different LEGO sets. To account for the effects of component heterogeneity, we consider a simple null model, which builds system realizations by random draws from a universe of possible components. Under general assumptions on abundance heterogeneity, we provide analytical estimates of component occurrence, which quantify exhaustively the statistics of shared components. Surprisingly, this simple null model can positively explain important features of empirical component-occurrence distributions obtained from large-scale data on bacterial genomes, LEGO sets, and book chapters. Specific architectural features and functional constraints can be detected from occurrence patterns as deviations from these null predictions, as we show for the illustrative case of the "core" genome in bacteria.
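
    To make the null model concrete, here is a minimal formulation in notation of our own (an illustrative sketch, not necessarily the authors' exact derivation): if a realization of size s is assembled by s independent draws from a universe in which component i is selected with probability p_i, then the probability that component i appears at least once, and its expected occurrence across R realizations of sizes s_1, ..., s_R, are

    ```latex
    P_i(s) = 1 - (1 - p_i)^{s},
    \qquad
    \langle o_i \rangle = \frac{1}{R} \sum_{r=1}^{R} \left[ 1 - (1 - p_i)^{s_r} \right].
    ```

    Heterogeneous selection probabilities p_i alone can therefore produce a broad distribution of sharing across realizations, which is why deviations from such null predictions, rather than raw occurrence counts, are the informative signal of functional constraints.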

  18. Deciphering structural and temporal interplays during the architectural development of mango trees.

    PubMed

    Dambreville, Anaëlle; Lauri, Pierre-Éric; Trottier, Catherine; Guédon, Yann; Normand, Frédéric

    2013-05-01

    Plant architecture is commonly defined by the adjacency of organs within the structure and their properties. Few studies consider the effect of endogenous temporal factors, namely phenological factors, on the establishment of plant architecture. This study hypothesized that, in addition to the effect of environmental factors, the observed plant architecture results from both endogenous structural and temporal components, and their interplays. Mango tree, which is characterized by strong phenological asynchronisms within and between trees and by repeated vegetative and reproductive flushes during a growing cycle, was chosen as a plant model. During two consecutive growing cycles, this study described vegetative and reproductive development of 20 trees submitted to the same environmental conditions. Four mango cultivars were considered to assess possible cultivar-specific patterns. Integrative vegetative and reproductive development models incorporating generalized linear models as components were built. These models described the occurrence, intensity, and timing of vegetative and reproductive development at the growth unit scale. This study showed significant interplays between structural and temporal components of plant architectural development at two temporal scales. Within a growing cycle, earliness of bud burst was highly and positively related to earliness of vegetative development and flowering. Between growing cycles, flowering growth units delayed vegetative development compared to growth units that did not flower. These interplays explained how vegetative and reproductive phenological asynchronisms within and between trees were generated and maintained. It is suggested that causation networks involving structural and temporal components may give rise to contrasted tree architectures.

  19. Evolution of Bow-Tie Architectures in Biology

    PubMed Central

    Friedlander, Tamar; Mayo, Avraham E.; Tlusty, Tsvi; Alon, Uri

    2015-01-01

    Bow-tie or hourglass structure is a common architectural feature found in many biological systems. A bow-tie in a multi-layered structure occurs when intermediate layers have much fewer components than the input and output layers. Examples include metabolism where a handful of building blocks mediate between multiple input nutrients and multiple output biomass components, and signaling networks where information from numerous receptor types passes through a small set of signaling pathways to regulate multiple output genes. Little is known, however, about how bow-tie architectures evolve. Here, we address the evolution of bow-tie architectures using simulations of multi-layered systems evolving to fulfill a given input-output goal. We find that bow-ties spontaneously evolve when the information in the evolutionary goal can be compressed. Mathematically speaking, bow-ties evolve when the rank of the input-output matrix describing the evolutionary goal is deficient. The maximal compression possible (the rank of the goal) determines the size of the narrowest part of the network—that is the bow-tie. A further requirement is that a process is active to reduce the number of links in the network, such as product-rule mutations, otherwise a non-bow-tie solution is found in the evolutionary simulations. This offers a mechanism to understand a common architectural principle of biological systems, and a way to quantitate the effective rank of the goals under which they evolved. PMID:25798588
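
    As a hedged illustration of the rank argument (notation ours): if the evolutionary goal is a linear input-output map G with rank k, then G always admits a factorization through a k-dimensional intermediate layer,

    ```latex
    G \in \mathbb{R}^{m \times n}, \quad \operatorname{rank}(G) = k
    \;\;\Longrightarrow\;\;
    G = A B, \quad A \in \mathbb{R}^{m \times k}, \; B \in \mathbb{R}^{k \times n}.
    ```

    A multilayer network computing G therefore needs no more than k units at its narrowest point; when k < min(m, n) the goal is compressible and a bow-tie with a waist of size k suffices, which is the sense in which the rank of the goal sets the size of the narrowest part of the evolved network.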

  20. Framework for a clinical information system.

    PubMed

    Van De Velde, R; Lansiers, R; Antonissen, G

    2002-01-01

    The design and implementation of Clinical Information System architecture is presented. This architecture has been developed and implemented based on components following a strong underlying conceptual and technological model. Common Object Request Broker and n-tier technology featuring centralised and departmental clinical information systems as the back-end store for all clinical data are used. Servers located in the "middle" tier apply the clinical (business) model and application rules. The main characteristics are the focus on modelling and reuse of both data and business logic. Scalability as well as adaptability to constantly changing requirements via component driven computing are the main reasons for that approach.

  1. Reference Avionics Architecture for Lunar Surface Systems

    NASA Technical Reports Server (NTRS)

    Somervill, Kevin M.; Lapin, Jonathan C.; Schmidt, Oron L.

    2010-01-01

    Developing and delivering infrastructure capable of supporting long-term manned operations on the lunar surface has been a primary objective of the Constellation Program in the Exploration Systems Mission Directorate. Several concepts have been developed related to the development and deployment of lunar exploration vehicles and assets that provide critical functionality such as transportation, habitation, and communication, to name a few. Together, these systems perform complex safety-critical functions, largely dependent on avionics for control and behavior of system functions. These functions are implemented using interchangeable, modular avionics designed for lunar transit and lunar surface deployment. Systems are optimized towards reuse and commonality of form and interface and can be configured via software or component integration for special-purpose applications. There are two core concepts in the reference avionics architecture described in this report. The first concept uses distributed, smart systems to manage complexity, simplify integration, and facilitate commonality. The second core concept is to employ extensive commonality between elements and subsystems. These two concepts are used in the context of developing reference designs for many lunar surface exploration vehicles and elements, and they recur as architectural patterns in a conceptual architectural framework. This report describes the use of these architectural patterns in a reference avionics architecture for lunar surface systems elements.

  2. Component-based integration of chemistry and optimization software.

    PubMed

    Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L

    2004-11-15

    Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.
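
    As an illustrative sketch only (hypothetical names, not the actual CCA, NWChem, or TAO APIs), the key idea is that chemistry components expose a small abstract interface for energy and derivative evaluation, so an optimizer component written against that interface can drive any conforming chemistry package:

    ```python
    from abc import ABC, abstractmethod
    from typing import List


    class ModelEvaluator(ABC):
        """Abstract interface a chemistry component might expose (hypothetical)."""

        @abstractmethod
        def energy(self, coords: List[float]) -> float: ...

        @abstractmethod
        def gradient(self, coords: List[float]) -> List[float]: ...


    class QuadraticSurrogate(ModelEvaluator):
        """Stand-in 'chemistry' component: a simple quadratic energy surface."""

        def energy(self, coords):
            return sum(x * x for x in coords)

        def gradient(self, coords):
            return [2.0 * x for x in coords]


    def steepest_descent(model: ModelEvaluator, coords, step=0.1, iters=100):
        """Optimizer component: depends only on the abstract interface above."""
        for _ in range(iters):
            g = model.gradient(coords)
            coords = [x - step * gi for x, gi in zip(coords, g)]
        return coords, model.energy(coords)


    if __name__ == "__main__":
        print(steepest_descent(QuadraticSurrogate(), [1.0, -2.0, 0.5]))
    ```

    Swapping in a different ModelEvaluator implementation, for example one wrapping a real electronic-structure code, leaves the optimizer untouched; that interchangeability is what the abstract interfaces described in the record provide.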

  3. Evolutionary evidence of the effect of rare variants on disease etiology.

    PubMed

    Gorlov, I P; Gorlova, O Y; Frazier, M L; Spitz, M R; Amos, C I

    2011-03-01

    The common disease/common variant hypothesis has been popular for describing the genetic architecture of common human diseases for several years. According to the originally stated hypothesis, one or a few common genetic variants with a large effect size control the risk of common diseases. A growing body of evidence, however, suggests that rare single-nucleotide polymorphisms (SNPs), i.e. those with a minor allele frequency of less than 5%, are also an important component of the genetic architecture of common human diseases. In this study, we analyzed the relevance of rare SNPs to the risk of common diseases from an evolutionary perspective and found that rare SNPs are more likely than common SNPs to be functional and tend to have a stronger effect size than do common SNPs. This observation, and the fact that most of the SNPs in the human genome are rare, suggests that rare SNPs are a crucial element of the genetic architecture of common human diseases. We propose that the next generation of genomic studies should focus on analyzing rare SNPs. Further, targeting patients with a family history of the disease, an extreme phenotype, or early disease onset may facilitate the detection of risk-associated rare SNPs. © 2010 John Wiley & Sons A/S.

  4. Data Acquisition System Architecture and Capabilities At NASA GRC Plum Brook Station's Space Environment Test Facilities

    NASA Technical Reports Server (NTRS)

    Evans, Richard K.; Hill, Gerald M.

    2012-01-01

    Very large space environment test facilities present unique engineering challenges in the design of facility data systems. Data systems of this scale must be versatile enough to meet the wide range of data acquisition and measurement requirements from a diverse set of customers and test programs, but also must minimize design changes to maintain reliability and serviceability. This paper presents an overview of the common architecture and capabilities of the facility data acquisition systems available at two of the world's largest space environment test facilities located at the NASA Glenn Research Center's Plum Brook Station in Sandusky, Ohio; namely, the Space Propulsion Research Facility (commonly known as the B-2 facility) and the Space Power Facility (SPF). The common architecture of the data systems is presented along with details on system scalability and efficient measurement systems analysis and verification. The architecture highlights a modular design, which utilizes fully-remotely managed components, enabling the data systems to be highly configurable and support multiple test locations with a wide range of measurement types and very large system channel counts.

  5. Data Acquisition System Architecture and Capabilities at NASA GRC Plum Brook Station's Space Environment Test Facilities

    NASA Technical Reports Server (NTRS)

    Evans, Richard K.; Hill, Gerald M.

    2014-01-01

    Very large space environment test facilities present unique engineering challenges in the design of facility data systems. Data systems of this scale must be versatile enough to meet the wide range of data acquisition and measurement requirements from a diverse set of customers and test programs, but also must minimize design changes to maintain reliability and serviceability. This paper presents an overview of the common architecture and capabilities of the facility data acquisition systems available at two of the world's largest space environment test facilities located at the NASA Glenn Research Center's Plum Brook Station in Sandusky, Ohio; namely, the Space Propulsion Research Facility (commonly known as the B-2 facility) and the Space Power Facility (SPF). The common architecture of the data systems is presented along with details on system scalability and efficient measurement systems analysis and verification. The architecture highlights a modular design, which utilizes fully-remotely managed components, enabling the data systems to be highly configurable and support multiple test locations with a wide range of measurement types and very large system channel counts.

  6. Achieving AFRL Universal FADEC Vision With Open Architecture Addressing Capability and Obsolescence for Military and Commercial Applications (Preprint)

    DTIC Science & Technology

    2006-11-01

    engines will involve a family of common components. It will consist of a real-time operating system and partitioned application software (AS...system will employ a standard hardware and software architecture. It will consist of a real-time operating system and partitioned application...Inputs - Enables Large Cost Reduction 3. Software - FAA Certified Auto Code - Real-Time Operating System - Commercial

  7. Architectural and functional commonalities between enhancers and promoters

    PubMed Central

    Kim, Tae-Kyung; Shiekhattar, Ramin

    2015-01-01

    With the explosion of genome-wide studies of regulated transcription, it has become clear that traditional definitions of enhancers and promoters need to be revisited. These control elements can now be characterized in terms of their local and regional architecture, their regulatory components including histone modifications and associated binding factors and their functional contribution to transcription. This review discusses unifying themes between promoters and enhancers in transcriptional regulatory mechanisms. PMID:26317464

  8. A Proven Ground System Architecture for Promoting Collaboration and Common Solutions at NASA

    NASA Technical Reports Server (NTRS)

    Smith, Danford

    2005-01-01

    Requirement: improve how NASA develops and maintains ground data systems for dozens of missions, with a couple of new missions always in the development phase. A decision was made in 2001 to adopt an enhanced message-bus architecture. Users are offered choices for major components; the components plug and play because the key interfaces are all the same. The architecture can support COTS, heritage, and new software, and even the middleware can be switched. Project name: GMSEC, the Goddard Mission Services Evolution Center.
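
    The plug-and-play claim rests on components agreeing on message subjects and formats rather than on point-to-point interfaces. A toy publish/subscribe sketch of that idea (generic Python, not the actual GMSEC API; subject names are made up):

    ```python
    from collections import defaultdict
    from typing import Callable, Dict, List


    class MessageBus:
        """Minimal in-process message bus: components interact only via subjects."""

        def __init__(self) -> None:
            self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

        def subscribe(self, subject: str, handler: Callable[[dict], None]) -> None:
            self._subscribers[subject].append(handler)

        def publish(self, subject: str, message: dict) -> None:
            for handler in self._subscribers[subject]:
                handler(message)


    if __name__ == "__main__":
        bus = MessageBus()
        # An archiving component and a display component both consume telemetry;
        # either can be replaced without the publisher knowing or changing.
        bus.subscribe("TLM.POWER", lambda m: print("archive:", m))
        bus.subscribe("TLM.POWER", lambda m: print("display:", m["voltage"]))
        bus.publish("TLM.POWER", {"voltage": 28.1, "current": 1.7})
    ```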

  9. SEL Ada reuse analysis and representations

    NASA Technical Reports Server (NTRS)

    Kester, Rush

    1990-01-01

    Overall, it was revealed that the pattern of Ada reuse has evolved from initial reuse of utility components into reuse of generalized application architectures. Utility components were both domain-independent utilities, such as queues and stacks, and domain-specific utilities, such as those that implement spacecraft orbit and attitude mathematical functions and physics or astronomical models. The level of reuse was significantly increased with the development of a generalized telemetry simulator architecture. The use of Ada generics significantly increased the level of verbatim reuse, owing to the ability, using Ada generics, to parameterize the aspects of design that are configurable during reuse. A key factor in implementing generalized architectures was the ability to use generic subprogram parameters to tailor parts of the algorithm embedded within the architecture. The use of object-oriented design (in which objects model real-world entities) significantly improved modularity for reuse. Encapsulating into packages the data and operations associated with common real-world entities creates natural building blocks for reuse.
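
    For readers unfamiliar with generic subprogram parameters, a rough Python analogue (illustrative only; the SEL work described above was in Ada, and these names are invented) of tailoring the configurable part of a generalized architecture by passing in a routine:

    ```python
    from typing import Callable, Iterable, List


    def make_telemetry_simulator(
        encode_frame: Callable[[dict], bytes],
    ) -> Callable[[Iterable[dict]], List[bytes]]:
        """Generalized simulator skeleton; the frame-encoding step is the caller's
        to supply, analogous to an Ada generic subprogram parameter."""

        def simulate(samples: Iterable[dict]) -> List[bytes]:
            frames = []
            for sample in samples:
                # Fixed, reusable part of the algorithm ...
                sample = dict(sample, checksum=sum(sample.values()) & 0xFF)
                # ... with the mission-specific part supplied by the caller.
                frames.append(encode_frame(sample))
            return frames

        return simulate


    if __name__ == "__main__":
        csv_sim = make_telemetry_simulator(
            lambda s: ",".join(f"{k}={v}" for k, v in s.items()).encode()
        )
        print(csv_sim([{"rate": 3, "temp": 21}]))
    ```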

  10. Integrating security in a group oriented distributed system

    NASA Technical Reports Server (NTRS)

    Reiter, Michael; Birman, Kenneth; Gong, LI

    1992-01-01

    A distributed security architecture is proposed for incorporation into group oriented distributed systems, and in particular, into the Isis distributed programming toolkit. The primary goal of the architecture is to make common group oriented abstractions robust in hostile settings, in order to facilitate the construction of high performance distributed applications that can tolerate both component failures and malicious attacks. These abstractions include process groups and causal group multicast. Moreover, a delegation and access control scheme is proposed for use in group oriented systems. The focus is the security architecture; particular cryptosystems and key exchange protocols are not emphasized.

  11. Final Technical Report - Center for Technology for Advanced Scientific Component Software (TASCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sussman, Alan

    2014-10-21

    This is a final technical report for the University of Maryland work in the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS). The Maryland work focused on software tools for coupling parallel software components built using the Common Component Architecture (CCA) APIs. Those tools are based on the Maryland InterComm software framework that has been used in multiple computational science applications to build large-scale simulations of complex physical systems that employ multiple separately developed codes.

  12. Maximizing commonality between military and general aviation fly-by-light helicopter system designs

    NASA Astrophysics Data System (ADS)

    Enns, Russell; Mossman, David C.

    1995-05-01

    In the face of shrinking defense budgets, survival of the United States rotorcraft industry is becoming increasingly dependent on increased sales in a highly competitive civil helicopter market. As a result, only the most competitive rotorcraft manufacturers are likely to survive. A key ingredient in improving our competitive position is the ability to produce more versatile, high performance, high quality, and low cost of ownership helicopters. Fiber optic technology offers a path of achieving these objectives. Also, adopting common components and architectures for different helicopter models (while maintaining each model's uniqueness) will further decrease design and production costs. Funds saved (or generated) by exploiting this commonality can be applied to R&D used to further improve the product. In this paper, we define a fiber optics based avionics architecture which provides the pilot a fly-by-light / digital flight control system which can be implemented in both civilian and military helicopters. We then discuss the advantages of such an architecture.

  13. Advanced and secure architectural EHR approaches.

    PubMed

    Blobel, Bernd

    2006-01-01

    Electronic Health Records (EHRs) provided as a lifelong patient record advance towards core applications of distributed and co-operating health information systems and health networks. For meeting the challenge of scalable, flexible, portable, secure EHR systems, the underlying EHR architecture must be based on the component paradigm and model driven, separating platform-independent and platform-specific models. To allow manageable models, real systems must be decomposed and simplified. The resulting modelling approach has to follow the ISO Reference Model - Open Distributed Processing (RM-ODP). The ISO RM-ODP describes any system component from different perspectives. Platform-independent perspectives contain the enterprise view (business process, policies, scenarios, use cases), the information view (classes and associations) and the computational view (composition and decomposition), whereas platform-specific perspectives concern the engineering view (physical distribution and realisation) and the technology view (implementation details from protocols up to education and training) on system components. Those views have to be established for components reflecting aspects of all domains involved in healthcare environments, including administrative, legal, medical, technical, etc. Thus, security-related component models reflecting all views mentioned have to be established for enabling both application and communication security services as an integral part of the system's architecture. Besides decomposition and simplification of systems regarding the different viewpoints on their components, different levels of system granularity can be defined, hiding internals or focusing on properties of basic components to form a more complex structure. The resulting models describe both structure and behaviour of component-based systems. The described approach has been deployed in different projects defining EHR systems and their underlying architectural principles. In that context, the Australian GEHR project, the openEHR initiative, and the revision of CEN ENV 13606 "Electronic Health Record communication", all based on Archetypes, but also the HL7 version 3 activities are discussed in some detail. The latter include the HL7 RIM, the HL7 Development Framework, HL7's Clinical Document Architecture (CDA), as well as the set of models from use cases, activity diagrams and sequence diagrams up to Domain Information Models (DMIMs) and their building blocks, Common Message Element Types (CMETs), constraining models to their underlying concepts. The future-proof EHR architecture as an open, user-centric, user-friendly, flexible, scalable, portable core application in health information systems and health networks has to follow advanced architectural paradigms.

  14. Business Systems Modernization: Strategy for Evolving DOD’s Business Enterprise Architecture Offers a Conceptual Approach, but Execution Details are Needed

    DTIC Science & Technology

    2007-04-01

    [Figure labels: Enterprise Shared Services and System Capabilities; Enterprise Rules and Standards for Interoperability; Navy; AF; Army; TRANSCOM; DFAS; DLA] Where commonality among components exists, there are also opportunities for identifying and leveraging shared services. A service-oriented architecture...and (3) shared services. The BMA federation strategy, according to these officials, is the first mission area federation strategy, and it is their

  15. Conceptual Design and Analysis of Service Oriented Architecture (SOA) for Command and Control of Space Assets

    DTIC Science & Technology

    2010-12-01

    strategy “to establish a net-centric environment that increasingly leverages shared services and SOAs that are: Supported by…a single set of common...component services. As mentioned previously, this is an important characteristic of SOA. Also noteworthy is the set of shared services seen on the...transmit information products directly to the user(s). 6. Shared Services One of the key benefits of Service Oriented Architecture is the ability to

  16. The ALMA software architecture

    NASA Astrophysics Data System (ADS)

    Schwarz, Joseph; Farris, Allen; Sommer, Heiko

    2004-09-01

    The software for the Atacama Large Millimeter Array (ALMA) is being developed by many institutes on two continents. The software itself will function in a distributed environment, from the 0.5-14 km baselines that separate antennas to the larger distances that separate the array site at the Llano de Chajnantor in Chile from the operations and user support facilities in Chile, North America and Europe. Distributed development demands 1) interfaces that allow separated groups to work with minimal dependence on their counterparts at other locations; and 2) a common architecture to minimize duplication and ensure that developers can always perform similar tasks in a similar way. The Container/Component model provides a blueprint for the separation of functional from technical concerns: application developers concentrate on implementing functionality in Components, which depend on Containers to provide them with services such as access to remote resources, transparent serialization of entity objects to XML, logging, error handling and security. Early system integrations have verified that this architecture is sound and that developers can successfully exploit its features. The Containers and their services are provided by a system-oriented development team as part of the ALMA Common Software (ACS), middleware that is based on CORBA.
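
    A minimal sketch of the Container/Component split described above (hypothetical classes, not the actual ACS API): the container supplies technical services such as logging and error reporting, so the component body contains only functional logic.

    ```python
    import logging


    class ContainerServices:
        """Technical concerns the container provides to every component."""

        def __init__(self, name: str) -> None:
            self.logger = logging.getLogger(name)

        def report_error(self, exc: Exception) -> None:
            self.logger.error("component error: %s", exc)


    class Component:
        """Base class: components receive services instead of creating them."""

        def __init__(self, services: ContainerServices) -> None:
            self.services = services


    class AntennaMount(Component):
        """Functional code only; logging and error handling come from the container."""

        def point(self, az: float, el: float) -> None:
            if not 0.0 <= el <= 90.0:
                self.services.report_error(ValueError(f"bad elevation {el}"))
                return
            self.services.logger.info("pointing to az=%.1f el=%.1f", az, el)


    if __name__ == "__main__":
        logging.basicConfig(level=logging.INFO)
        mount = AntennaMount(ContainerServices("AntennaMount"))  # a container would do this
        mount.point(135.0, 42.0)
        mount.point(10.0, 120.0)
    ```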

  17. Importance of balanced architectures in the design of high-performance imaging systems

    NASA Astrophysics Data System (ADS)

    Sgro, Joseph A.; Stanton, Paul C.

    1999-03-01

    Imaging systems employed in demanding military and industrial applications, such as automatic target recognition and computer vision, typically require real-time high-performance computing resources. While high-performance computing systems have traditionally relied on proprietary architectures and custom components, recent advances in high-performance general-purpose microprocessor technology have produced an abundance of low-cost components suitable for use in high-performance computing systems. A common pitfall in the design of high-performance imaging systems, particularly systems employing scalable multiprocessor architectures, is the failure to balance computational and memory bandwidth. The performance of standard cluster designs, for example, in which several processors share a common memory bus, is typically constrained by memory bandwidth. The characteristic symptom of this problem is the failure of system performance to scale as more processors are added. The problem becomes exacerbated if I/O and memory functions share the same bus. The recent introduction of microprocessors with large internal caches and high-performance external memory interfaces makes it practical to design high-performance imaging systems with balanced computational and memory bandwidth. Real-world examples of such designs will be presented, along with a discussion of adapting algorithm design to best utilize available memory bandwidth.

  18. Performance measurement and modeling of component applications in a high performance computing environment : a case study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armstrong, Robert C.; Ray, Jaideep; Malony, A.

    2003-11-01

    We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and construct performance models for two of them. Both computational and message-passing performance are addressed.

  19. Evolution of System Architectures: Where Do We Need to Fail Next?

    NASA Astrophysics Data System (ADS)

    Bermudez, Luis; Alameh, Nadine; Percivall, George

    2013-04-01

    Innovation requires testing and failing. Thomas Edison was right when he said "I have not failed. I've just found 10,000 ways that won't work". For innovation and improvement of standards to happen, service architectures have to be tested and tested. Within the Open Geospatial Consortium (OGC), testing of service architectures has occurred for the last 15 years. This talk will present an evolution of these service architectures and a possible future path. OGC is a global forum for the collaboration of developers and users of spatial data products and services, and for the advancement and development of international standards for geospatial interoperability. The OGC Interoperability Program is a series of hands-on, fast-paced, engineering initiatives to accelerate the development and acceptance of OGC standards. Each initiative is organized in threads that provide focus under a particular theme. The first testbed, OGC Web Services phase 1, completed in 2003, had four threads: Common Architecture, Web Mapping, Sensor Web and Web Imagery Enablement. The Common Architecture was a cross-thread theme, to ensure that the Web Mapping and Sensor Web experiments built on a base common architecture. The architecture was based on the three main SOA components: Broker, Requestor and Provider. It proposed a general service model defining service interactions and dependencies; categorization of service types; registries to allow discovery and access of services; data models and encodings; and common services (WMS, WFS, WCS). For the latter, there was a clear distinction among the different services: Data Services (e.g. WMS), Application services (e.g. Coordinate transformation) and server-side client applications (e.g. image exploitation). The latest testbed, OGC Web Services phase 9, completed in 2012, had five threads: Aviation, Cross-Community Interoperability (CCI), Security and Services Interoperability (SSI), OWS Innovations and Compliance & Interoperability Testing & Evaluation (CITE). Compared to the first testbed, OWS-9 did not have a separate common architecture thread. Instead, the emphasis was on brokering information models, securing them and making data available efficiently on mobile devices. The outcome is an architecture based on usability and non-intrusiveness while leveraging mediation of information models from different communities. This talk will use lessons learned from the evolution from OGC Testbed phase 1 to phase 9 to better understand how global and complex infrastructures evolve to support many communities including the Earth System Science Community.

  20. Common variants explain a large fraction of the variability in the liability to psoriasis in a Han Chinese population.

    PubMed

    Yin, Xianyong; Wineinger, Nathan E; Cheng, Hui; Cui, Yong; Zhou, Fusheng; Zuo, Xianbo; Zheng, Xiaodong; Yang, Sen; Schork, Nicholas J; Zhang, Xuejun

    2014-01-30

    Psoriasis is a common inflammatory skin disease with a known genetic component. Our previously published psoriasis genome-wide association study identified dozens of novel susceptibility loci in Han Chinese. However, these markers explained only a small fraction of the estimated heritable component of psoriasis. To better understand the unknown yet likely polygenic architecture in psoriasis, we applied a linear mixed model to quantify the variation in the liability to psoriasis explained by common genetic markers (minor allele frequency > 0.01) in a Han Chinese population. We explored the polygenic genetic architecture of psoriasis using genome-wide association data from 2,271 Han Chinese individuals. We estimated that 34.9% (s.e. = 6.0%, P = 9 × 10⁻⁹) of the variation in the liability to psoriasis is captured by common genotyped and imputed variants. We discuss these results in the context of the strong association between HLA variants and psoriasis. We also show that the variance explained by each chromosome is linearly correlated to its length (R² = 0.27, P = 0.01), and quantify the impact of a polygenic effect on the prediction and diagnosis of psoriasis. Our results suggest that psoriasis has a substantial polygenic component, which not only has implications for the development of genetic diagnostics and prognostics for psoriasis, but also suggests that more individual variants contributing to psoriasis may be detected if sample sizes in future association studies are increased.
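
    For context, the linear mixed model behind estimates of this kind (a standard GREML-type formulation, stated generically rather than taken from the paper) treats the phenotype y as the sum of fixed effects, an aggregate genetic effect of the genotyped SNPs, and residual noise:

    ```latex
    \mathbf{y} = X\boldsymbol{\beta} + \mathbf{g} + \boldsymbol{\varepsilon},
    \qquad
    \mathbf{g} \sim \mathcal{N}(\mathbf{0}, A\,\sigma_g^{2}),
    \quad
    \boldsymbol{\varepsilon} \sim \mathcal{N}(\mathbf{0}, I\,\sigma_e^{2}),
    \qquad
    h^{2}_{\mathrm{SNP}} = \frac{\sigma_g^{2}}{\sigma_g^{2} + \sigma_e^{2}},
    ```

    where A is the genetic relationship matrix estimated from the common genotyped and imputed variants; the 34.9% reported above is an estimate of this proportion expressed on the liability scale.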

  1. Most genetic risk for autism resides with common variation.

    PubMed

    Gaugler, Trent; Klei, Lambertus; Sanders, Stephan J; Bodea, Corneliu A; Goldberg, Arthur P; Lee, Ann B; Mahajan, Milind; Manaa, Dina; Pawitan, Yudi; Reichert, Jennifer; Ripke, Stephan; Sandin, Sven; Sklar, Pamela; Svantesson, Oscar; Reichenberg, Abraham; Hultman, Christina M; Devlin, Bernie; Roeder, Kathryn; Buxbaum, Joseph D

    2014-08-01

    A key component of genetic architecture is the allelic spectrum influencing trait variability. For autism spectrum disorder (herein termed autism), the nature of the allelic spectrum is uncertain. Individual risk-associated genes have been identified from rare variation, especially de novo mutations. From this evidence, one might conclude that rare variation dominates the allelic spectrum in autism, yet recent studies show that common variation, individually of small effect, has substantial impact en masse. At issue is how much of an impact relative to rare variation this common variation has. Using a unique epidemiological sample from Sweden, new methods that distinguish total narrow-sense heritability from that due to common variation and synthesis of results from other studies, we reach several conclusions about autism's genetic architecture: its narrow-sense heritability is ∼52.4%, with most due to common variation, and rare de novo mutations contribute substantially to individual liability, yet their contribution to variance in liability, 2.6%, is modest compared to that for heritable variation.

  2. Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis

    NASA Technical Reports Server (NTRS)

    Harper, R. E.; Alger, L. S.; Babikyan, C. A.; Butler, B. P.; Friend, S. A.; Ganska, R. J.; Lala, J. H.; Masotto, T. K.; Meyer, A. J.; Morton, D. P.

    1992-01-01

    Described here is the Army Fault Tolerant Architecture (AFTA) hardware architecture and components and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to be used in reducing the probability of AFTA failure due to common mode faults is described. Analytical models for AFTA performance, reliability, availability, life cycle cost, weight, power, and volume are developed. An approach is presented for using VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.

  3. Network-driven design principles for neuromorphic systems.

    PubMed

    Partzsch, Johannes; Schüffny, Rene

    2015-01-01

    Synaptic connectivity is typically the most resource-demanding part of neuromorphic systems. Commonly, the architecture of these systems is chosen mainly on the basis of technical considerations. As a consequence, the potential for optimization arising from the inherent constraints of connectivity models is left unused. In this article, we develop an alternative, network-driven approach to neuromorphic architecture design. We describe methods to analyse the performance of existing neuromorphic architectures in emulating certain connectivity models. Furthermore, we show step-by-step how to derive a neuromorphic architecture from a given connectivity model. For this, we introduce a generalized description for architectures with a synapse matrix, which takes into account shared use of circuit components for reducing total silicon area. Architectures designed with this approach are fitted to a connectivity model, essentially adapting to its connection density. They guarantee faithful reproduction of the model on chip while requiring less total silicon area. In total, our methods allow designers to implement more area-efficient neuromorphic systems and verify usability of the connectivity resources in these systems.

  4. Network-driven design principles for neuromorphic systems

    PubMed Central

    Partzsch, Johannes; Schüffny, Rene

    2015-01-01

    Synaptic connectivity is typically the most resource-demanding part of neuromorphic systems. Commonly, the architecture of these systems is chosen mainly on the basis of technical considerations. As a consequence, the potential for optimization arising from the inherent constraints of connectivity models is left unused. In this article, we develop an alternative, network-driven approach to neuromorphic architecture design. We describe methods to analyse the performance of existing neuromorphic architectures in emulating certain connectivity models. Furthermore, we show step-by-step how to derive a neuromorphic architecture from a given connectivity model. For this, we introduce a generalized description for architectures with a synapse matrix, which takes into account shared use of circuit components for reducing total silicon area. Architectures designed with this approach are fitted to a connectivity model, essentially adapting to its connection density. They guarantee faithful reproduction of the model on chip while requiring less total silicon area. In total, our methods allow designers to implement more area-efficient neuromorphic systems and to verify the usability of the connectivity resources in these systems. PMID:26539079

  5. Sensor Open System Architecture (SOSA) evolution for collaborative standards development

    NASA Astrophysics Data System (ADS)

    Collier, Charles Patrick; Lipkin, Ilya; Davidson, Steven A.; Baldwin, Rusty; Orlovsky, Michael C.; Ibrahim, Tim

    2017-04-01

    The Sensor Open System Architecture (SOSA) is a C4ISR-focused technical and economic collaborative effort between the Air Force, Navy, Army, the Department of Defense (DoD), Industry, and other Governmental agencies to develop (and incorporate) a technical Open Systems Architecture standard in order to maximize C4ISR sub-system, system, and platform affordability, re-configurability, and hardware/software/firmware re-use. The SOSA effort will effectively create an operational and technical framework for the integration of disparate payloads into C4ISR systems, with a focus on the development of a modular decomposition (defining functions and behaviors) and associated key interfaces (physical and logical) for a common multi-purpose architecture for radar, EO/IR, SIGINT, EW, and Communications. SOSA addresses hardware, software, and mechanical/electrical interfaces. The modular decomposition will produce a set of re-usable components, interfaces, and sub-systems that engender reusable capabilities. This, in effect, creates a realistic and affordable ecosystem enabling mission effectiveness through systematic re-use of all available re-composed hardware, software, and electrical/mechanical base components and interfaces. To this end, SOSA will leverage existing standards as much as possible and evolve the SOSA architecture through modification, reuse, and enhancements to achieve C4ISR goals. This paper will present accomplishments over the first year of the SOSA initiative.

  6. PDS4 - Some Principles for Agile Data Curation

    NASA Astrophysics Data System (ADS)

    Hughes, J. S.; Crichton, D. J.; Hardman, S. H.; Joyner, R.; Algermissen, S.; Padams, J.

    2015-12-01

    PDS4, a research data management and curation system for NASA's Planetary Science Archive, was developed using principles that promote the characteristics of agile development. The result is an efficient system that produces better research data products while using fewer resources (time, effort, and money) and maximizes their usefulness for current and future scientists. The key principle is architectural. The PDS4 information architecture is developed and maintained independently of the infrastructure's process, application, and technology architectures. The information architecture is based on an ontology-based information model developed to leverage best practices from standard reference models for digital archives, digital object registries, and metadata registries and to capture domain knowledge from a panel of planetary science domain experts. The information model provides a sharable, stable, and formal set of information requirements for the system and is the primary source for information to configure most system components, including the product registry, search engine, validation and display tools, and production pipelines. Multi-level governance also allows effective management of the informational elements at the common, discipline, and project levels. This presentation will describe the development principles, components, and uses of the information model and how an information model-driven architecture exhibits characteristics of agile curation, including early delivery, evolutionary development, adaptive planning, continuous improvement, and rapid and flexible response to change.

  7. Systems and technologies for high-speed inter-office/datacenter interface

    NASA Astrophysics Data System (ADS)

    Sone, Y.; Nishizawa, H.; Yamamoto, S.; Fukutoku, M.; Yoshimatsu, T.

    2017-01-01

    Emerging requirements for inter-office/inter-datacenter short reach links for data center interconnects (DCI) and metro transport networks have led to various inter-office and inter-datacenter optical interface technologies. These technologies are bringing significant changes to systems and network architectures. In this paper, we present a system and ZR optical interface technologies for DCI and metro transport networks, then introduce the latest challenges facing the system framework. There are two trends in reach extension; one is to use Ethernet and the other is to use digital coherent technologies. The first approach achieves reach extension while using as many existing Ethernet components as possible. It offers low costs as it reuses the cost-effective components created for the large Ethernet market. The second approach adopts low-cost, low-power coherent DSPs that implement a minimal set of long-haul transmission functions. This paper introduces an architecture that integrates both trends. The architecture satisfies both datacom and telecom needs with a common control and management interface and automated configuration.

  8. NASA Integrated Network Monitor and Control Software Architecture

    NASA Technical Reports Server (NTRS)

    Shames, Peter; Anderson, Michael; Kowal, Steve; Levesque, Michael; Sindiy, Oleg; Donahue, Kenneth; Barnes, Patrick

    2012-01-01

    The National Aeronautics and Space Administration (NASA) Space Communications and Navigation office (SCaN) has commissioned a series of trade studies to define a new architecture intended to integrate the three existing networks that it operates, the Deep Space Network (DSN), Space Network (SN), and Near Earth Network (NEN), into one integrated network that offers users a set of common, standardized services and interfaces. The integrated monitor and control architecture utilizes common software and common operator interfaces that can be deployed at all three network elements. This software uses state-of-the-art concepts such as a pool of re-programmable equipment that acts like a configurable software radio, distributed hierarchical control, and centralized management of the whole SCaN integrated network. For this trade space study, a model-based approach using SysML was adopted to describe and analyze several possible options for the integrated network monitor and control architecture. This model was used to refine the design and to drive the costing of the four different software options. This trade study modeled the three existing self-standing network elements at the point of departure, and then described how to integrate them using variations of new and existing monitor and control system components for the different proposed deployments under consideration. This paper will describe the trade space explored, the selected system architecture, the modeling and trade study methods, and some observations on useful approaches to implementing such model-based trade space representation and analysis.

  9. Lifecycle Prognostics Architecture for Selected High-Cost Active Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    N. Lybeck; B. Pham; M. Tawfik

    There is an extensive body of knowledge, and some commercial products are available, for calculating prognostics, remaining useful life, and damage index parameters. The application of these technologies within the nuclear power community is still in its infancy. Online monitoring and condition-based maintenance are seeing increasing acceptance and deployment, and these activities provide the technological bases for expanding to add predictive/prognostics capabilities. In looking to deploy prognostics, there are three key aspects of systems that are presented and discussed: (1) component/system/structure selection, (2) prognostic algorithms, and (3) prognostics architectures. Criteria are presented for component selection: feasibility, failure probability, consequences of failure, and benefits of the prognostics and health management (PHM) system. The basis and methods commonly used for prognostics algorithms are reviewed and summarized. Criteria for evaluating PHM architectures are presented: open, modular architecture; platform independence; graphical user interface for system development and/or results viewing; web enabled tools; scalability; and standards compatibility. Thirteen software products were identified and discussed in the context of being potentially useful for deployment in a PHM program applied to systems in a nuclear power plant (NPP). These products were evaluated by using information available from company websites, product brochures, fact sheets, scholarly publications, and direct communication with vendors. The thirteen products were classified into four groups of software: (1) research tools, (2) PHM system development tools, (3) deployable architectures, and (4) peripheral tools. Eight software tools fell into the deployable architectures category. Of those eight, only two employ all six modules of a full PHM system. Five systems did not offer prognostic estimates, and one system employed the full health monitoring suite but lacked operations and maintenance support. Each product is briefly described in Appendix A. Selection of the most appropriate software package for a particular application will depend on the chosen component, system, or structure. Ongoing research will determine the most appropriate choices for a successful demonstration of PHM systems in aging NPPs.

  10. NASA Enterprise Architecture and Its Use in Transition of Research Results to Operations

    NASA Astrophysics Data System (ADS)

    Frisbie, T. E.; Hall, C. M.

    2006-12-01

    Enterprise architecture describes the design of the components of an enterprise, their relationships and how they support the objectives of that enterprise. NASA Stennis Space Center leads several projects involving enterprise architecture tools used to gather information on research assets within NASA's Earth Science Division. In the near future, enterprise architecture tools will link and display the relevant requirements, parameters, observatories, models, decision systems, and benefit/impact information relationships and map to the Federal Enterprise Architecture Reference Models. Components configured within the enterprise architecture serving the NASA Applied Sciences Program include the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool. The Earth Science Components Knowledge Base systematically catalogues NASA missions, sensors, models, data products, model products, and network partners appropriate for consideration in NASA Earth Science applications projects. The Systems Components database is a centralized information warehouse of NASA's Earth Science research assets and a critical first link in the implementation of enterprise architecture. The Earth Science Architecture Tool is used to analyze potential NASA candidate systems that may be beneficial to decision-making capabilities of other Federal agencies. Use of the current configuration of NASA enterprise architecture (the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool) has far exceeded its original intent and has tremendous potential for the transition of research results to operational entities.

  11. Most genetic risk for autism resides with common variation

    PubMed Central

    Gaugler, Trent; Klei, Lambertus; Sanders, Stephan J.; Bodea, Corneliu A.; Goldberg, Arthur P.; Lee, Ann B.; Mahajan, Milind; Manaa, Dina; Pawitan, Yudi; Reichert, Jennifer; Ripke, Stephan; Sandin, Sven; Sklar, Pamela; Svantesson, Oscar; Reichenberg, Abraham; Hultman, Christina M.; Devlin, Bernie

    2014-01-01

    A key component of genetic architecture is the allelic spectrum influencing trait variability. For autism spectrum disorder (henceforth autism) the nature of its allelic spectrum is uncertain. Individual risk genes have been identified from rare variation, especially de novo mutations [1–8]. From this evidence one might conclude that rare variation dominates its allelic spectrum, yet recent studies show that common variation, individually of small effect, has substantial impact en masse [9,10]. At issue is how much of an impact relative to rare variation. Using a unique epidemiological sample from Sweden, novel methods that distinguish total narrow-sense heritability from that due to common variation, and by synthesizing results from other studies, we reach several conclusions about autism’s genetic architecture: its narrow-sense heritability is ≈54% and most traces to common variation; rare de novo mutations contribute substantially to individuals’ liability; still their contribution to variance in liability, 2.6%, is modest compared to heritable variation. PMID:25038753

  12. AliEn—ALICE environment on the GRID

    NASA Astrophysics Data System (ADS)

    Saiz, P.; Aphecetche, L.; Bunčić, P.; Piskač, R.; Revsbech, J.-E.; Šego, V.; Alice Collaboration

    2003-04-01

    AliEn ( http://alien.cern.ch) (ALICE Environment) is a Grid framework built on top of the latest Internet standards for information exchange and authentication (SOAP, PKI) and common Open Source components. AliEn provides a virtual file catalogue that allows transparent access to distributed datasets and a number of collaborating Web services which implement the authentication, job execution, file transport, performance monitor and event logging. In the paper we will present the architecture and components of the system.

  13. Intelligent Agent Architectures: Reactive Planning Testbed

    NASA Technical Reports Server (NTRS)

    Rosenschein, Stanley J.; Kahn, Philip

    1993-01-01

    An Integrated Agent Architecture (IAA) is a framework or paradigm for constructing intelligent agents. Intelligent agents are collections of sensors, computers, and effectors that interact with their environments in real time in goal-directed ways. Because of the complexity involved in designing intelligent agents, it has been found useful to approach the construction of agents with some organizing principle, theory, or paradigm that gives shape to the agent's components and structures their relationships. Given the wide variety of approaches being taken in the field, the question naturally arises: Is there a way to compare and evaluate these approaches? The purpose of the present work is to develop common benchmark tasks and evaluation metrics to which intelligent agents, including complex robotic agents, constructed using various architectural approaches can be subjected.

  14. Reconfigurable vision system for real-time applications

    NASA Astrophysics Data System (ADS)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

    Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for system-on-chip designs, and makes it easy to import technology into a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted for computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to build a system for such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed and general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.
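    As a rough illustration of the common generic interface idea described in the record above, the following Python sketch shows how window-based operations could share a single interface so that modules can be chained and exchange image data uniformly; the class and method names are invented for illustration and are not taken from the paper.

    ```python
    from abc import ABC, abstractmethod
    import numpy as np

    class WindowOperator(ABC):
        """Hypothetical common interface for window-based vision modules."""
        def __init__(self, window: int):
            self.window = window  # side length of the square neighbourhood

        @abstractmethod
        def apply_window(self, patch: np.ndarray) -> float:
            """Map one window-sized patch to a single output pixel."""

        def run(self, image: np.ndarray) -> np.ndarray:
            """Slide the window over the image (naive reference implementation)."""
            r = self.window // 2
            padded = np.pad(image, r, mode="edge")
            out = np.empty(image.shape, dtype=float)
            for y in range(image.shape[0]):
                for x in range(image.shape[1]):
                    out[y, x] = self.apply_window(padded[y:y + self.window, x:x + self.window])
            return out

    class MedianFilter(WindowOperator):
        def apply_window(self, patch):
            return float(np.median(patch))

    class Convolution(WindowOperator):
        def __init__(self, kernel: np.ndarray):
            super().__init__(kernel.shape[0])
            self.kernel = kernel

        def apply_window(self, patch):
            return float(np.sum(patch * self.kernel))

    # Modules sharing the interface can be composed into a simple pipeline.
    pipeline = [MedianFilter(3), Convolution(np.ones((3, 3)) / 9.0)]
    image = np.random.rand(64, 64)
    for stage in pipeline:
        image = stage.run(image)
    ```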

  15. Gravity response mechanisms of lateral organs and the control of plant architecture in Arabidopsis

    NASA Astrophysics Data System (ADS)

    Mullen, J.; Hangarter, R.

    Most research on gravity responses in plants has focused on primary roots and shoots, which typically grow in a vertical orientation. However, the patterns of lateral organ formation and their growth orientation, which typically are not vertical, govern plant architecture. For example, in Arabidopsis, when lateral roots emerge from the primary root, they grow at a nearly horizontal orientation. As they elongate, the roots slowly curve until they eventually reach a vertical orientation. The regulation of this lateral root orientation is an important component affecting the overall root system architecture. We have found that this change in orientation is not simply due to the onset of gravitropic competence, as non-vertical lateral roots are capable of both positive and negative gravitropism. Thus, the horizontal growth of the new lateral roots is determined by what is called the gravitropic set-point angle (GSA). In Arabidopsis shoots, rosette leaves and inflorescence branches also display GSA-dependent developmental changes in their orientation. The developmental control of the GSA of lateral organs in Arabidopsis provides us with a useful system for investigating the components involved in regulating directionality of tropistic responses. We have identified several Arabidopsis mutants that have either altered lateral root orientations, altered orientation of lateral organs in the shoot, or both, but maintain normal primary organ orientation. The mgsa (modified gravitropic set-point angle) mutants with both altered lateral root and shoot orientation show that there are common components in the regulation of growth orientation in the different organs. Rosette leaves and lateral roots also have in common a regulation of positioning by red light. Further molecular and physiological analyses of the GSA mutants will provide insight into the basis of GSA regulation and, thus, a better understanding of how gravity controls plant architecture. [This work was supported by the National Aeronautics and Space Administration through grant no. NCC 2-1200.]

  16. Evaluation of the impact of deep learning architectural components selection and dataset size on a medical imaging task

    NASA Astrophysics Data System (ADS)

    Dutta, Sandeep; Gros, Eric

    2018-03-01

    Deep Learning (DL) has been successfully applied in numerous fields, fueled by increasing computational power and access to data. However, for medical imaging tasks, limited training set size is a common challenge when applying DL. This paper explores the applicability of DL to the task of classifying a single axial slice from a CT exam into one of six anatomy regions. A total of 29000 images selected from 223 CT exams were manually labeled for ground truth. An additional 54 exams were labeled and used as an independent test set. The network architecture developed for this application is composed of 6 convolutional layers and 2 fully connected layers with ReLU non-linear activations between each layer. Max-pooling was used after every second convolutional layer, and a softmax layer was used at the end. Given this base architecture, the effect of inclusion of network architecture components such as Dropout and Batch Normalization on network performance and training is explored. The network performance as a function of training and validation set size is characterized by training each network architecture variation using 5, 10, 20, 40, 50, and 100% of the available training data. The performance comparison of the various network architectures was done for anatomy classification as well as two computer vision datasets. The anatomy classifier accuracy varied from 74.1% to 92.3% in this study depending on the training size and network layout used. Dropout layers improved the model accuracy for all training sizes.
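    For concreteness, the sketch below shows, in PyTorch, a network of the general shape described above (six convolutional layers, two fully connected layers, ReLU activations, max-pooling after every second convolutional layer, softmax output, optional Dropout and BatchNorm). Filter counts, kernel sizes, and the 128x128 input resolution are assumptions, since the abstract does not specify them.

    ```python
    import torch
    import torch.nn as nn

    class SliceAnatomyNet(nn.Module):
        """Illustrative 6-conv / 2-FC classifier; layer sizes are assumed, not from the paper."""
        def __init__(self, n_classes=6, use_batchnorm=True, dropout_p=0.5):
            super().__init__()
            layers, in_ch = [], 1  # single-channel CT slice assumed
            for i, out_ch in enumerate([16, 16, 32, 32, 64, 64]):
                layers.append(nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1))
                if use_batchnorm:
                    layers.append(nn.BatchNorm2d(out_ch))
                layers.append(nn.ReLU(inplace=True))
                if i % 2 == 1:                        # max-pool after every second conv layer
                    layers.append(nn.MaxPool2d(2))
                in_ch = out_ch
            self.features = nn.Sequential(*layers)
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Dropout(dropout_p),
                nn.Linear(64 * 16 * 16, 256),         # assumes 128x128 input -> 16x16 after 3 pools
                nn.ReLU(inplace=True),
                nn.Linear(256, n_classes),            # softmax applied inside CrossEntropyLoss
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    model = SliceAnatomyNet()
    logits = model(torch.randn(2, 1, 128, 128))       # batch of two 128x128 slices -> (2, 6) logits
    ```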

  17. System Architectural Considerations on Reliable Guidance, Navigation, and Control (GN and C) for Constellation Program (CxP) Spacecraft

    NASA Technical Reports Server (NTRS)

    Dennehy, Cornelius J.

    2010-01-01

    This final report summarizes the results of a comparative assessment of the fault tolerance and reliability of different Guidance, Navigation and Control (GN&C) architectural approaches. This study was proactively performed by a combined Massachusetts Institute of Technology (MIT) and Draper Laboratory team as a GN&C "Discipline-Advancing" activity sponsored by the NASA Engineering and Safety Center (NESC). This systematic comparative assessment of GN&C system architectural approaches was undertaken as a fundamental step towards understanding the opportunities for, and limitations of, architecting highly reliable and fault tolerant GN&C systems composed of common avionic components. The primary goal of this study was to obtain architectural 'rules of thumb' that could positively influence future designs in the direction of an optimized (i.e., most reliable and cost-efficient) GN&C system. A secondary goal was to demonstrate the application and the utility of a systematic modeling approach that maps the entire possible architecture solution space.

  18. The NASA Integrated Information Technology Architecture

    NASA Technical Reports Server (NTRS)

    Baldridge, Tim

    1997-01-01

    This document defines an Information Technology Architecture for the National Aeronautics and Space Administration (NASA), where Information Technology (IT) refers to the hardware, software, standards, protocols and processes that enable the creation, manipulation, storage, organization and sharing of information. An architecture provides an itemization and definition of these IT structures, a view of the relationship of the structures to each other and, most importantly, an accessible view of the whole. It is a fundamental assumption of this document that a useful, interoperable and affordable IT environment is key to the execution of the core NASA scientific and project competencies and business practices. This Architecture represents the highest level system design and guideline for NASA IT related activities and has been created on the authority of the NASA Chief Information Officer (CIO) and will be maintained under the auspices of that office. It addresses all aspects of general purpose, research, administrative and scientific computing and networking throughout the NASA Agency and is applicable to all NASA administrative offices, projects, field centers and remote sites. Through the establishment of five Objectives and six Principles, this Architecture provides a blueprint for all NASA IT service providers: civil service, contractor and outsourcer. The most significant of the Objectives and Principles are the commitment to customer-driven IT implementations and the commitment to a simpler, cost-efficient, standards-based, modular IT infrastructure. In order to ensure that the Architecture is presented and defined in the context of the mission, project and business goals of NASA, this Architecture consists of four layers in which each subsequent layer builds on the previous layer. They are: 1) the Business Architecture: the operational functions of the business, or Enterprise, 2) the Systems Architecture: the specific Enterprise activities within the context of IT systems, 3) the Technical Architecture: a common, vendor-independent framework for design, integration and implementation of IT systems and 4) the Product Architecture: vendor-specific IT solutions. The Systems Architecture is effectively a description of the end-user "requirements". Generalized end-user requirements are discussed and subsequently organized into specific mission and project functions. The Technical Architecture depicts the framework, and relationship, of the specific IT components that enable the end-user functionality as described in the Systems Architecture. The primary components as described in the Technical Architecture are: 1) Applications: Basic Client Component, Object Creation Applications, Collaborative Applications, Object Analysis Applications, 2) Services: Messaging, Information Broker, Collaboration, Distributed Processing, and 3) Infrastructure: Network, Security, Directory, Certificate Management, Enterprise Management and File System. This Architecture also provides specific Implementation Recommendations, the most significant of which is the recognition of IT as core to NASA activities and defines a plan, which is aligned with the NASA strategic planning processes, for keeping the Architecture alive and useful.

  19. Space station integrated propulsion and fluid systems study. Space station program fluid management systems databook

    NASA Technical Reports Server (NTRS)

    Bicknell, B.; Wilson, S.; Dennis, M.; Lydon, M.

    1988-01-01

    Commonality and integration of propulsion and fluid systems associated with the Space Station elements are being evaluated. The Space Station elements consist of the core station, which includes habitation and laboratory modules, nodes, airlocks, and trusswork; and associated vehicles, platforms, experiments, and payloads. The program is being performed as two discrete tasks. Task 1 investigated the components of the Space Station architecture to determine the feasibility and practicality of commonality and integration among the various propulsion elements. This task was completed. Task 2 is examining integration and commonality among fluid systems which were identified by the Phase B Space Station contractors as being part of the initial operating capability (IOC) and growth Space Station architectures. Requirements and descriptions for reference fluid systems were compiled from Space Station documentation and other sources. The fluid systems being examined are: an experiment gas supply system, an oxygen/hydrogen supply system, an integrated water system, the integrated nitrogen system, and the integrated waste fluids system. Definitions and descriptions of alternate systems were developed, along with analyses and discussions of their benefits and detriments. This databook includes fluid systems descriptions, requirements, schematic diagrams, component lists, and discussions of the fluid systems. In addition, cost comparisons are used in some cases to determine the optimum system for a specific task.

  20. NetVLAD: CNN Architecture for Weakly Supervised Place Recognition.

    PubMed

    Arandjelovic, Relja; Gronat, Petr; Torii, Akihiko; Pajdla, Tomas; Sivic, Josef

    2018-06-01

    We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the "Vector of Locally Aggregated Descriptors" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks.
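    A compact PyTorch sketch of the generalized VLAD pooling idea described above, i.e., soft-assignment of local CNN descriptors to learned cluster centres followed by residual aggregation and normalisation; layer dimensions and initialisation are assumptions, and this is a sketch rather than the authors' reference implementation.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NetVLAD(nn.Module):
        """Illustrative NetVLAD-style pooling layer (a sketch, not the official code)."""
        def __init__(self, num_clusters=64, dim=512):
            super().__init__()
            self.num_clusters = num_clusters
            self.dim = dim
            self.conv = nn.Conv2d(dim, num_clusters, kernel_size=1)   # soft-assignment logits
            self.centroids = nn.Parameter(torch.randn(num_clusters, dim))

        def forward(self, x):                      # x: (N, D, H, W) feature map from a base CNN
            n, d, h, w = x.shape
            soft_assign = F.softmax(self.conv(x).view(n, self.num_clusters, -1), dim=1)  # (N, K, HW)
            x_flat = x.view(n, d, -1)                                                    # (N, D, HW)
            # residual of every descriptor to every centroid, weighted by its soft assignment
            residual = x_flat.unsqueeze(1) - self.centroids.view(1, self.num_clusters, d, 1)
            vlad = (soft_assign.unsqueeze(2) * residual).sum(dim=-1)                     # (N, K, D)
            vlad = F.normalize(vlad, p=2, dim=2)                                         # intra-normalisation
            return F.normalize(vlad.flatten(1), p=2, dim=1)                              # (N, K*D) descriptor

    features = torch.randn(2, 512, 7, 10)          # e.g. a small conv5-like output, assumed sizes
    descriptor = NetVLAD()(features)               # shape (2, 64*512)
    ```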

  1. Framework for a clinical information system.

    PubMed

    Van de Velde, R

    2000-01-01

    The current status of our work towards the design and implementation of a reference architecture for a Clinical Information System is presented. This architecture has been developed and implemented based on components following a strong underlying conceptual and technological model. Common Object Request Broker and n-tier technology featuring centralised and departmental clinical information systems as the back-end store for all clinical data are used. Servers located in the 'middle' tier apply the clinical (business) model and application rules to communicate with so-called 'thin client' workstations. The main characteristics are the focus on modelling and reuse of both data and business logic as there is a shift away from data and functional modelling towards object modelling. Scalability as well as adaptability to constantly changing requirements via component driven computing are the main reasons for that approach.

  2. Prokaryotic regulatory systems biology: Common principles governing the functional architectures of Bacillus subtilis and Escherichia coli unveiled by the natural decomposition approach.

    PubMed

    Freyre-González, Julio A; Treviño-Quintanilla, Luis G; Valtierra-Gutiérrez, Ilse A; Gutiérrez-Ríos, Rosa María; Alonso-Pavón, José A

    2012-10-31

    Escherichia coli and Bacillus subtilis are two of the best-studied prokaryotic model organisms. Previous analyses of their transcriptional regulatory networks have shown that they exhibit high plasticity during evolution and suggested that both converge to scale-free-like structures. Nevertheless, beyond this suggestion, no analyses have been carried out to identify the common systems-level components and principles governing these organisms. Here we show that these two phylogenetically distant organisms follow a set of common novel biologically consistent systems principles revealed by the mathematically and biologically founded natural decomposition approach. The discovered common functional architecture is a diamond-shaped, matryoshka-like, three-layer (coordination, processing, and integration) hierarchy exhibiting feedback, which is shaped by four systems-level components: global transcription factors (global TFs), locally autonomous modules, basal machinery and intermodular genes. The first mathematical criterion to identify global TFs, the κ-value, was reassessed on B. subtilis and confirmed its high predictive power by identifying all the previously reported, plus three potential, master regulators and eight sigma factors. The functionally conserved cores of modules, basal cell machinery, and a set of non-orthologous common physiological global responses were identified via both orthologous genes and non-orthologous conserved functions. This study reveals novel common systems principles maintained between two phylogenetically distant organisms and provides a comparison of their lifestyle adaptations. Our results shed new light on the systems-level principles and the fundamental functions required by bacteria to sustain life. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Engineering Promoter Architecture in Oleaginous Yeast Yarrowia lipolytica.

    PubMed

    Shabbir Hussain, Murtaza; Gambill, Lauren; Smith, Spencer; Blenner, Mark A

    2016-03-18

    Eukaryotic promoters have a complex architecture to control both the strength and timing of gene transcription spanning up to thousands of bases from the initiation site. This complexity makes rational fine-tuning of promoters in fungi difficult to predict; however, this very same complexity enables multiple possible strategies for engineering promoter strength. Here, we studied promoter architecture in the oleaginous yeast, Yarrowia lipolytica. While recent studies have focused on upstream activating sequences, we systematically examined various components common in fungal promoters. Here, we examine several promoter components including upstream activating sequences, proximal promoter sequences, core promoters, and the TATA box in autonomously replicating expression plasmids and integrated into the genome. Our findings show that promoter strength can be fine-tuned through the engineering of the TATA box sequence, core promoter, and upstream activating sequences. Additionally, we identified a previously unreported oleic acid responsive transcription enhancement in the XPR2 upstream activating sequences, which illustrates the complexity of fungal promoters. The promoters engineered here provide new genetic tools for metabolic engineering in Y. lipolytica and provide promoter engineering strategies that may be useful in engineering other non-model fungal systems.

  4. Engineering intelligent tutoring systems

    NASA Technical Reports Server (NTRS)

    Warren, Kimberly C.; Goodman, Bradley A.

    1993-01-01

    We have defined an object-oriented software architecture for Intelligent Tutoring Systems (ITS's) to facilitate the rapid development, testing, and fielding of ITS's. This software architecture partitions the functionality of the ITS into a collection of software components with well-defined interfaces and execution concept. The architecture was designed to isolate advanced technology components, partition domain dependencies, take advantage of the increased availability of commercial software packages, and reduce the risks involved in acquiring ITS's. A key component of the architecture, the Executive, is a publish and subscribe message handling component that coordinates all communication between ITS components.
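    A minimal sketch of the publish-and-subscribe coordination role the abstract assigns to the Executive component; the class, topic, and message names below are invented for illustration.

    ```python
    from collections import defaultdict
    from typing import Any, Callable

    class Executive:
        """Hypothetical message-handling hub coordinating ITS components."""
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
            self._subscribers[topic].append(handler)

        def publish(self, topic: str, message: Any) -> None:
            for handler in self._subscribers[topic]:
                handler(message)

    # Components never call each other directly; they only talk to the Executive.
    executive = Executive()
    executive.subscribe("student.answer", lambda msg: print("tutor model sees:", msg))
    executive.subscribe("student.answer", lambda msg: print("logger records:", msg))
    executive.publish("student.answer", {"item": 7, "correct": False})
    ```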

  5. Control software and electronics architecture design in the framework of the E-ELT instrumentation

    NASA Astrophysics Data System (ADS)

    Di Marcantonio, P.; Coretti, I.; Cirami, R.; Comari, M.; Santin, P.; Pucillo, M.

    2010-07-01

    Over the last few years, the European Southern Observatory (ESO), in collaboration with other European astronomical institutes, has started several feasibility studies for the E-ELT (European-Extremely Large Telescope) instrumentation and post-focal adaptive optics. The goal is to create a flexible suite of instruments to deal with the wide variety of scientific questions astronomers would like to see solved in the coming decades. In this framework, the INAF-Astronomical Observatory of Trieste (INAF-AOTs) is currently responsible for carrying out the analysis and the preliminary study of the architecture of the electronics and control software of three instruments: CODEX (control software and electronics) and OPTIMOS-EVE/OPTIMOS-DIORAMAS (control software). To cope with the increased complexity and new requirements for stability, precision, real-time latency and communications among sub-systems imposed by these instruments, new solutions have been investigated by our group. In this paper we present the proposed software and electronics architecture based on a distributed common framework centered on the Component/Container model that uses OPC Unified Architecture as a standard layer to communicate with COTS components of three different vendors. We describe three working prototypes that have been set up in our laboratory and discuss their performance, integration complexity and ease of deployment.

  6. Kinetic insulation as an effective mechanism for achieving pathway specificity in intracellular signaling networks

    PubMed Central

    Behar, Marcelo; Dohlman, Henrik G.; Elston, Timothy C.

    2007-01-01

    Intracellular signaling pathways that share common components often elicit distinct physiological responses. In most cases, the biochemical mechanisms responsible for this signal specificity remain poorly understood. Protein scaffolds and cross-inhibition have been proposed as strategies to prevent unwanted cross-talk. Here, we report a mechanism for signal specificity termed “kinetic insulation.” In this approach signals are selectively transmitted through the appropriate pathway based on their temporal profile. In particular, we demonstrate how pathway architectures downstream of a common component can be designed to efficiently separate transient signals from signals that increase slowly over time. Furthermore, we demonstrate that upstream signaling proteins can generate the appropriate input to the common pathway component regardless of the temporal profile of the external stimulus. Our results suggest that multilevel signaling cascades may have evolved to modulate the temporal profile of pathway activity so that stimulus information can be efficiently encoded and transmitted while ensuring signal specificity. PMID:17913886

  7. Weighted Components of i-Government Enterprise Architecture

    NASA Astrophysics Data System (ADS)

    Budiardjo, E. K.; Firmansyah, G.; Hasibuan, Z. A.

    2017-01-01

    Lack of government performance is due, among other things, to poor coordination and communication among government agencies. Enterprise Architecture (EA) in government can be used as a strategic planning tool to improve productivity, efficiency, and effectiveness. However, the existing components of Government Enterprise Architecture (GEA) do not indicate their relative importance, which makes it difficult to implement good e-government for good governance. This study explores the weights of GEA components using Principal Component Analysis (PCA) in order to discover an inherent structure of e-government. The results show that the IT governance component of GEA plays a major role in the GEA. The remaining components (the e-government system, e-government regulation, e-government management, and key operational applications) contribute roughly equally. In addition, GEAs from other countries are analyzed comparatively on the basis of common enterprise architecture components. These weighted components are used to construct the i-Government enterprise architecture and to show the relative importance of each component in order to establish priorities in developing e-government.
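    To make the weighting idea concrete, the sketch below shows one way PCA loadings could be turned into relative component weights; the component names and the survey matrix are placeholder assumptions, not the study's data or its exact weighting scheme.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical respondent-by-component score matrix (rows: respondents, columns: GEA components).
    components = ["IT governance", "e-gov system", "e-gov regulation", "e-gov management", "key operational apps"]
    X = np.random.default_rng(0).normal(size=(40, len(components)))

    pca = PCA()
    pca.fit(X)

    # Weight each component by its loading on the first principal component,
    # scaled by the variance that component explains (one of several possible schemes).
    loadings = np.abs(pca.components_[0]) * pca.explained_variance_ratio_[0]
    weights = loadings / loadings.sum()
    for name, w in zip(components, weights):
        print(f"{name}: {w:.2f}")
    ```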

  8. ESPC Common Model Architecture

    DTIC Science & Technology

    2014-09-30

    ESPC Common Model Architecture Earth System Modeling...Operational Prediction Capability (NUOPC) was established between NOAA and Navy to develop common software architecture for easy and efficient...development under a common model architecture and other software-related standards in this project. OBJECTIVES NUOPC proposes to accelerate

  9. Real-Time Discovery Services over Large, Heterogeneous and Complex Healthcare Datasets Using Schema-Less, Column-Oriented Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Begoli, Edmon; Dunning, Ted; Charlie, Frasure

    We present a service platform for schema-less exploration of data and discovery of patient-related statistics from healthcare data sets. The architecture of this platform is motivated by the need for fast, schema-less, and flexible approaches to SQL-based exploration and discovery of information embedded in the common, heterogeneously structured healthcare data sets and supporting components (electronic health records, practice management systems, etc.). The motivating use cases described in the paper are clinical trial candidate discovery and treatment effectiveness analysis. Following the use cases, we discuss the key features and software architecture of the platform, the underlying core components (Apache Parquet, Drill, the web services server), and the runtime profiles and performance characteristics of the platform. We conclude by showing dramatic speedup with some approaches, and the performance tradeoffs and limitations of others.

  10. High-performance image processing architecture

    NASA Astrophysics Data System (ADS)

    Coffield, Patrick C.

    1992-04-01

    The proposed architecture is a logical design specifically for image processing and other related computations. The design is a hybrid electro-optical concept consisting of three tightly coupled components: a spatial configuration processor (the optical analog portion), a weighting processor (digital), and an accumulation processor (digital). The systolic flow of data and image processing operations are directed by a control buffer and pipelined to each of the three processing components. The image processing operations are defined by an image algebra developed by the University of Florida. The algebra is capable of describing all common image-to-image transformations. The merit of this architectural design is how elegantly it handles the natural decomposition of algebraic functions into spatially distributed, point-wise operations. The effect of this particular decomposition allows convolution type operations to be computed strictly as a function of the number of elements in the template (mask, filter, etc.) instead of the number of picture elements in the image. Thus, a substantial increase in throughput is realized. The logical architecture may take any number of physical forms. While a hybrid electro-optical implementation is of primary interest, the benefits and design issues of an all digital implementation are also discussed. The potential utility of this architectural design lies in its ability to control all the arithmetic and logic operations of the image algebra's generalized matrix product. This is the most powerful fundamental formulation in the algebra, thus allowing a wide range of applications.
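    The cost argument above, that convolution-type operations can be computed as a function of the number of template elements rather than the number of picture elements, can be illustrated with a small NumPy sketch that performs one whole-image shift-and-accumulate pass per template element; the decomposition is generic and is not taken from the paper.

    ```python
    import numpy as np

    def template_convolve(image: np.ndarray, template: np.ndarray) -> np.ndarray:
        """Accumulate one shifted, weighted copy of the image per template element.

        The outer loops run once per template element, so the number of whole-image
        accumulation passes depends on the template size, not on the pixel count
        handled inside each (parallelisable) pass. Boundaries wrap around (circular).
        """
        th, tw = template.shape
        cy, cx = th // 2, tw // 2
        out = np.zeros_like(image, dtype=float)
        for dy in range(th):
            for dx in range(tw):
                w = template[dy, dx]
                if w == 0.0:
                    continue
                shifted = np.roll(image, shift=(dy - cy, dx - cx), axis=(0, 1))
                out += w * shifted
        return out

    image = np.random.rand(256, 256)
    kernel = np.ones((3, 3)) / 9.0            # simple box filter as the template
    smoothed = template_convolve(image, kernel)
    ```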

  11. Space vehicle chassis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judd, Stephen; Dallmann, Nicholas; Seitz, Daniel

    A modular space vehicle chassis may facilitate convenient access to internal components of the space vehicle. Each module may be removable from the others such that each module may be worked on individually. Multiple panels of at least one of the modules may swing open or otherwise be removable, exposing large portions of the internal components of the space vehicle. Such chassis architectures may reduce the time required for and difficulty of performing maintenance or modifications, may allow multiple space vehicles to take advantage of a common chassis design, and may further allow for highly customizable space vehicles.

  12. From data to the decision: A software architecture to integrate predictive modelling in clinical settings.

    PubMed

    Martinez-Millana, A; Fernandez-Llatas, C; Sacchi, L; Segagni, D; Guillen, S; Bellazzi, R; Traver, V

    2015-08-01

    The application of statistics and mathematics over large amounts of data is providing healthcare systems with new tools for screening and managing multiple diseases. Nonetheless, these tools have many technical and clinical limitations as they are based on datasets with concrete characteristics. This proposition paper describes a novel architecture focused on providing a validation framework for discrimination and prediction models in the screening of Type 2 diabetes. For that, the architecture has been designed to gather different data sources under a common data structure and, furthermore, to be controlled by a centralized component (Orchestrator) in charge of directing the interaction flows among data sources, models and graphical user interfaces. This innovative approach aims to overcome the data-dependency of the models by providing a validation framework for the models as they are used within clinical settings.

  13. Numerical Propulsion System Simulation Architecture

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia G.

    2004-01-01

    The Numerical Propulsion System Simulation (NPSS) is a framework for performing analysis of complex systems. Because the NPSS was developed using the object-oriented paradigm, the resulting architecture is an extensible and flexible framework that is currently being used by a diverse set of participants in government, academia, and the aerospace industry. NPSS is being used by over 15 different institutions to support rockets, hypersonics, power and propulsion, fuel cells, ground based power, and aerospace. Full system-level simulations as well as subsystems may be modeled using NPSS. The NPSS architecture enables the coupling of analyses at various levels of detail, which is called numerical zooming. The middleware used to enable zooming and distributed simulations is the Common Object Request Broker Architecture (CORBA). The NPSS Developer's Kit offers tools for the developer to generate CORBA-based components and wrap codes. The Developer's Kit enables distributed multi-fidelity and multi-discipline simulations, preserves proprietary and legacy codes, and facilitates addition of customized codes. The platforms supported are PC, Linux, HP, Sun, and SGI.

  14. Frequency multiplexed flux locked loop architecture providing an array of DC SQUIDS having both shared and unshared components

    DOEpatents

    Ganther, Jr., Kenneth R.; Snapp, Lowell D.

    2002-01-01

    Architecture for frequency multiplexing multiple flux locked loops in a system comprising an array of DC SQUID sensors. The architecture involves dividing the traditional flux locked loop into multiple unshared components and a single shared component which, in operation, form a complete flux locked loop relative to each DC SQUID sensor. Each unshared flux locked loop component operates on a different flux modulation frequency. The architecture of the present invention allows a reduction from 2N to N+1 in the number of connections between the cryogenic DC SQUID sensors and their associated room temperature flux locked loops. Furthermore, the 1×N architecture of the present invention can be paralleled to form an M×N array architecture without increasing the required number of flux modulation frequencies.

  15. Reconfigurable Transceiver and Software-Defined Radio Architecture and Technology Evaluated for NASA Space Communications

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Kacpura, Thomas J.

    2004-01-01

    The NASA Glenn Research Center is investigating the development and suitability of a software-based open-architecture for space-based reconfigurable transceivers (RTs) and software-defined radios (SDRs). The main objectives of this project are to enable advanced operations and reduce mission costs. SDRs are becoming more common because of the capabilities of reconfigurable digital signal processing technologies such as field programmable gate arrays and digital signal processors, which place radio functions in firmware and software that were traditionally performed with analog hardware components. Features of interest of this communications architecture include nonproprietary open standards and application programming interfaces to enable software reuse and portability, independent hardware and software development, and hardware and software functional separation. The goals for RT and SDR technologies for NASA space missions include prelaunch and on-orbit frequency and waveform reconfigurability and programmability, high data rate capability, and overall communications and processing flexibility. These operational advances over current state-of-art transceivers will be provided to reduce the power, mass, and cost of RTs and SDRs for space communications. The open architecture for NASA communications will support existing (legacy) communications needs and capabilities while providing a path to more capable, advanced waveform development and mission concepts (e.g., ad hoc constellations with self-healing networks and high-rate science data return). A study was completed to assess the state of the art in RT architectures, implementations, and technologies. In-house researchers conducted literature searches and analysis, interviewed Government and industry contacts, and solicited information and white papers from industry on space-qualifiable RTs and SDRs and their associated technologies for space-based NASA applications. The white papers were evaluated, compiled, and used to assess RT and SDR system architectures and core technology elements to determine an appropriate investment strategy to advance these technologies to meet future mission needs. The use of these radios in the space environment represents a challenge because of the space radiation suitability of the components, which drastically reduces the processing capability. The radios available for space are considered to be RTs (as opposed to SDRs), which are digitally programmable radios with selectable changes from an architecture combining analog and digital components. The limited flexibility of this design contrasts against the desire to have a power-efficient solution and open architecture.

  16. Genome-Wide Association Analysis Reveals Different Genetic Control in Panicle Architecture Between Indica and Japonica Rice.

    PubMed

    Bai, Xufeng; Zhao, Hu; Huang, Yong; Xie, Weibo; Han, Zhongmin; Zhang, Bo; Guo, Zilong; Yang, Lin; Dong, Haijiao; Xue, Weiya; Li, Guangwei; Hu, Gang; Hu, Yong; Xing, Yongzhong

    2016-07-01

    Panicle architecture determines the number of spikelets per panicle (SPP) and is highly associated with grain yield in rice (Oryza sativa L.). Understanding the genetic basis of panicle architecture is important for improving the yield of rice grain. In this study, we dissected panicle architecture traits into eight components, which were phenotyped from a germplasm collection of 529 cultivars. Multiple regression analysis revealed that the number of secondary branches (NSB) was the major factor that contributed to SPP. Genome-wide association analysis was performed independently for the eight panicle architecture traits observed in the indica and japonica rice subpopulations compared with the whole rice population. In total, 30 loci were associated with these traits. Of these, 13 loci were closely linked to known panicle architecture genes, and 17 novel loci were repeatedly identified in different environments. An association signal cluster was identified for NSB and number of spikelets per secondary branch (NSSB) in the region of 31.6 to 31.7 Mb on chromosome 4. In addition to the common associations detected in both indica and japonica subpopulations, many associated loci were unique to one subpopulation. For example, and were specifically associated with panicle length (PL) in indica and japonica rice, respectively. Moreover, the -mediated flowering genes and were associated with the formation of panicle architecture in rice. These results suggest that different gene networks regulate panicle architecture in indica and japonica rice. Copyright © 2016 Crop Science Society of America.

  17. A reference architecture for the component factory

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Caldiera, Gianluigi; Cantone, Giovanni

    1992-01-01

    Software reuse can be achieved through an organization that focuses on utilization of life cycle products from previous developments. The component factory is both an example of the more general concepts of experience and domain factory and an organizational unit worth being considered independently. The critical features of such an organization are flexibility and continuous improvement. In order to achieve these features we can represent the architecture of the factory at different levels of abstraction and define a reference architecture from which specific architectures can be derived by instantiation. A reference architecture is an implementation and organization independent representation of the component factory and its environment. The paper outlines this reference architecture, discusses the instantiation process, and presents some examples of specific architectures by comparing them in the framework of the reference model.

  18. Hierarchical control and performance evaluation of multi-vehicle autonomous systems

    NASA Astrophysics Data System (ADS)

    Balakirsky, Stephen; Scrapper, Chris; Messina, Elena

    2005-05-01

    This paper will describe how the Mobility Open Architecture Tools and Simulation (MOAST) framework can facilitate performance evaluations of RCS-compliant multi-vehicle autonomous systems. This framework provides an environment that allows for simulated and real architectural components to function seamlessly together. By providing repeatable environmental conditions, this framework allows for the development of individual components as well as component performance metrics. MOAST is composed of high-fidelity and low-fidelity simulation systems, a detailed model of real-world terrain, actual hardware components, a central knowledge repository, and architectural glue to tie all of the components together. This paper will describe the framework's components in detail and provide an example that illustrates how the framework can be utilized to develop and evaluate a single architectural component through the use of repeatable trials and experimentation that includes both virtual and real components functioning together.

  19. 3-D Packaging: A Technology Review

    NASA Technical Reports Server (NTRS)

    Strickland, Mark; Johnson, R. Wayne; Gerke, David

    2005-01-01

    Traditional electronics are assembled as a planar arrangement of components on a printed circuit board (PCB) or other type of substrate. These planar assemblies may then be plugged into a motherboard or card cage, creating a volume of electronics. This architecture is common in many military and space electronic systems as well as large computer and telecommunications systems and industrial electronics. The individual PCB assemblies can be replaced if defective or for system upgrade. Some applications are constrained by the volume or the shape of the system and are not compatible with the motherboard or card cage architecture. Examples include missiles, camcorders, and digital cameras. In these systems, planar rigid-flex substrates are folded to create complex 3-D shapes. The flex circuit serves the role of motherboard, providing interconnection between the rigid boards. An example of a planar rigid-flex assembly prior to folding is shown. In both architectures, the interconnection is effectively 2-D.

  20. An Evaluation of the High Level Architecture (HLA) as a Framework for NASA Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reid, Michael R.; Powers, Edward I. (Technical Monitor)

    2000-01-01

    The High Level Architecture (HLA) is a current US Department of Defense and an industry (IEEE-1516) standard architecture for modeling and simulations. It provides a framework and set of functional rules and common interfaces for integrating separate and disparate simulators into a larger simulation. The goal of the HLA is to reduce software costs by facilitating the reuse of simulation components and by providing a runtime infrastructure to manage the simulations. In order to evaluate the applicability of the HLA as a technology for NASA space mission simulations, a Simulations Group at Goddard Space Flight Center (GSFC) conducted a study of the HLA and developed a simple prototype HLA-compliant space mission simulator. This paper summarizes the prototyping effort and discusses the potential usefulness of the HLA in the design and planning of future NASA space missions with a focus on risk mitigation and cost reduction.

  1. ITS system specification. Appendix D, physical architecture component interfaces

    DOT National Transportation Integrated Search

    1997-01-01

    The objective of the Polaris Project is to define an Intelligent Transportation Systems (ITS) architecture for the state of Minnesota. An architecture is a framework that defines how multiple ITS Components interrelate and contribute to the overall I...

  2. An Object-Oriented Network-Centric Software Architecture for Physical Computing

    NASA Astrophysics Data System (ADS)

    Palmer, Richard

    1997-08-01

    Recent developments in object-oriented computer languages and infrastructure such as the Internet, Web browsers, and the like provide an opportunity to define a more productive computational environment for scientific programming that is based more closely on the underlying mathematics describing physics than traditional programming languages such as FORTRAN or C++. In this talk I describe an object-oriented software architecture for representing physical problems that includes classes for such common mathematical objects as geometry, boundary conditions, partial differential and integral equations, discretization and numerical solution methods, etc. In practice, a scientific program written using this architecture looks remarkably like the mathematics used to understand the problem, is typically an order of magnitude smaller than traditional FORTRAN or C++ codes, and hence easier to understand, debug, describe, etc. All objects in this architecture are ``network-enabled,'' which means that components of a software solution to a physical problem can be transparently loaded from anywhere on the Internet or other global network. The architecture is expressed as an ``API,'' or application programmer's interface specification, with reference embeddings in Java, Python, and C++. A C++ class library for an early version of this API has been implemented for machines ranging from PCs to the IBM SP2, meaning that identical codes run on all architectures.
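
    The abstract gives no code, but the idea of classes that mirror the mathematics can be sketched in a few lines. The class and method names below (Geometry, DirichletBC, PoissonProblem, solve) are illustrative assumptions for this example, not the API described in the talk.

    ```python
    # Illustrative sketch only: hypothetical class names, not the paper's actual API.
    # Solves -u''(x) = f(x) on [0, 1] with Dirichlet boundary conditions, so the
    # program text mirrors the mathematical statement of the problem.
    import numpy as np

    class Geometry:
        """A 1-D interval discretized into n interior points."""
        def __init__(self, a, b, n):
            self.a, self.b, self.n = a, b, n
            self.h = (b - a) / (n + 1)
            self.x = np.linspace(a, b, n + 2)[1:-1]   # interior nodes

    class DirichletBC:
        """Fixed values of the unknown at the two ends of the interval."""
        def __init__(self, left, right):
            self.left, self.right = left, right

    class PoissonProblem:
        """-u'' = f on the geometry, subject to the boundary condition."""
        def __init__(self, geometry, bc, f):
            self.geometry, self.bc, self.f = geometry, bc, f

        def solve(self):
            g, bc = self.geometry, self.bc
            # Standard second-order finite-difference discretization.
            A = (np.diag(2.0 * np.ones(g.n))
                 - np.diag(np.ones(g.n - 1), 1)
                 - np.diag(np.ones(g.n - 1), -1)) / g.h**2
            rhs = self.f(g.x)
            rhs[0] += bc.left / g.h**2
            rhs[-1] += bc.right / g.h**2
            return np.linalg.solve(A, rhs)

    if __name__ == "__main__":
        problem = PoissonProblem(Geometry(0.0, 1.0, 99),
                                 DirichletBC(0.0, 0.0),
                                 f=lambda x: np.pi**2 * np.sin(np.pi * x))
        u = problem.solve()   # approximates sin(pi * x) on the interior nodes
        print(u.max())        # close to 1.0
    ```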

  3. CORDETS (Component Oriented Development Techniques) and DOMENG (Domain Engineering)

    NASA Astrophysics Data System (ADS)

    Rodríguez-Dapena, P.

    2008-08-01

    This document presents the results of Workshop 2 held on the 28th of May 2008 in Palma de Mallorca as part of the DASIA2008 conference. The workshop was used to set up and animate the stakeholders' network intended to bring together the actors in the field of future generic space on-board software architectures, in order to reach a common vision, technical understanding, and shared industrial interests.

  4. Sensitivity analysis by approximation formulas - Illustrative examples. [reliability analysis of six-component architectures

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1983-01-01

    This paper examines the reliability of three architectures for six components. For each architecture, the probabilities of the failure states are given by algebraic formulas involving the component fault rate, the system recovery rate, and the operating time. The dominant failure modes are identified, and the change in reliability is considered with respect to changes in fault rate, recovery rate, and operating time. The major conclusions concern the influence of system architecture on failure modes and parameter requirements. Without this knowledge, a system designer may pick an inappropriate structure.
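
    The paper's algebraic formulas are not reproduced in the abstract. As a hedged illustration of the kind of sensitivity analysis described, the sketch below uses a generic near-coincident-fault approximation for a duplex architecture (an assumed model, not one of the paper's three six-component architectures) and computes normalized sensitivities to fault rate, recovery rate, and operating time.

    ```python
    # Illustrative sketch only: a generic near-coincident-fault approximation for a
    # duplex system, not the specific formulas from the paper.
    # P_fail ~= (expected first faults in [0, T]) * P(second fault before recovery)
    #        ~= 2*lam*T * lam / (lam + mu)       (valid when lam*T << 1)
    def p_fail(lam, mu, T):
        return 2.0 * lam * T * lam / (lam + mu)

    def elasticity(f, args, name, rel=1e-6):
        """Normalized sensitivity d(log f)/d(log x): % change in f per % change in x."""
        base = f(**args)
        bumped = dict(args)
        bumped[name] *= (1.0 + rel)
        return (f(**bumped) - base) / (base * rel)

    params = dict(lam=1e-4, mu=3.6e3, T=10.0)   # per-hour fault rate, recovery rate, hours
    print("P_fail =", p_fail(**params))
    for name in params:
        print(name, "elasticity:", round(elasticity(p_fail, params, name), 3))
    # Fault rate dominates (elasticity ~2), recovery rate ~ -1, operating time ~ +1,
    # echoing the paper's point that the architecture determines which parameters matter.
    ```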

  5. Predicate calculus for an architecture of multiple neural networks

    NASA Astrophysics Data System (ADS)

    Consoli, Robert H.

    1990-08-01

    Future projects with neural networks will require multiple individual network components. Current efforts along these lines are ad hoc. This paper relates the neural network to a classical device and derives a multi-part architecture from that model. Further, it provides a Predicate Calculus variant for describing the location and nature of the trainings, and suggests Resolution Refutation as a method for determining the performance of the system as well as the location of needed trainings for specific proofs. Recently, investigators have reported architectures of multiple neural networks. These efforts appear at an early stage in neural network research and are characterized by architectures suggested directly by the problem space. Touretzky and Hinton suggest an architecture for processing logical statements; the design of this architecture arises from the syntax of a restricted class of logical expressions and exhibits syntactic limitations. In similar fashion, multiple neural networks arise out of a control problem, the sequence-learning problem, and the domain of machine learning. But a general theory of multiple neural devices is missing. More general attempts to relate single or multiple neural networks to classical computing devices are not common, although an attempt has been made to relate single neural devices to a Turing machine, and Sun et al. develop a multiple neural architecture that performs pattern classification.

  6. Migration strategies for service-enabling ground control stations for unmanned systems

    NASA Astrophysics Data System (ADS)

    Kroculick, Joseph B.

    2011-06-01

    Future unmanned systems will be integrated into the Global Information Grid (GIG) and support net-centric data sharing, where information in a domain is exposed to a wide variety of GIG stakeholders that can make use of the information provided. Adopting a Service-Oriented Architecture (SOA) approach to package reusable UAV control station functionality into common control services provides a number of benefits including enabling dynamic plug and play of components depending on changing mission requirements, supporting information sharing to the enterprise, and integrating information from authoritative sources such as mission planners with the UAV control stations data model. It also allows the wider enterprise community to use the services provided by unmanned systems and improve data quality to support more effective decision-making. We explore current challenges in migrating UAV control systems that manage multiple types of vehicles to a Service-Oriented Architecture (SOA). Service-oriented analysis involves reviewing legacy systems and determining which components can be made into a service. Existing UAV control stations provide audio/visual, navigation, and vehicle health and status information that are useful to C4I systems. However, many were designed to be closed systems with proprietary software and hardware implementations, message formats, and specific mission requirements. An architecture analysis can be performed that reviews legacy systems and determines which components can be made into a service. A phased SOA adoption approach can then be developed that improves system interoperability.

  7. Lessons Learned from Engineering a Multi-Mission Satellite Operations Center

    NASA Technical Reports Server (NTRS)

    Madden, Maureen; Cary, Everett, Jr.; Esposito, Timothy; Parker, Jeffrey; Bradley, David

    2006-01-01

    NASA's Small Explorers (SMEX) satellites have surpassed their designed science-lifetimes and their flight operations teams are now facing the challenge of continuing operations with reduced funding. At present, these missions are being reengineered into a fleet-oriented ground system at Goddard Space Flight Center (GSFC). When completed, this ground system will provide command and control of four SMEX missions and will demonstrate fleet automation and control concepts. As a path-finder for future mission consolidation efforts, this ground system will also demonstrate new ground-based technologies that show promise of supporting longer mission lifecycles and simplifying component integration. One of the core technologies being demonstrated in the SMEX Mission Operations Center is the GSFC Mission Services Evolution Center (GMSEC) architecture. The GMSEC architecture uses commercial Message Oriented Middleware with a common messaging standard to realize a higher level of component interoperability, allowing for interchangeable components in ground systems. Moreover, automation technologies utilizing the GMSEC architecture are being evaluated and implemented to provide extended lights-out operations. This mode of operation will provide routine monitoring and control of the heterogeneous spacecraft fleet. The operational concepts being developed will reduce the need for staffed contacts and are seen as a necessity for fleet management. This paper will describe the experiences of the integration team throughout the reengineering effort of the SMEX ground system. Additionally, lessons learned will be presented based on the team's experiences with integrating multiple missions into a fleet-based automated ground system.
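
    GMSEC's actual API and message specifications are not shown in the abstract; the sketch below only illustrates the underlying idea of components that interoperate through a common message bus and agreed subjects. The Bus class, subject string, and message fields are invented for this example.

    ```python
    # Illustrative sketch of the message-bus idea only; the subject name, fields, and
    # the Bus class are invented for this example and are not the GMSEC API.
    from collections import defaultdict

    class Bus:
        """Minimal in-process publish/subscribe bus standing in for message-oriented middleware."""
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, subject, callback):
            self.subscribers[subject].append(callback)

        def publish(self, subject, message):
            for callback in self.subscribers[subject]:
                callback(message)

    # Components agree only on subjects and message fields, not on each other's internals,
    # so a telemetry archiver or an automation tool can be swapped without touching the rest.
    bus = Bus()
    bus.subscribe("MISSION.SMEX.TLM.TEMP",
                  lambda msg: print("archiver stored", msg))
    bus.subscribe("MISSION.SMEX.TLM.TEMP",
                  lambda msg: print("limit checker alarmed" if msg["value"] > 40 else "nominal"))
    bus.publish("MISSION.SMEX.TLM.TEMP", {"value": 42, "units": "degC"})
    ```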

  8. Lessons Learned from Engineering a Multi-Mission Satellite Operations Center

    NASA Technical Reports Server (NTRS)

    Madden, Maureen; Cary, Everett, Jr.; Esposito, Timothy; Parker, Jeffrey; Bradley, David

    2006-01-01

    NASA's Small Explorers (SMEX) satellites have surpassed their designed science-lifetimes and their flight operations teams are now facing the challenge of continuing operations with reduced funding. At present, these missions are being re-engineered into a fleet-oriented ground system at Goddard Space Flight Center (GSFC). When completed, this ground system will provide command and control of four SMEX missions and will demonstrate fleet automation and control concepts. As a path-finder for future mission consolidation efforts, this ground system will also demonstrate new ground-based technologies that show promise of supporting longer mission lifecycles and simplifying component integration. One of the core technologies being demonstrated in the SMEX Mission Operations Center is the GSFC Mission Services Evolution Center (GMSEC) architecture. The GMSEC architecture uses commercial Message Oriented Middleware with a common messaging standard to realize a higher level of component interoperability, allowing for interchangeable components in ground systems. Moreover, automation technologies utilizing the GMSEC architecture are being evaluated and implemented to provide extended lights-out operations. This mode of operation will provide routine monitoring and control of the heterogeneous spacecraft fleet. The operational concepts being developed will reduce the need for staffed contacts and are seen as a necessity for fleet management. This paper will describe the experiences of the integration team throughout the re-engineering effort of the SMEX ground system. Additionally, lessons learned will be presented based on the team's experiences with integrating multiple missions into a fleet-automated ground system.

  9. HYDRA : High-speed simulation architecture for precision spacecraft formation simulation

    NASA Technical Reports Server (NTRS)

    Martin, Bryan J.; Sohl, Garett

    2003-01-01

    Hydra, the Hierarchical Distributed Reconfigurable Architecture, is a scalable simulation architecture that provides flexibility and ease of use while taking advantage of modern computation and communication hardware. It also provides the ability to implement distributed- or workstation-based simulations and high-fidelity real-time simulation from a common core. Originally designed to serve as a research platform for examining fundamental challenges in formation flying simulation for future space missions, it is also finding use in other missions and applications, all of which can take advantage of the underlying object-oriented structure to easily produce distributed simulations. Hydra automates the process of connecting disparate simulation components (Hydra Clients) through a client-server architecture that uses high-level descriptions of the data associated with each client to find and forge desirable connections (Hydra Services) at run time. Services communicate through the use of Connectors, which abstract messaging to provide single-interface access to any desired communication protocol, from shared-memory message passing to TCP/IP, ACE, and CORBA. Hydra shares many features with the HLA, while providing more flexibility in connectivity services and behavior overriding.
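
    The Connector abstraction can be sketched as follows. The class names and the in-memory and TCP variants are placeholders chosen for illustration, not HYDRA's actual implementation or its supported protocols.

    ```python
    # Illustrative sketch of the Connector abstraction only; class names and transports
    # are placeholders, not HYDRA's actual code.
    from abc import ABC, abstractmethod
    import json
    import queue
    import socket

    class Connector(ABC):
        """Single send/receive interface hiding the underlying transport."""
        @abstractmethod
        def send(self, message: dict): ...
        @abstractmethod
        def receive(self) -> dict: ...

    class InMemoryConnector(Connector):
        """Stands in for shared-memory message passing between co-located clients."""
        def __init__(self):
            self._q = queue.Queue()
        def send(self, message):
            self._q.put(message)
        def receive(self):
            return self._q.get()

    class TcpConnector(Connector):
        """Same interface over a TCP socket (connection setup elided)."""
        def __init__(self, sock: socket.socket):
            self._file = sock.makefile("rw")
        def send(self, message):
            self._file.write(json.dumps(message) + "\n")
            self._file.flush()
        def receive(self):
            return json.loads(self._file.readline())

    # A simulation client written once against Connector can run locally or distributed
    # depending on which concrete connector it is handed at run time.
    local = InMemoryConnector()
    local.send({"state": "position", "value": [1.0, 2.0, 3.0]})
    print(local.receive())
    ```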

  10. ITS component specification. Appendix B, Input data flows for components

    DOT National Transportation Integrated Search

    1997-11-01

    The objective of the Polaris Project is to define an Intelligent Transportation Systems (ITS) architecture for the state of Minnesota. This appendix defines the input data flows for each component of the Polaris Physical Architecture.

  11. ITS component specification. Appendix C, Output data flows for components

    DOT National Transportation Integrated Search

    1997-01-01

    The objective of the Polaris Project is to define an Intelligent Transportation Systems (ITS) architecture for the state of Minnesota. This appendix defines the output data flows for each component of the Polaris Physical Architecture.

  12. Phylogeny informs ontogeny: a proposed common theme in the arterial pole of the vertebrate heart

    PubMed Central

    Grimes, Adrian C.; Durán, Ana Carmen; Sans-Coma, Valentín; Hami, Danyal; Santoro, Massimo M.; Torres, Miguel

    2014-01-01

    In chick and mouse embryogenesis, a population of cells described as the secondary heart field (SHF) adds both myocardium and smooth muscle to the developing cardiac outflow tract (OFT). Following this addition, at approximately HH stage 22 in chick embryos, for example, the SHF can be identified architecturally by an overlapping seam at the arterial pole, where beating myocardium forms a junction with the smooth muscle of the arterial system. Previously, using either immunohistochemistry or nitric oxide indicators such as diaminofluorescein 2-diacetate, we have shown that a similar overlapping architecture also exists in the arterial pole of zebrafish and some shark species. However, although recent work suggests that development of the zebrafish OFT may also proceed by addition of a SHF-like population of cells, the presence of a true SHF in zebrafish and in many other developmental biological models remains an open question. We performed a comprehensive morphological study of the OFT of a wide range of vertebrates. Our data suggest that all vertebrates possess three fundamental OFT components: a proximal myocardial component, a distal smooth muscle component, and a middle component that contains overlapping myocardium and smooth muscle surrounding and supporting the outflow valves. Because the middle OFT component of avians and mammals is derived from the SHF, our observations suggest that a SHF may be an evolutionarily conserved theme in vertebrate embryogenesis. PMID:21040422

  13. NATO Human View Architecture and Human Networks

    NASA Technical Reports Server (NTRS)

    Handley, Holly A. H.; Houston, Nancy P.

    2010-01-01

    The NATO Human View is a system architectural viewpoint that focuses on the human as part of a system. Its purpose is to capture the human requirements and to inform on how the human impacts the system design. The viewpoint contains seven static models that include different aspects of the human element, such as roles, tasks, constraints, training and metrics. It also includes a Human Dynamics component to perform simulations of the human system under design. One of the static models, termed Human Networks, focuses on the human-to-human communication patterns that occur as a result of ad hoc or deliberate team formation, especially teams distributed across space and time. Parameters of human teams that affect system performance can be captured in this model. Human-centered aspects of networks, such as differences in operational tempo (sense of urgency), priorities (common goal), and team history (knowledge of the other team members), can be incorporated. The information captured in the Human Network static model can then be included in the Human Dynamics component so that the impact of distributed teams is represented in the simulation. As the NATO militaries transform to a more networked force, the Human View architecture is an important tool that can be used to make recommendations on the proper mix of technological innovations and human interactions.

  14. Managing Scientific Software Complexity with Bocca and CCA

    DOE PAGES

    Allan, Benjamin A.; Norris, Boyana; Elwasif, Wael R.; ...

    2008-01-01

    In high-performance scientific software development, the emphasis is often on short time to first solution. Even when the development of new components mostly reuses existing components or libraries and only small amounts of new code must be created, dealing with the component glue code and software build processes to obtain complete applications is still tedious and error-prone. Component-based software meant to reduce complexity at the application level increases complexity to the extent that the user must learn and remember the interfaces and conventions of the component model itself. To address these needs, we introduce Bocca, the first tool to enable application developers to perform rapid component prototyping while maintaining robust software-engineering practices suitable to HPC environments. Bocca provides project management and a comprehensive build environment for creating and managing applications composed of Common Component Architecture components. Of critical importance for high-performance computing (HPC) applications, Bocca is designed to operate in a language-agnostic way, simultaneously handling components written in any of the languages commonly used in scientific applications: C, C++, Fortran, Python and Java. Bocca automates the tasks related to the component glue code, freeing the user to focus on the scientific aspects of the application. Bocca embraces the philosophy pioneered by Ruby on Rails for web applications: start with something that works, and evolve it to the user's purpose.
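
    Bocca's command set and the generated CCA interfaces are not reproduced here; the sketch below only illustrates, in plain Python, the provides/uses port pattern whose glue code such tools automate. All class and port names are invented.

    ```python
    # Illustrative sketch of the provides/uses port pattern that CCA-style glue code
    # automates; the port and component names are invented, not generated by Bocca.
    class IntegratorPort:
        """A 'provides' port: an interface one component exposes to others."""
        def integrate(self, f, a, b, n=1000):
            raise NotImplementedError

    class MidpointIntegrator(IntegratorPort):
        """A component that provides the IntegratorPort."""
        def integrate(self, f, a, b, n=1000):
            h = (b - a) / n
            return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

    class Driver:
        """A component that declares a 'uses' port and is wired up by the framework."""
        def __init__(self):
            self.integrator = None          # filled in by the framework, not by the user

        def set_port(self, name, impl):
            if name == "integrator":
                self.integrator = impl

        def run(self):
            return self.integrator.integrate(lambda x: x * x, 0.0, 1.0)

    # The 'framework' wires provides ports to uses ports so components stay independent.
    driver = Driver()
    driver.set_port("integrator", MidpointIntegrator())
    print(driver.run())    # ~1/3
    ```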

  15. A Conceptual Architecture for National Biosurveillance: Moving Beyond Situational Awareness to Enable Digital Detection of Emerging Threats.

    PubMed

    Velsko, Stephan; Bates, Thomas

    2016-01-01

    Despite numerous calls for improvement, the US biosurveillance enterprise remains a patchwork of uncoordinated systems that fail to take advantage of the rapid progress in information processing, communication, and analytics made in the past decade. By synthesizing components from the extensive biosurveillance literature, we propose a conceptual framework for a national biosurveillance architecture and provide suggestions for implementation. The framework differs from the current federal biosurveillance development pathway in that it is not focused on systems useful for "situational awareness" but is instead focused on the long-term goal of having true warning capabilities. Therefore, a guiding design objective is the ability to digitally detect emerging threats that span jurisdictional boundaries, because attempting to solve the most challenging biosurveillance problem first provides the strongest foundation to meet simpler surveillance objectives. Core components of the vision are: (1) a whole-of-government approach to support currently disparate federal surveillance efforts that have a common data need, including those for food safety, vaccine and medical product safety, and infectious disease surveillance; (2) an information architecture that enables secure national access to electronic health records, yet does not require that data be sent to a centralized location for surveillance analysis; (3) an inference architecture that leverages advances in "big data" analytics and learning inference engines-a significant departure from the statistical process control paradigm that underpins nearly all current syndromic surveillance systems; and (4) an organizational architecture with a governance model aimed at establishing national biosurveillance as a critical part of the US national infrastructure. Although it will take many years to implement, and a national campaign of education and debate to acquire public buy-in for such a comprehensive system, the potential benefits warrant increased consideration by the US government.

  16. Portable Map-Reduce Utility for MIT SuperCloud Environment

    DTIC Science & Technology

    2015-09-17

    The big data architecture, which is designed to address these challenges, is made up of the computing resources, scheduler, central storage file system, ... databases, analytics software, and web interfaces [1]. These components are common to many big data and supercomputing systems.

  17. 10th Annual CMMI Technology Conference and User Group Tutorial Session

    DTIC Science & Technology

    2010-11-15

    Fragmentary excerpt from the tutorial slides "Reuse That Pays Off: Software Product Lines" and "CMMI V1.3 and Architecture" (Carnegie Mellon University, October 2010): components and services are built to be shared across an application domain in support of business goals, and product lines amount to strategic reuse; a quality attribute such as performance can sometimes be partitioned for unique allocation to each product component as a derived ...

  18. Variable pleiotropic effects from mutations at the same locus hamper prediction of fitness from a fitness component.

    PubMed

    Pepin, Kim M; Samuel, Melanie A; Wichman, Holly A

    2006-04-01

    The relationship of genotype, fitness components, and fitness can be complicated by genetic effects such as pleiotropy and epistasis and by heterogeneous environments. However, because it is often difficult to measure genotype and fitness directly, fitness components are commonly used to estimate fitness without regard to genetic architecture. The small bacteriophage ΦX174 enables direct evaluation of genetic and environmental effects on fitness components and fitness. We used 15 mutants to study mutation effects on attachment rate and fitness in six hosts. The mutants differed from our lab strain of ΦX174 by only one or two amino acids in the major capsid protein (gpF, sites 101 and 102). The sites are variable in natural and experimentally evolved ΦX174 populations and affect phage attachment rate. Within the limits of detection of our assays, all mutations were neutral or deleterious relative to the wild type; 11 mutants had decreased host range. While fitness was predictable from attachment rate in most cases, 3 mutants had rapid attachment but low fitness on most hosts. Thus, some mutations had a pleiotropic effect on a fitness component other than attachment rate. In addition, on one host most mutants had high attachment rate but decreased fitness, suggesting that pleiotropic effects also depended on host. The data highlight that even in this simple, well-characterized system, prediction of fitness from a fitness component depends on genetic architecture and environment.

  19. An architecture for integrating distributed and cooperating knowledge-based Air Force decision aids

    NASA Technical Reports Server (NTRS)

    Nugent, Richard O.; Tucker, Richard W.

    1988-01-01

    MITRE has been developing a Knowledge-Based Battle Management Testbed for evaluating the viability of integrating independently-developed knowledge-based decision aids in the Air Force tactical domain. The primary goal for the testbed architecture is to permit a new system to be added to a testbed with little change to the system's software. Each system that connects to the testbed network declares that it can provide a number of services to other systems. When a system wants to use another system's service, it does not address the server system by name, but instead transmits a request to the testbed network asking for a particular service to be performed. A key component of the testbed architecture is a common database which uses a relational database management system (RDBMS). The RDBMS provides a database update notification service to requesting systems. Normally, each system is expected to monitor data relations of interest to it. Alternatively, a system may broadcast an announcement message to inform other systems that an event of potential interest has occurred. Current research is aimed at dealing with issues resulting from integration efforts, such as dealing with potential mismatches of each system's assumptions about the common database, decentralizing network control, and coordinating multiple agents.
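
    The testbed's actual protocol is not given in the abstract. The sketch below illustrates only the service-name addressing idea (request a service, not a named server); the registry class, service name, and payload are assumptions for the example.

    ```python
    # Illustrative sketch of service-name addressing only; the registry, service name,
    # and payloads are invented and do not reproduce the MITRE testbed protocol.
    class TestbedNetwork:
        """Routes requests by declared service name rather than by server identity."""
        def __init__(self):
            self._providers = {}

        def declare_service(self, service, handler):
            self._providers[service] = handler

        def request(self, service, **kwargs):
            if service not in self._providers:
                raise LookupError(f"no system currently provides {service!r}")
            return self._providers[service](**kwargs)

    net = TestbedNetwork()
    # A decision aid declares what it can do, not who it is.
    net.declare_service("threat_assessment",
                        lambda track: {"track": track, "hostile": track["speed"] > 500})
    # Another system asks for the service without knowing which aid performs it.
    print(net.request("threat_assessment", track={"id": 7, "speed": 620}))
    ```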

  20. Structure of bacterial lipopolysaccharides.

    PubMed

    Caroff, Martine; Karibian, Doris

    2003-11-14

    Bacterial lipopolysaccharides are the major components of the outer surface of Gram-negative bacteria. They are often of interest in medicine for their immunomodulatory properties. In small amounts they can be beneficial, but in larger amounts they may cause endotoxic shock. Although they share a common architecture, their structural details exert a strong influence on their activity. These molecules comprise: a lipid moiety, called lipid A, which is considered to be the endotoxic component, a glycosidic part consisting of a core of approximately 10 monosaccharides and, in "smooth-type" lipopolysaccharides, a third region, named O-chain, consisting of repetitive subunits of one to eight monosaccharides responsible for much of the immunospecificity of the bacterial cell.

  1. Achieving Better Buying Power through Acquisition of Open Architecture Software Systems. Volume 2 Understanding Open Architecture Software Systems: Licensing and Security Research and Recommendations

    DTIC Science & Technology

    2016-01-06

    ... best-of-breed software components and software product lines (SPLs) that are subject to different IP license and cybersecurity requirements, along with commercially priced closed source software components, to be used in the design, implementation, deployment, and evolution of open architecture (OA) systems ...

  2. Toward a Fault Tolerant Architecture for Vital Medical-Based Wearable Computing.

    PubMed

    Abdali-Mohammadi, Fardin; Bajalan, Vahid; Fathi, Abdolhossein

    2015-12-01

    Advancements in computers and electronic technologies have led to the emergence of a new generation of efficient small intelligent systems. The products of such technologies might include Smartphones and wearable devices, which have attracted the attention of medical applications. These products are used less in critical medical applications because of their resource constraint and failure sensitivity. This is due to the fact that without safety considerations, small integrated hardware will endanger patients' lives. Therefore, proposing some principles is required to construct wearable systems in healthcare so that the existing concerns are dealt with. Accordingly, this paper proposes an architecture for constructing wearable systems in critical medical applications. The proposed architecture is a three-tier one, supporting data flow from body sensors to cloud. The tiers of this architecture include wearable computers, mobile computing, and mobile cloud computing. One of the features of this architecture is its high possible fault tolerance due to the nature of its components. Moreover, the required protocols are presented to coordinate the components of this architecture. Finally, the reliability of this architecture is assessed by simulating the architecture and its components, and other aspects of the proposed architecture are discussed.
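
    The paper's protocols are not detailed in the abstract; the sketch below shows one generic store-and-forward pattern a middle tier could use so that sensor data survive a temporarily unreachable cloud tier. The tier classes and buffering policy are assumptions for illustration, not the architecture proposed in the paper.

    ```python
    # Illustrative sketch of a sensor -> mobile -> cloud pipeline with local buffering;
    # tier names and the store-and-forward policy are assumptions for this example.
    import random

    class CloudTier:
        def __init__(self):
            self.records = []
        def ingest(self, sample):
            if random.random() < 0.3:               # simulate an unreachable cloud
                raise ConnectionError("cloud unreachable")
            self.records.append(sample)

    class MobileTier:
        """Buffers vital-sign samples locally and forwards them when the cloud is reachable."""
        def __init__(self, cloud):
            self.cloud, self.backlog = cloud, []
        def relay(self, sample):
            self.backlog.append(sample)
            still_pending = []
            for queued in self.backlog:
                try:
                    self.cloud.ingest(queued)
                except ConnectionError:
                    still_pending.append(queued)    # keep it; no data is lost
            self.backlog = still_pending

    cloud = CloudTier()
    mobile = MobileTier(cloud)
    for hr in (72, 74, 71, 90, 88):                 # wearable tier emits heart-rate samples
        mobile.relay({"hr": hr})
    print(len(cloud.records), "delivered,", len(mobile.backlog), "still buffered")
    ```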

  3. Technology Challenges for Deep-Throttle Cryogenic Engines for Space Exploration

    NASA Technical Reports Server (NTRS)

    Brown, Kendall K.; Nelson, Karl W.

    2005-01-01

    Historically, cryogenic rocket engines have not been used for in-space applications due to their additional complexity, the mission need for high reliability, and the challenges of propellant boil-off. While the mission and vehicle architectures are not yet defined for the lunar and Martian robotic and human exploration objectives, cryogenic rocket engines offer the potential for higher performance and greater architecture/mission flexibility. In-situ cryogenic propellant production could enable a more robust exploration program by significantly reducing the propellant mass delivered to low earth orbit, thus warranting the evaluation of cryogenic rocket engines versus the hypergolic bi-propellant engines used in the Apollo program. A multi-use engine, one which can provide the functionality that separate engines provided in the Apollo mission architecture, is desirable for lunar and Mars exploration missions because it increases overall architecture effectiveness through commonality and modularity. The engine requirement derivation process must address each unique mission application and each unique phase within each mission. The resulting requirements, such as thrust level, performance, packaging, burn duration, number of operations; required impulses for each trajectory phase; operation after extended space or surface exposure; availability for inspection and maintenance; throttle range for planetary descent, ascent, acceleration limits and many more must be addressed. Within engine system studies, the system and component technology, capability, and risks must be evaluated and a balance between the appropriate amount of technology-push and technology-pull must be addressed. This paper will summarize many of the key technology challenges associated with using high-performance cryogenic liquid propellant rocket engine systems and components in the exploration program architectures. The paper is divided into two areas. The first area describes how the mission requirements affect the engine system requirements and create system level technology challenges. An engine system architecture for multiple applications or a family of engines based upon a set of core technologies, design, and fabrication approaches may reduce overall programmatic cost and risk. The engine system discussion will also address the characterization of engine cycle figures of merit, configurations, and design approaches for some in-space vehicle alternatives under consideration. The second area evaluates the component-level technology challenges induced from the system requirements. Component technology issues are discussed addressing injector, thrust chamber, ignition system, turbopump assembly, and valve design for the challenging requirements of high reliability, robustness, fault tolerance, deep throttling, and reasonable performance (with respect to weight and specific impulse).

  4. Technology Challenges for Deep-Throttle Cryogenic Engines for Space Exploration

    NASA Astrophysics Data System (ADS)

    Brown, Kendall K.; Nelson, Karl W.

    2005-02-01

    Historically, cryogenic rocket engines have not been used for in-space applications due to their additional complexity, the mission need for high reliability, and the challenges of propellant boil-off. While the mission and vehicle architectures are not yet defined for the lunar and Martian robotic and human exploration objectives, cryogenic rocket engines offer the potential for higher performance and greater architecture/mission flexibility. In-situ cryogenic propellant production could enable a more robust exploration program by significantly reducing the propellant mass delivered to low earth orbit, thus warranting the evaluation of cryogenic rocket engines versus the hypergolic bipropellant engines used in the Apollo program. A multi-use engine, one which can provide the functionality that separate engines provided in the Apollo mission architecture, is desirable for lunar and Mars exploration missions because it increases overall architecture effectiveness through commonality and modularity. The engine requirement derivation process must address each unique mission application and each unique phase within each mission. The resulting requirements, such as thrust level, performance, packaging, burn duration, number of operations; required impulses for each trajectory phase; operation after extended space or surface exposure; availability for inspection and maintenance; throttle range for planetary descent, ascent, acceleration limits and many more must be addressed. Within engine system studies, the system and component technology, capability, and risks must be evaluated and a balance between the appropriate amount of technology-push and technology-pull must be addressed. This paper will summarize many of the key technology challenges associated with using high-performance cryogenic liquid propellant rocket engine systems and components in the exploration program architectures. The paper is divided into two areas. The first area describes how the mission requirements affect the engine system requirements and create system level technology challenges. An engine system architecture for multiple applications or a family of engines based upon a set of core technologies, design, and fabrication approaches may reduce overall programmatic cost and risk. The engine system discussion will also address the characterization of engine cycle figures of merit, configurations, and design approaches for some in-space vehicle alternatives under consideration. The second area evaluates the component-level technology challenges induced from the system requirements. Component technology issues are discussed addressing injector, thrust chamber, ignition system, turbopump assembly, and valve design for the challenging requirements of high reliability, robustness, fault tolerance, deep throttling, reasonable performance (with respect to weight and specific impulse).

  5. An Autonomous Autopilot Control System Design for Small-Scale UAVs

    NASA Technical Reports Server (NTRS)

    Ippolito, Corey; Pai, Ganeshmadhav J.; Denney, Ewen W.

    2012-01-01

    This paper describes the design and implementation of a fully autonomous and programmable autopilot system for small scale autonomous unmanned aerial vehicle (UAV) aircraft. This system was implemented in Reflection and has flown on the Exploration Aerial Vehicle (EAV) platform at NASA Ames Research Center, currently only as a safety backup for an experimental autopilot. The EAV and ground station are built on a component-based architecture called the Reflection Architecture. The Reflection Architecture is a prototype for a real-time embedded plug-and-play avionics system architecture which provides a transport layer for real-time communications between hardware and software components, allowing each component to focus solely on its implementation. The autopilot module described here, although developed in Reflection, contains no design elements dependent on this architecture.

  6. A Hybrid Power Management (HPM) Based Vehicle Architecture

    NASA Technical Reports Server (NTRS)

    Eichenberg, Dennis J.

    2011-01-01

    Society desires vehicles with reduced fuel consumption and reduced emissions. This presents a challenge and an opportunity for industry and the government. The NASA John H. Glenn Research Center (GRC) has developed a Hybrid Power Management (HPM) based vehicle architecture for space and terrestrial vehicles. GRC's Electrical and Electromagnetics Branch of the Avionics and Electrical Systems Division initiated the HPM Program for the GRC Technology Transfer and Partnership Office. HPM is the innovative integration of diverse, state-of-the-art power devices in an optimal configuration for space and terrestrial applications. The appropriate application and control of the various power devices significantly improves overall system performance and efficiency. The basic vehicle architecture consists of a primary power source, and possibly other power sources, providing all power to a common energy storage system, which is used to power the drive motors and vehicle accessory systems, as well as provide power as an emergency power system. Each component is independent, permitting it to be optimized for its intended purpose. This flexible vehicle architecture can be applied to all vehicles to considerably improve system efficiency, reliability, safety, security, and performance. This unique vehicle architecture has the potential to alleviate global energy concerns, improve the environment, stimulate the economy, and enable new missions.

  7. Injury Potential Testing of Suited Occupants During Dynamic Spacecraft Flight Phases

    NASA Technical Reports Server (NTRS)

    McFarland, Shane M.

    2011-01-01

    In support of the NASA Constellation Program, a space-suit architecture was envisioned for support of Launch, Entry, Abort, Micro-g EVA, Post Landing crew operations, and under emergency conditions, survival. This space suit architecture is unique in comparison to previous launch, entry, and abort (LEA) suit architectures in that it utilized rigid mobility elements in the scye and the upper arm regions. The suit architecture also employed rigid thigh disconnect elements to allow for quick disconnect functionality above the knee which allowed for commonality of the lower portion of the suit across two suit configurations. This suit architecture was designed to interface with the Orion seat subsystem, which includes seat components, lateral supports, and restraints. Due to this unique configuration of spacesuit mobility elements, combined with the need to provide occupant protection during dynamic landing events, risks were identified with potential injury due to the suit characteristics described above. To address the risk concerns, a test series was developed to evaluate the likelihood and consequences of these potential issues. Testing included use of Anthropomorphic Test Devices (ATDs), Post Mortem Human Subjects (PMHS), and representative seat/suit hardware in combination with high linear acceleration events. The ensuing treatment focuses on detailed results of the testing that has been conducted under this test series thus far.

  8. Injury Potential Testing of Suited Occupants During Dynamic Spacecraft Flight Phases

    NASA Technical Reports Server (NTRS)

    McFarland, Shane M.

    2010-01-01

    In support of the Constellation Program, a space-suit architecture was envisioned for support of Launch, Entry, Abort, Micro-g EVA, Post Landing crew operations, and under emergency conditions, survival. This space suit architecture is unique in comparison to previous launch, entry, and abort (LEA) suit architectures in that it utilized rigid mobility elements in the scye and the upper arm regions. The suit architecture also employed rigid thigh disconnect elements to allow for quick disconnect functionality above the knee which allowed for commonality of the lower portion of the suit across two suit configurations. This suit architecture was designed to interface with the Orion seat subsystem, which includes seat components, lateral supports, and restraints. Due to this unique configuration of spacesuit mobility elements, combined with the need to provide occupant protection during dynamic landing events, risks were identified with potential injury due to the suit characteristics described above. To address the risk concerns, a test series was developed to evaluate the likelihood and consequences of these potential issues. Testing included use of Anthropomorphic Test Devices (ATDs), Post Mortem Human Subjects (PMHS), and representative seat/suit hardware in combination with high linear acceleration events. The ensuing treatment focuses on detailed results of the testing that has been conducted under this test series thus far.

  9. Integrating planning and reaction: A preliminary report

    NASA Technical Reports Server (NTRS)

    Bresina, John L.; Drummond, Mark

    1990-01-01

    The Entropy Reduction Engine architecture for integrating planning, scheduling, and control is examined. The architecture is motivated through a NASA mission scenario and a brief list of design goals. An overview is presented of the Entropy Reduction Engine architecture by describing its major components, their interactions, and the way in which these interacting components satisfy the design goals.

  10. Case of gastric neuroendocrine carcinoma showing an interesting tumorigenic pathway.

    PubMed

    Uesugi, Noriyuki; Sugimoto, Ryo; Eizuka, Makoto; Fujita, Yasuko; Osakabe, Mitsumasa; Koeda, Keisuke; Kosaka, Takashi; Yanai, Shunichi; Ishida, Kazuyuki; Sasaki, Akira; Matsumoto, Takayuki; Sugai, Tamotsu

    2017-11-16

    Here, we report a case of gastric neuroendocrine carcinoma showing an interesting tumorigenic pathway. A 57-year-old Japanese woman presented with epigastric tenderness, and distal gastrectomy was performed. In the surgical specimen, histologically, the tumor tissue was composed of three subtypes of tumor components showing different histological architecture and cellular atypia, diagnosed as neuroendocrine tumor (NET) G2, NET G3, and neuroendocrine carcinoma (NEC) components. Immunohistochemically, the Ki-67-positive rates of NET G2, NET G3, and NEC components were 6.5%, 99.5% and 88.1%, respectively. Although allelic imbalance (AI) on chromosomes 1p, 3p, 8q, TP53, 18q and 22q was commonly found in all components, AI of 4p was found in NET G3 and NEC components (but not in the NET G2 component). In contrast, AIs of 5q and 9p were found in only the NEC component. Thus, we showed the progression from NET G2 to NEC, via NET G3, within the same tumor.

  11. Component-Based Approach in Learning Management System Development

    ERIC Educational Resources Information Center

    Zaitseva, Larisa; Bule, Jekaterina; Makarov, Sergey

    2013-01-01

    The paper describes a component-based approach (CBA) for learning management system development. Learning objects as components of e-learning courses and their metadata are considered. The learning management system based on CBA being developed at Riga Technical University, namely its architecture, elements, and possibilities, are…

  12. Architectures for single-chip image computing

    NASA Astrophysics Data System (ADS)

    Gove, Robert J.

    1992-04-01

    This paper will focus on the architectures of VLSI programmable processing components for image computing applications. TI, the maker of industry-leading RISC, DSP, and graphics components, has developed an architecture for a new-generation of image processors capable of implementing a plurality of image, graphics, video, and audio computing functions. We will show that the use of a single-chip heterogeneous MIMD parallel architecture best suits this class of processors--those which will dominate the desktop multimedia, document imaging, computer graphics, and visualization systems of this decade.

  13. Model-Driven Architecture for Agent-Based Systems

    NASA Technical Reports Server (NTRS)

    Gradanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.

    2004-01-01

    The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, using some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar) is selected for the implementation platform. The resulting Cougaar MDA prescribes certain kinds of models to be used, how those models may be prepared and the relationships of the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain level model specifications in order to generate software artifacts. The software artifacts generation is based on a metamodel. Each component maps to a UML structured component which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.
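
    Cougaar's metamodel and generators are not shown in the abstract; the sketch below only illustrates the general model-driven idea of producing several artifacts (code stub, documentation line, test skeleton) from one component specification. The spec format and templates are invented for this example.

    ```python
    # Illustrative sketch of model-driven artifact generation only; the spec format and
    # the generated templates are invented, not the Cougaar MDA metamodel.
    spec = {
        "component": "RouteAssessor",
        "operations": [{"name": "assess", "params": ["route"], "returns": "Score"}],
    }

    def gen_code(spec):
        lines = [f"class {spec['component']}:"]
        for op in spec["operations"]:
            args = ", ".join(["self"] + op["params"])
            lines += [f"    def {op['name']}({args}):",
                      f"        raise NotImplementedError  # returns {op['returns']}"]
        return "\n".join(lines)

    def gen_docs(spec):
        ops = "; ".join(f"{o['name']}({', '.join(o['params'])}) -> {o['returns']}"
                        for o in spec["operations"])
        return f"{spec['component']}: {ops}"

    def gen_tests(spec):
        return "\n".join(f"def test_{o['name']}():\n    assert False  # TODO"
                         for o in spec["operations"])

    # One platform-independent spec, three platform-specific artifacts.
    print(gen_code(spec))
    print(gen_docs(spec))
    print(gen_tests(spec))
    ```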

  14. ITS component specification

    DOT National Transportation Integrated Search

    1997-01-20

    The objective of the Polaris Project is to define an Intelligent Transportation Systems (ITS) architecture for the state of Minnesota. An architecture is a framework that defines how multiple ITS Components interrelate and contribute to the overall I...

  15. A comparative analysis of loop heat pipe based thermal architectures for spacecraft thermal control

    NASA Technical Reports Server (NTRS)

    Pauken, Mike; Birur, Gaj

    2004-01-01

    Loop Heat Pipes (LHP) have gained acceptance as a viable means of heat transport in many spacecraft in recent years. However, applications using LHP technology tend to only remove waste heat from a single component to an external radiator. Removing heat from multiple components has been done by using multiple LHPs. This paper discusses the development and implementation of a Loop Heat Pipe based thermal architecture for spacecraft. In this architecture, a Loop Heat Pipe with multiple evaporators and condensers is described in which heat load sharing and thermal control of multiple components can be achieved. A key element in using a LHP thermal architecture is defining the need for such an architecture early in the spacecraft design process. This paper describes an example in which a LHP based thermal architecture can be used and how such a system can have advantages in weight, cost and reliability over other kinds of distributed thermal control systems. The example used in this paper focuses on a Mars Rover Thermal Architecture. However, the principles described here are applicable to Earth orbiting spacecraft as well.

  16. MACCIS 2.0 - An Architecture Description Framework for Technical Infostructures and Their Enterprise Environment

    DTIC Science & Technology

    2004-06-01

    [Diagram labels: Viewpoint; Component Viewpoint; View; Architecture Description of Enterprise or Infostructure; Security Concern; Business Security Model; Business Stakeholder; IT Architect.] The security concern, when applied to the different viewpoints, addresses both stakeholders and is described as a business security model or component ...

  17. Architectural frameworks: defining the structures for implementing learning health systems.

    PubMed

    Lessard, Lysanne; Michalowski, Wojtek; Fung-Kee-Fung, Michael; Jones, Lori; Grudniewicz, Agnes

    2017-06-23

    The vision of transforming health systems into learning health systems (LHSs) that rapidly and continuously transform knowledge into improved health outcomes at lower cost is generating increased interest in government agencies, health organizations, and health research communities. While existing initiatives demonstrate that different approaches can succeed in making the LHS vision a reality, they are too varied in their goals, focus, and scale to be reproduced without undue effort. Indeed, the structures necessary to effectively design and implement LHSs on a larger scale are lacking. In this paper, we propose the use of architectural frameworks to develop LHSs that adhere to a recognized vision while being adapted to their specific organizational context. Architectural frameworks are high-level descriptions of an organization as a system; they capture the structure of its main components at varied levels, the interrelationships among these components, and the principles that guide their evolution. Because these frameworks support the analysis of LHSs and allow their outcomes to be simulated, they act as pre-implementation decision-support tools that identify potential barriers and enablers of system development. They thus increase the chances of successful LHS deployment. We present an architectural framework for LHSs that incorporates five dimensions-goals, scientific, social, technical, and ethical-commonly found in the LHS literature. The proposed architectural framework is comprised of six decision layers that model these dimensions. The performance layer models goals, the scientific layer models the scientific dimension, the organizational layer models the social dimension, the data layer and information technology layer model the technical dimension, and the ethics and security layer models the ethical dimension. We describe the types of decisions that must be made within each layer and identify methods to support decision-making. In this paper, we outline a high-level architectural framework grounded in conceptual and empirical LHS literature. Applying this architectural framework can guide the development and implementation of new LHSs and the evolution of existing ones, as it allows for clear and critical understanding of the types of decisions that underlie LHS operations. Further research is required to assess and refine its generalizability and methods.

  18. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability.

    PubMed

    Komatsoulis, George A; Warzel, Denise B; Hartel, Francis W; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; Coronado, Sherri de; Reeves, Dianne M; Hadfield, Jillaine B; Ludet, Christophe; Covitz, Peter A

    2008-02-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service-Oriented Architecture (SSOA) for cancer research by the National Cancer Institute's cancer Biomedical Informatics Grid (caBIG).

  19. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability

    PubMed Central

    Komatsoulis, George A.; Warzel, Denise B.; Hartel, Frank W.; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; de Coronado, Sherri; Reeves, Dianne M.; Hadfield, Jillaine B.; Ludet, Christophe; Covitz, Peter A.

    2008-01-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service Oriented Architecture (SSOA) for cancer research by the National Cancer Institute’s cancer Biomedical Informatics Grid (caBIG™). PMID:17512259

  20. The University of Washington Health Sciences Library BioCommons: an evolving Northwest biomedical research information support infrastructure

    PubMed Central

    Minie, Mark; Bowers, Stuart; Tarczy-Hornoch, Peter; Roberts, Edward; James, Rose A.; Rambo, Neil; Fuller, Sherrilynne

    2006-01-01

    Setting: The University of Washington Health Sciences Libraries and Information Center BioCommons serves the bioinformatics needs of researchers at the university and in the vibrant for-profit and not-for-profit biomedical research sector in the Washington area and region. Program Components: The BioCommons comprises services addressing internal University of Washington, not-for-profit, for-profit, and regional and global clientele. The BioCommons is maintained and administered by the BioResearcher Liaison Team. The BioCommons architecture provides a highly flexible structure for adapting to rapidly changing resources and needs. Evaluation Mechanisms: BioCommons uses Web-based pre- and post-course evaluations and periodic user surveys to assess service effectiveness. Recent surveys indicate substantial usage of BioCommons services and a high level of effectiveness and user satisfaction. Next Steps/Future Directions: BioCommons is developing novel collaborative Web resources to distribute bioinformatics tools and is experimenting with Web-based competency training in bioinformation resource use. PMID:16888667

  1. Structural Definition and Mass Estimation of Lunar Surface Habitats for the Lunar Architecture Team Phase 2 (LAT-2) Study

    NASA Technical Reports Server (NTRS)

    Dorsey, John T.; Wu, K. Chauncey; Smith, Russell W.

    2008-01-01

    The Lunar Architecture Team Phase 2 study defined and assessed architecture options for a Lunar Outpost at the Moon's South Pole. The Habitation Focus Element Team was responsible for developing concepts for all of the Habitats and pressurized logistics modules particular to each of the architectures, and defined the shapes, volumes and internal layouts considering human factors, surface operations and safety requirements, as well as Lander mass and volume constraints. The Structures Subsystem Team developed structural concepts, sizing estimates and mass estimates for the primary Habitat structure. In these studies, the primary structure was decomposed into a more detailed list of components to be sized to gain greater insight into concept mass contributors. Structural mass estimates were developed that captured the effect of major design parameters such as internal pressure load. Analytical and empirical equations were developed for each structural component identified. Over 20 different hard-shell, hybrid expandable and inflatable soft-shell Habitat and pressurized logistics module concepts were sized and compared to assess structural performance and efficiency during the study. Habitats were developed in three categories; Mini Habs that are removed from the Lander and placed on the Lunar surface, Monolithic habitats that remain on the Lander, and Habitats that are part of the Mobile Lander system. Each category of Habitat resulted in structural concepts with advantages and disadvantages. The same modular shell components could be used for the Mini Hab concept, maximizing commonality and minimizing development costs. Larger Habitats had higher volumetric mass efficiency and floor area than smaller Habitats (whose mass was dominated by fixed items such as domes and frames). Hybrid and pure expandable Habitat structures were very mass-efficient, but the structures technology is less mature, and the ability to efficiently package and deploy internal subsystems remains an open issue.
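
    The study's component equations are not reproduced in the abstract. As a hedged illustration of the kind of parametric estimate involved, the sketch below sizes a cylindrical shell with hemispherical domes using the standard thin-wall pressure-vessel relations; the material properties, factor of safety, and dimensions are assumed values, not those of the LAT-2 habitats.

    ```python
    # Illustrative parametric sizing sketch only: thin-wall pressure-vessel relations with
    # assumed aluminum properties, factor of safety, and habitat dimensions; these are not
    # the equations or values from the LAT-2 study.
    import math

    def shell_mass(p, radius, length, sigma_allow, rho, fs=2.0, t_min=0.001):
        """Mass of a cylindrical shell with two hemispherical end domes under pressure p."""
        t_cyl = max(fs * p * radius / sigma_allow, t_min)         # hoop stress governs
        t_dome = max(fs * p * radius / (2.0 * sigma_allow), t_min)
        m_cyl = rho * (2.0 * math.pi * radius * length) * t_cyl
        m_domes = rho * (4.0 * math.pi * radius**2) * t_dome
        return m_cyl + m_domes

    # Assumed values: 2.5 m radius, 6 m cylinder, aluminum (sigma_allow ~ 200 MPa,
    # rho ~ 2800 kg/m^3), internal pressures of 50 kPa and 101.3 kPa.
    for p in (50e3, 101.3e3):
        print(f"p = {p/1e3:.1f} kPa -> shell mass ~ {shell_mass(p, 2.5, 6.0, 200e6, 2800):.0f} kg")
    # Shell mass scales roughly linearly with internal pressure once above minimum gauge,
    # which is why pressure load is called out as a major design parameter.
    ```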

  2. Strategy Revealing Phenotypic Differences among Synthetic Oscillator Designs

    PubMed Central

    2015-01-01

    Considerable progress has been made in identifying and characterizing the component parts of genetic oscillators, which play central roles in all organisms. Nonlinear interaction among components is sufficiently complex that mathematical models are required to elucidate their elusive integrated behavior. Although natural and synthetic oscillators exhibit common architectures, there are numerous differences that are poorly understood. Utilizing synthetic biology to uncover basic principles of simpler circuits is a way to advance understanding of natural circadian clocks and rhythms. Following this strategy, we address the following questions: What are the implications of different architectures and molecular modes of transcriptional control for the phenotypic repertoire of genetic oscillators? Are there designs that are more realizable or robust? We compare synthetic oscillators involving one of three architectures and various combinations of the two modes of transcriptional control using a methodology that provides three innovations: a rigorous definition of phenotype, a procedure for deconstructing complex systems into qualitatively distinct phenotypes, and a graphical representation for illuminating the relationship between genotype, environment, and the qualitatively distinct phenotypes of a system. These methods provide a global perspective on the behavioral repertoire, facilitate comparisons of alternatives, and assist the rational design of synthetic gene circuitry. In particular, the results of their application here reveal distinctive phenotypes for several designs that have been studied experimentally as well as a best design among the alternatives that has yet to be constructed and tested. PMID:25019938
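
    The specific oscillator designs compared in the paper are not reproduced here. As a generic example of the kind of ODE model involved, the sketch below integrates a standard repressilator-style ring of three mutually repressing genes with assumed round-number parameters.

    ```python
    # Generic illustration of a genetic-oscillator ODE model (repressilator-style ring of
    # three mutually repressing genes); parameters are assumed values, and this is not one
    # of the specific designs analyzed in the paper.
    import numpy as np
    from scipy.integrate import solve_ivp

    alpha, beta, n = 216.0, 0.2, 2.0   # max synthesis, relative degradation, Hill coefficient

    def repressilator(t, y):
        m = y[:3]            # mRNA for genes 0, 1, 2
        p = y[3:]            # corresponding proteins
        dm = alpha / (1.0 + p[[2, 0, 1]] ** n) - m     # each gene repressed by the previous one
        dp = beta * (m - p)
        return np.concatenate([dm, dp])

    sol = solve_ivp(repressilator, (0.0, 400.0),
                    y0=[1.0, 0.0, 0.0, 2.0, 1.0, 0.0], max_step=0.5)
    # A wide min-to-max spread in one protein over the late part of the run indicates
    # sustained oscillation rather than settling to a steady state.
    protein0 = sol.y[3]
    late = protein0[sol.t > 200]
    print("protein 0 min/max after t=200:", round(float(late.min()), 2), round(float(late.max()), 2))
    ```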

  3. A Successful Component Architecture for Interoperable and Evolvable Ground Data Systems

    NASA Technical Reports Server (NTRS)

    Smith, Danford S.; Bristow, John O.; Wilmot, Jonathan

    2006-01-01

    The National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) has adopted an open architecture approach for satellite control centers and is now realizing benefits beyond those originally envisioned. The Goddard Mission Services Evolution Center (GMSEC) architecture utilizes standardized interfaces and a middleware software bus to allow functional components to be easily integrated. This paper presents the GMSEC architectural goals and concepts, the capabilities enabled and the benefits realized by adopting this framework approach. NASA experiences with applying the GMSEC architecture on multiple missions are discussed. The paper concludes with a summary of lessons learned, future directions for GMSEC and the possible applications beyond NASA GSFC.
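
    The GMSEC approach of decoupling functional components through a standardized message bus can be illustrated with a minimal subject-based publish/subscribe sketch. The class and subject names below are hypothetical and are not the GMSEC API; they only show how an archiver and a display can consume the same telemetry without knowing anything about the publisher.

        from collections import defaultdict

        class MessageBus:
            """Minimal subject-based publish/subscribe bus (illustrative only)."""
            def __init__(self):
                self._subscribers = defaultdict(list)

            def subscribe(self, subject, callback):
                self._subscribers[subject].append(callback)

            def publish(self, subject, message):
                for callback in self._subscribers[subject]:
                    callback(subject, message)

        bus = MessageBus()
        # An archiver and a display component both listen on the same subject,
        # without any direct dependency on the publishing component.
        bus.subscribe("MISSION.SAT1.TLM", lambda s, m: print("archive:", m))
        bus.subscribe("MISSION.SAT1.TLM", lambda s, m: print("display:", m))
        bus.publish("MISSION.SAT1.TLM", {"battery_v": 28.1})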

  4. Framework for teleoperated microassembly systems

    NASA Astrophysics Data System (ADS)

    Reinhart, Gunther; Anton, Oliver; Ehrenstrasser, Michael; Patron, Christian; Petzold, Bernd

    2002-02-01

    Manual assembly of minute parts is currently done using simple devices such as tweezers or magnifying glasses. The operator therefore requires a great deal of concentration for successful assembly. Teleoperated micro-assembly systems are a promising method for overcoming the scaling barrier. However, most of today's telepresence systems are based on proprietary and one-of-a-kind solutions. Frameworks which supply the basic functions of a telepresence system, e.g. to establish flexible communication links that depend on bandwidth requirements or to synchronize distributed components, are not currently available. Large amounts of time and money have to be invested in order to create task-specific teleoperated micro-assembly systems from scratch. For this reason, an object-oriented framework for telepresence systems that is based on CORBA as a common middleware was developed at the Institute for Machine Tools and Industrial Management (iwb). The framework is based on a distributed architectural concept and is realized in C++. External hardware components such as haptic, video or sensor devices are coupled to the system by means of defined software interfaces. In this case, the special requirements of teleoperation systems have to be considered, e.g. dynamic parameter settings for sensors during operation. Consequently, an architectural concept based on logical sensors has been developed to achieve maximum flexibility and to enable a task-oriented integration of hardware components.
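
    The logical-sensor idea, in which hardware devices are hidden behind a uniform software interface whose parameters can be adjusted at run time, might be sketched as follows. The interface and the simulated force sensor are hypothetical examples of the pattern, not part of the iwb framework (which is implemented in C++ over CORBA).

        from abc import ABC, abstractmethod

        class LogicalSensor(ABC):
            """Uniform interface to heterogeneous hardware sensors (illustrative)."""

            @abstractmethod
            def read(self):
                """Return the latest measurement in device-independent units."""

            @abstractmethod
            def set_parameter(self, name, value):
                """Adjust a sensor parameter (e.g. gain or exposure) during operation."""

        class SimulatedForceSensor(LogicalSensor):
            def __init__(self):
                self._gain = 1.0

            def read(self):
                return 0.02 * self._gain   # stand-in for a real device-driver call

            def set_parameter(self, name, value):
                if name == "gain":
                    self._gain = value
                else:
                    raise KeyError(name)

        sensor: LogicalSensor = SimulatedForceSensor()
        sensor.set_parameter("gain", 2.0)
        print(sensor.read())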

  5. Architecture, component, and microbiome of biofilm involved in the fouling of membrane bioreactors.

    PubMed

    Inaba, Tomohiro; Hori, Tomoyuki; Aizawa, Hidenobu; Ogata, Atsushi; Habe, Hiroshi

    2017-01-01

    Biofilm formation on the filtration membrane and the subsequent clogging of membrane pores (called biofouling) is one of the most persistent problems in membrane bioreactors for wastewater treatment and reclamation. Here, we investigated the structure and microbiome of fouling-related biofilms in the membrane bioreactor using non-destructive confocal reflection microscopy and high-throughput Illumina sequencing of 16S rRNA genes. Direct confocal reflection microscopy indicated that the thin biofilms were formed and maintained regardless of the increasing transmembrane pressure, which is a common indicator of membrane fouling, at low organic-loading rates. Their solid components were primarily extracellular polysaccharides and microbial cells. In contrast, high organic-loading rates resulted in a rapid increase in the transmembrane pressure and the development of the thick biofilms mainly composed of extracellular lipids. High-throughput sequencing revealed that the biofilm microbiomes, including major and minor microorganisms, substantially changed in response to the organic-loading rates and biofilm development. These results demonstrated for the first time that the architectures, chemical components, and microbiomes of the biofilms on fouled membranes were tightly associated with one another and differed considerably depending on the organic-loading conditions in the membrane bioreactor, emphasizing the significance of alternative indicators other than the transmembrane pressure for membrane biofouling.

  6. Space vehicle field unit and ground station system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judd, Stephen; Dallmann, Nicholas; Delapp, Jerry

    A field unit and ground station may use commercial off-the-shelf (COTS) components and share a common architecture, where differences in functionality are governed by software. The field units and ground stations may be easy to deploy, relatively inexpensive, and be relatively easy to operate. A novel file system may be used where datagrams of a file may be stored across multiple drives and/or devices. The datagrams may be received out of order and reassembled at the receiving device.
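
    The out-of-order datagram handling described above can be illustrated with a small sketch that orders received (sequence number, payload) pairs and reports when gaps remain. This is only an illustration of the idea, not the file system described in the record.

        def reassemble(datagrams):
            """Reassemble a file from (sequence_number, payload) datagrams that may
            arrive in any order. Returns None if any sequence number is missing."""
            by_seq = {seq: payload for seq, payload in datagrams}
            expected = range(len(by_seq))
            if set(by_seq) != set(expected):
                return None   # gaps remain; wait for retransmission
            return b"".join(by_seq[i] for i in expected)

        received = [(2, b"lo"), (0, b"he"), (1, b"l")]
        print(reassemble(received))   # b'hello'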

  7. Information Environments

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Naiman, Cynthia

    2003-01-01

    The objective of the GRC CNIS/IE work is to build a plug-and-play infrastructure that provides the Grand Challenge Applications with a suite of tools for coupling codes together, numerically zooming between codes of different fidelity, and deploying these simulations onto the Information Power Grid. The GRC CNIS/IE work will streamline and improve this process by providing tighter integration of the various tools through object-oriented design of component models and data objects and through the use of CORBA (Common Object Request Broker Architecture).

  8. Space vehicle field unit and ground station system

    DOEpatents

    Judd, Stephen; Dallmann, Nicholas; Delapp, Jerry; Proicou, Michael; Seitz, Daniel; Michel, John; Enemark, Donald

    2016-10-25

    A field unit and ground station may use commercial off-the-shelf (COTS) components and share a common architecture, where differences in functionality are governed by software. The field units and ground stations may be easy to deploy, relatively inexpensive, and be relatively easy to operate. A novel file system may be used where datagrams of a file may be stored across multiple drives and/or devices. The datagrams may be received out of order and reassembled at the receiving device.

  9. Web-Based Distributed Simulation of Aeronautical Propulsion System

    NASA Technical Reports Server (NTRS)

    Zheng, Desheng; Follen, Gregory J.; Pavlik, William R.; Kim, Chan M.; Liu, Xianyou; Blaser, Tammy M.; Lopez, Isaac

    2001-01-01

    An application was developed to allow users to run and view the Numerical Propulsion System Simulation (NPSS) engine simulations from web browsers. Simulations were performed on multiple Information Power Grid (IPG) test beds. The Common Object Request Broker Architecture (CORBA) was used for brokering data exchange among machines and IPG/Globus for job scheduling and remote process invocation. Web server scripting was performed by JavaServer Pages (JSP). This application has proven to be an effective and efficient way to couple heterogeneous distributed components.

  10. Genome-wide association study identified genetic variations and candidate genes for plant architecture component traits in Chinese upland cotton.

    PubMed

    Su, Junji; Li, Libei; Zhang, Chi; Wang, Caixiang; Gu, Lijiao; Wang, Hantao; Wei, Hengling; Liu, Qibao; Huang, Long; Yu, Shuxun

    2018-06-01

    Thirty significant associations between 22 SNPs and five plant architecture component traits in Chinese upland cotton were identified via GWAS. Four peak SNP loci located on chromosome D03 were simultaneously associated with more plant architecture component traits. A candidate gene, Gh_D03G0922, might be responsible for plant height in upland cotton. A compact plant architecture is increasingly required for mechanized harvesting processes in China. Therefore, cotton plant architecture is an important trait, and its components, such as plant height, fruit branch length and fruit branch angle, affect the suitability of a cultivar for mechanized harvesting. To determine the genetic basis of cotton plant architecture, a genome-wide association study (GWAS) was performed using a panel composed of 355 accessions and 93,250 single nucleotide polymorphisms (SNPs) identified using the specific-locus amplified fragment sequencing method. Thirty significant associations between 22 SNPs and five plant architecture component traits were identified via GWAS. Most importantly, four peak SNP loci located on chromosome D03 were simultaneously associated with more plant architecture component traits, and these SNPs were harbored in one linkage disequilibrium block. Furthermore, 21 candidate genes for plant architecture were predicted in a 0.95-Mb region including the four peak SNPs. One of these genes (Gh_D03G0922) was near the significant SNP D03_31584163 (8.40 kb), and its Arabidopsis homologs contain MADS-box domains that might be involved in plant growth and development. qRT-PCR showed that the expression of Gh_D03G0922 was upregulated in the apical buds and young leaves of the short and compact cotton varieties, and virus-induced gene silencing (VIGS) proved that the silenced plants exhibited increased PH. These results indicate that Gh_D03G0922 is likely the candidate gene for PH in cotton. The genetic variations and candidate genes identified in this study lay a foundation for cultivating moderately short and compact varieties in future Chinese cotton-breeding programs.

  11. Development Of Autonomous Systems

    NASA Astrophysics Data System (ADS)

    Kanade, Takeo

    1989-03-01

    In the last several years at the Robotics Institute of Carnegie Mellon University, we have been working on two projects for developing autonomous systems: the Navlab for the Autonomous Land Vehicle and the Ambler for the Mars Rover. These two systems are for different purposes: the Navlab is a four-wheeled vehicle (van) for road and open terrain navigation, and the Ambler is a six-legged locomotor for Mars exploration. The two projects, however, share many common aspects. Both are large-scale integrated systems for navigation. In addition to the development of individual components (e.g., construction and control of the vehicle, vision and perception, and planning), integration of those component technologies into a system by means of an appropriate architecture is a major issue.

  12. Open Architecture Standard for NASA's Software-Defined Space Telecommunications Radio Systems

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Johnson, Sandra K.; Kacpura, Thomas J.; Hall, Charles S.; Smith, Carl R.; Liebetreu, John

    2008-01-01

    NASA is developing an architecture standard for software-defined radios used in space- and ground-based platforms to enable commonality among radio developments to enhance capability and services while reducing mission and programmatic risk. Transceivers (or transponders) with functionality primarily defined in software (e.g., firmware) have the ability to change their functional behavior through software alone. This radio architecture standard offers value by employing common waveform software interfaces, method of instantiation, operation, and testing among different compliant hardware and software products. These common interfaces within the architecture abstract application software from the underlying hardware to enable technology insertion independently at either the software or hardware layer. This paper presents the initial Space Telecommunications Radio System (STRS) Architecture for NASA missions to provide the desired software abstraction and flexibility while minimizing the resources necessary to support the architecture.
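
    The abstraction of waveform application software from the underlying hardware can be sketched as a thin hardware-abstraction layer that waveform code is written against. The class and method names below are hypothetical illustrations of the pattern, not the actual STRS interfaces.

        class RadioHardware:
            """Stand-in for a platform-provided hardware abstraction layer."""
            def set_frequency(self, hz):
                print(f"tuning to {hz / 1e6:.1f} MHz")

            def transmit(self, samples):
                print(f"transmitting {len(samples)} samples")

        class Waveform:
            """Application code written only against the abstraction, so it could be
            reused on any compliant platform (illustrative, not the STRS API)."""
            def __init__(self, hal: RadioHardware):
                self.hal = hal

            def configure(self, carrier_hz):
                self.hal.set_frequency(carrier_hz)

            def send(self, payload):
                samples = list(payload)   # placeholder for real modulation
                self.hal.transmit(samples)

        wf = Waveform(RadioHardware())
        wf.configure(2.2e9)
        wf.send(b"telemetry frame")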

  13. Potential of hypocotyl diameter in family selection aiming at plant architecture improvement of common bean.

    PubMed

    Oliveira, A M C; Batista, R O; Carneiro, P C S; Carneiro, J E S; Cruz, C D

    2015-09-28

    Cultivars of common bean with more erect plant architecture and greater tolerance to degree of lodging are required by producers. Thus, to evaluate the potential of hypocotyl diameter (HD) in family selection for plant architecture improvement of common bean, the HDs of 32 F2 plants were measured in 3 distinct populations, and the characteristics related to plant architecture were analyzed in their progenies. Ninety-six F2:3 families and 4 controls were evaluated in a randomized block design, with 3 replications, analyzing plant architecture grade, HD, and grain yield during the winter 2010 and drought 2011 seasons. We found that the correlation between the HD of F2 plants and traits related to plant architecture of F2:3 progenies were of low magnitude compared to the estimates for correlations considering the parents, indicating a high environmental influence on HD in bean plants. There was a predominance of additive genetic effects on the determination of hypocotyl diameter, which showed higher precision and accuracy compared to plant architecture grade. Thus, this characteristic can be used to select progenies in plant architecture improvement of common beans; however, selection must be based on the means of at least 39 plants in the plot, according to the results of repeatability analysis.

  14. Integrating hospital information systems in healthcare institutions: a mediation architecture.

    PubMed

    El Azami, Ikram; Cherkaoui Malki, Mohammed Ouçamah; Tahon, Christian

    2012-10-01

    Many studies have examined the integration of information systems into healthcare institutions, leading to several standards in the healthcare domain (CORBAmed: Common Object Request Broker Architecture in Medicine; HL7: Health Level Seven International; DICOM: Digital Imaging and Communications in Medicine; and IHE: Integrating the Healthcare Enterprise). Due to the existence of a wide diversity of heterogeneous systems, three essential factors are necessary to fully integrate a system: data, functions and workflow. However, most of the previous studies have dealt with only one or two of these factors and this makes the system integration unsatisfactory. In this paper, we propose a flexible, scalable architecture for Hospital Information Systems (HIS). Our main purpose is to provide a practical solution to ensure HIS interoperability so that healthcare institutions can communicate without being obliged to change their local information systems and without altering the tasks of the healthcare professionals. Our architecture is a mediation architecture with 3 levels: 1) a database level, 2) a middleware level and 3) a user interface level. The mediation is based on two central components: the Mediator and the Adapter. Using the XML format allows us to establish a structured, secured exchange of healthcare data. The notion of medical ontology is introduced to solve semantic conflicts and to unify the language used for the exchange. Our mediation architecture provides an effective, promising model that promotes the integration of hospital information systems that are autonomous, heterogeneous, semantically interoperable and platform-independent.
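
    The Adapter/Mediator pattern described above can be sketched as follows: an adapter translates a local record into a common XML message, and the mediator routes that message to every registered destination. The element names and record fields are hypothetical and do not correspond to a real HL7 or DICOM mapping.

        import xml.etree.ElementTree as ET

        class LabSystemAdapter:
            """Translates a local lab record into the mediator's common XML format
            (element names are illustrative, not a real standard mapping)."""
            def to_common_xml(self, record):
                root = ET.Element("labResult")
                ET.SubElement(root, "patientId").text = str(record["pid"])
                ET.SubElement(root, "test").text = record["test"]
                ET.SubElement(root, "value").text = str(record["value"])
                return ET.tostring(root, encoding="unicode")

        class Mediator:
            """Routes common-format messages to every registered destination."""
            def __init__(self):
                self.destinations = []

            def register(self, callback):
                self.destinations.append(callback)

            def dispatch(self, xml_message):
                for deliver in self.destinations:
                    deliver(xml_message)

        mediator = Mediator()
        mediator.register(lambda msg: print("EHR received:", msg))
        adapter = LabSystemAdapter()
        mediator.dispatch(adapter.to_common_xml({"pid": 42, "test": "glucose", "value": 5.4}))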

  15. VRE4EIC: A Reference Architecture and Components for Research Access

    NASA Astrophysics Data System (ADS)

    Bailo, Daniele; Jeffery, Keith G.; Atakan, Kuvvet; Harrison, Matt

    2017-04-01

    VRE4EIC (www.vre4eic.eu) is an EC H2020 project with the objective of providing a reference architecture and components for a VRE (Virtual Research Environment). SGs (Science Gateways) in North America and VLs (Virtual Laboratories) in Australasia are similar - but significantly different - concepts. A VRE provides not only access to ICT services, data, software components and equipment but also provides a collaborative working environment for cooperation and supports the research lifecycle from idea to publication. Europe has a large number of RIs (Research Infrastructures); the major ones are coordinated and planned through the ESFRI (European Strategy Forum on Research Infrastructures) roadmap. Most RIs - such as EPOS - provide a user interface portal function, ranging from (1) a simple list of assets (such as services, datasets, software components, workflows, equipment, experts... although many provide only information about data) with URLs upon which the user can click to download; (2) to an end-user facility for constructing queries to find relevant assets and subsets of them more-or-less integrated as a downloaded combined dataset; (3) in a few cases - for constructing workflows to achieve the scientific objective. The portal has the scope of the individual RI. The aim of VRE4EIC is to provide a reference architecture, software components and a prototype implementation VRE which allows user access and all the portal functions (and more) not only to an individual RI - such as EPOS - but across RIs, thus encouraging multidisciplinary research. Two RIs, EPOS and ENVRIplus (itself spanning 21 RIs), are represented within the project as requirements stakeholders, validators of the architecture and evaluators of the prototype system developed. The characterisation of many more RIs - and their requirements - has been done to ensure wide applicability. The virtualisation across RIs is achieved by using a rich metadata catalog based on CERIF (Common European Research Information Format: an EU Recommendation to Member States and supported, developed and promoted by euroCRIS, www.eurocris.org). The VRE4EIC catalog system harvests from individual RI catalogs (with conversion, since they use many different metadata formats) to give the user of VRE4EIC a 'canonical view' over the RIs and their assets. The VRE4EIC user interface provides portal functions for each and all RIs but also a workflow construction facility. The project expects the RIs to use middleware developed in other projects to facilitate workflow deployment across the eIs (e-Infrastructures) such as GEANT, EUDAT, EGI, OpenAIRE, and will itself use the same mechanisms. After 15 months of the project we have validated the requirements from the RIs, defined the architecture and started work on the metadata mapping and conversion. The intention is to have the prototype at M24 for evaluation by the RI partners (and some external RIs), leading to a refined architecture and software stack for production use after M36.

  16. ERECTA signaling controls Arabidopsis inflorescence architecture through chromatin-mediated activation of PRE1 expression.

    PubMed

    Cai, Hanyang; Zhao, Lihua; Wang, Lulu; Zhang, Man; Su, Zhenxia; Cheng, Yan; Zhao, Heming; Qin, Yuan

    2017-06-01

    Flowering plants display a remarkable diversity in inflorescence architecture, and pedicel length is one of the key contributors to this diversity. In Arabidopsis thaliana, the receptor-like kinase ERECTA (ER) mediated signaling pathway plays important roles in regulating inflorescence architecture by promoting cell proliferation. However, the regulating mechanism remains elusive in the pedicel. Genetic interactions between ERECTA signaling and the chromatin remodeling complex SWR1 in the control of inflorescence architecture were studied. Comparative transcriptome analysis was applied to identify downstream components. Chromatin immunoprecipitation and nucleosome occupancy was further investigated. The results indicated that the chromatin remodeler SWR1 coordinates with ERECTA signaling in regulating inflorescence architecture by activating the expression of PRE1 family genes and promoting pedicel elongation. It was found that SWR1 is required for the incorporation of the H2A.Z histone variant into nucleosomes of the whole PRE1 gene family and the ERECTA controlled expression of PRE1 gene family through regulating nucleosome dynamics. We propose that utilization of a chromatin remodeling complex to regulate gene expression is a common theme in developmental control across kingdoms. These findings shed light on the mechanisms through which chromatin remodelers orchestrate complex transcriptional regulation of gene expression in coordination with a developmental cue. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  17. Client/server approach to image capturing

    NASA Astrophysics Data System (ADS)

    Tuijn, Chris; Stokes, Earle

    1998-01-01

    The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications and high-end CCD flatbed scanners and drum scanners with photomultiplier technology. Each device and market segment has its own specific needs which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we make abstraction of the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction of the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven applications). This paper is structured as follows. In the introduction, we further motivate the need for a scan server-based architecture. In the second section, we give a brief architectural overview of the scan server and the other components it is connected to. The third chapter presents the generic model for input devices as well as the image processing model; the fourth chapter describes the different shapes the scanning applications (or modules) can have. In the last section, we briefly summarize the presented material and point out trends for future development.
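
    The generic input-device model can be illustrated by expressing a scan job in absolute, device-independent parameters and mapping it onto one scanner's native settings. The field names, the hypothetical scanner limit, and the mapping below are illustrative assumptions, not the paper's actual job specification.

        from dataclasses import dataclass

        @dataclass
        class ScanJob:
            """Device-independent scan request expressed in absolute parameters."""
            width_mm: float
            height_mm: float
            resolution_dpi: int
            bits_per_channel: int

        def to_device_settings(job: ScanJob, max_dpi: int = 2400):
            """Map the generic job onto one (hypothetical) scanner's native settings."""
            dpi = min(job.resolution_dpi, max_dpi)
            return {
                "pixels_x": int(job.width_mm / 25.4 * dpi),
                "pixels_y": int(job.height_mm / 25.4 * dpi),
                "dpi": dpi,
                "depth": job.bits_per_channel,
            }

        print(to_device_settings(ScanJob(210.0, 297.0, 300, 16)))   # A4 page at 300 dpi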

  18. IAIMS Architecture

    PubMed Central

    Hripcsak, George

    1997-01-01

    Abstract An information system architecture defines the components of a system and the interfaces among the components. A good architecture is essential for creating an Integrated Advanced Information Management System (IAIMS) that works as an integrated whole yet is flexible enough to accommodate many users and roles, multiple applications, changing vendors, evolving user needs, and advancing technology. Modularity and layering promote flexibility by reducing the complexity of a system and by restricting the ways in which components may interact. Enterprise-wide mediation promotes integration by providing message routing, support for standards, dictionary-based code translation, a centralized conceptual data schema, business rule implementation, and consistent access to databases. Several IAIMS sites have adopted a client-server architecture, and some have adopted a three-tiered approach, separating user interface functions, application logic, and repositories. PMID:9067884

  19. System on Mobile Devices Middleware: Thinking beyond Basic Phones and PDAs

    NASA Astrophysics Data System (ADS)

    Prasad, Sushil K.

    Several classes of emerging applications, spanning domains such as medical informatics, homeland security, mobile commerce, and scientific applications, are collaborative, and a significant portion of these will harness the capabilities of both the stable and mobile infrastructures (the “mobile grid”). Currently, it is possible to develop a collaborative application running on a collection of heterogeneous, possibly mobile, devices, each potentially hosting data stores, using existing middleware technologies such as JXTA, BREW, Compact .NET and J2ME. However, they require too many ad-hoc techniques as well as cumbersome and time-consuming programming. Our System on Mobile Devices (SyD) middleware, on the other hand, has a modular architecture that makes such application development very systematic and streamlined. The architecture supports transactions over mobile data stores, with a range of remote group invocation options and embedded interdependencies among such data store objects. The architecture further provides a persistent uniform object view, group transaction with Quality of Service (QoS) specifications, and XML vocabulary for inter-device communication. I will present the basic SyD concepts, introduce the architecture and the design of the SyD middleware and its components. We will discuss the basic performance figures of SyD components and a few SyD applications on PDAs. SyD platform has led to developments in distributed web service coordination and workflow technologies, which we will briefly discuss. There is a vital need to develop methodologies and systems to empower common users, such as computational scientists, for rapid development of such applications. Our BondFlow system enables rapid configuration and execution of workflows over web services. The small footprint of the system enables them to reside on Java-enabled handheld devices.

  20. A Component Approach to Collaborative Scientific Software Development: Tools and Techniques Utilized by the Quantum Chemistry Science Application Partnership

    DOE PAGES

    Kenny, Joseph P.; Janssen, Curtis L.; Gordon, Mark S.; ...

    2008-01-01

    Cutting-edge scientific computing software is complex, increasingly involving the coupling of multiple packages to combine advanced algorithms or simulations at multiple physical scales. Component-based software engineering (CBSE) has been advanced as a technique for managing this complexity, and complex component applications have been created in the quantum chemistry domain, as well as several other simulation areas, using the component model advocated by the Common Component Architecture (CCA) Forum. While programming models do indeed enable sound software engineering practices, the selection of programming model is just one building block in a comprehensive approach to large-scale collaborative development which must also address interface and data standardization, and language and package interoperability. We provide an overview of the development approach utilized within the Quantum Chemistry Science Application Partnership, identifying design challenges, describing the techniques which we have adopted to address these challenges and highlighting the advantages which the CCA approach offers for collaborative development.
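
    The CCA component model connects "uses" and "provides" ports that are wired together by a framework. The sketch below shows the pattern with hypothetical quantum-chemistry component names; it is not the actual CCA or Babel API.

        class EnergyPort:
            """A 'provides' port: an interface one component exposes to others."""
            def energy(self, geometry):
                raise NotImplementedError

        class HartreeFockComponent(EnergyPort):
            def energy(self, geometry):
                return -1.0 * len(geometry)   # placeholder for a real calculation

        class OptimizerComponent:
            """Declares a 'uses' port; the framework wires in any compatible provider."""
            def __init__(self):
                self.energy_port = None

            def connect(self, port: EnergyPort):
                self.energy_port = port

            def run(self, geometry):
                return self.energy_port.energy(geometry)

        # A minimal stand-in "framework": instantiate components and connect ports.
        optimizer = OptimizerComponent()
        optimizer.connect(HartreeFockComponent())
        print(optimizer.run([("H", 0.0), ("H", 0.74)]))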

  1. A Conceptual Architecture for National Biosurveillance: Moving Beyond Situational Awareness to Enable Digital Detection of Emerging Threats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Velsko, Stephan; Bates, Thomas

    Despite numerous calls for improvement, the U.S. biosurveillance enterprise remains a patchwork of uncoordinated systems that fail to take advantage of the rapid progress in information processing, communication, and analytics made in the past decade. By synthesizing components from the extensive biosurveillance literature, we propose a conceptual framework for a national biosurveillance architecture and provide suggestions for implementation. The framework differs from the current federal biosurveillance development pathway in that it is not focused on systems useful for “situational awareness,” but is instead focused on the long-term goal of having true warning capabilities. Therefore, a guiding design objective is the ability to digitally detect emerging threats that span jurisdictional boundaries, because attempting to solve the most challenging biosurveillance problem first provides the strongest foundation to meet simpler surveillance objectives. Core components of the vision are: (1) a whole-of-government approach to support currently disparate federal surveillance efforts that have a common data need, including those for food safety, vaccine and medical product safety, and infectious disease surveillance; (2) an information architecture that enables secure, national access to electronic health records, yet does not require that data be sent to a centralized location for surveillance analysis; (3) an inference architecture that leverages advances in ‘big data’ analytics and learning inference engines—a significant departure from the statistical process control paradigm that underpins nearly all current syndromic surveillance systems; and, (4) an organizational architecture with a governance model aimed at establishing national biosurveillance as a critical part of the U.S. national infrastructure. Although it will take many years to implement, and a national campaign of education and debate to acquire public buy-in for such a comprehensive system, the potential benefits warrant increased consideration within the U.S. government.

  2. A Conceptual Architecture for National Biosurveillance: Moving Beyond Situational Awareness to Enable Digital Detection of Emerging Threats

    DOE PAGES

    Velsko, Stephan; Bates, Thomas

    2016-06-17

    Despite numerous calls for improvement, the U.S. biosurveillance enterprise remains a patchwork of uncoordinated systems that fail to take advantage of the rapid progress in information processing, communication, and analytics made in the past decade. By synthesizing components from the extensive biosurveillance literature, we propose a conceptual framework for a national biosurveillance architecture and provide suggestions for implementation. The framework differs from the current federal biosurveillance development pathway in that it is not focused on systems useful for “situational awareness,” but is instead focused on the long-term goal of having true warning capabilities. Therefore, a guiding design objective is the ability to digitally detect emerging threats that span jurisdictional boundaries, because attempting to solve the most challenging biosurveillance problem first provides the strongest foundation to meet simpler surveillance objectives. Core components of the vision are: (1) a whole-of-government approach to support currently disparate federal surveillance efforts that have a common data need, including those for food safety, vaccine and medical product safety, and infectious disease surveillance; (2) an information architecture that enables secure, national access to electronic health records, yet does not require that data be sent to a centralized location for surveillance analysis; (3) an inference architecture that leverages advances in ‘big data’ analytics and learning inference engines—a significant departure from the statistical process control paradigm that underpins nearly all current syndromic surveillance systems; and, (4) an organizational architecture with a governance model aimed at establishing national biosurveillance as a critical part of the U.S. national infrastructure. Although it will take many years to implement, and a national campaign of education and debate to acquire public buy-in for such a comprehensive system, the potential benefits warrant increased consideration within the U.S. government.

  3. Space Telecommunications Radio System (STRS) Architecture Standard. Release 1.02.1

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Kacpura, Thomas J.; Handler, Louis M.; Hall, C. Steve; Mortensen, Dale J.; Johnson, Sandra K.; Briones, Janette C.; Nappier, Jennifer M.; Downey, Joseph A.; Lux, James P.

    2012-01-01

    This document contains the NASA architecture standard for software defined radios used in space- and ground-based platforms to enable commonality among radio developments to enhance capability and services while reducing mission and programmatic risk. Transceivers (or transponders) with functionality primarily defined in software (e.g., firmware) have the ability to change their functional behavior through software alone. This radio architecture standard offers value by employing common waveform software interfaces, method of instantiation, operation, and testing among different compliant hardware and software products. These common interfaces within the architecture abstract application software from the underlying hardware to enable technology insertion independently at either the software or hardware layer.

  4. Algorithm Classes for Architecture Research (ACAR)

    DTIC Science & Technology

    2010-03-01

    The University of Southern California / Information Sciences Institute (USC/ISI) conducted exploratory studies to establish the need for and the value of innovative research on domain-specific architectures, applications, and tools based on the challenges posed by... (report front-matter fragments; project engineer: Bradley J. Paul, Chief, Advanced Sensor Components Branch, Aerospace Components Division).

  5. Carbon Capture Multidisciplinary Simulation Center Trilab Support Team (TST) Fall Meeting 2016 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draeger, Erik W.

    The theme of this year’s meeting was “Predictivity: Now and in the Future”. After welcoming remarks, Erik Draeger gave a talk on the NNSA Labs’ history of predictive simulation and the new challenges faced by upcoming architecture changes. He described an example where the volume of analysis data produced by a set of inertial confinement fusion (ICF) simulations on the Trinity machine was too large to store or transfer, and the steps needed to reduce it to a manageable size. He also described the software re-engineering plan for LLNL’s suite of multiphysics codes and physics packages with a new push toward common components, making collaboration with teams like the CCMSC who already have experience trying to architect complex multiphysics code infrastructure on next-generation architectures all the more important. Phil Smith then gave an overview outlining the goals of the project, namely to accelerate development of new technology in the form of high efficiency carbon capture pulverized coal power generation as well as further optimize existing state of the art designs. He then presented a summary of the Center’s top-down uncertainty quantification approach, in which ultimate target predictivity informs uncertainty targets for lower-level components, and gave data on how close all the different components currently are to their targets. Most components still need an approximately two-fold reduction in uncertainty to hit the ultimate predictivity target, but the current accuracy is already rather impressive.

  6. Study on the standard architecture for geoinformation common services

    NASA Astrophysics Data System (ADS)

    Zha, Z.; Zhang, L.; Wang, C.; Jiang, J.; Huang, W.

    2014-04-01

    The construction of platforms for geoinformation common services has been completed or is ongoing in most provinces and cities in China in recent years, and these platforms play an important role in economic and social activities. Geoinformation and geoinformation-based services are the key issues in the platform. The standards for geoinformation common services act as bridges among the users, systems and designers of the platform. The standard architecture for geoinformation common services is the guideline for designing and using the standard system, in which the standards are integrated with each other to promote the development, sharing and servicing of geoinformation resources. Establishing the standard architecture for geoinformation common services is one of the tasks of "Study on important standards for geoinformation common services and management of public facilities in city". The scope of the standard architecture is defined, covering data or information models, interoperability interfaces or services, and information management. Research has been done on the status of international standards for geoinformation common services in organizations such as ISO/TC 211 and OGC, and in countries or unions such as the USA, the EU and Japan. Principles such as availability, suitability and extensibility are set up to evaluate the standards. The development requirements and practical situation are then analyzed, and a framework of the standard architecture for geoinformation common services is proposed. Finally, a summary and prospects for the geoinformation standards are given.

  7. 1A.09: DISTINCT GENETIC ARCHITECTURE OF RENAL IMPAIRMENT COMPONENTS IN TYPE 2 DIABETES WITHIN CAUCASIAN POPULATIONS OF CELTO-GERMANIC AND SLAVIC ORIGINS.

    PubMed

    Harvey, F; Blanchet, F Marois; Phillips, M S; Haloui, M; Chalmers, J P; Woodward, M; Marre, M; Harrap, S B; Tremblay, J; Hamet, P

    2015-06-01

    The genetic architecture of type 2 diabetes (T2D) has been reported to be different between Asian and Caucasian populations (BBRC 2014;452:213-220). It is also well recognized that renal complications of T2D start earlier and are more severe in Asian subjects. Our objective was to determine whether such heterogeneity exists within the Caucasian population with respect to phenotypic and genomic determinants of renal complications in T2D. We analyzed two major aspects of renal impairment: increase of albuminuria as UACR and decline of estimated glomerular filtration rate as log(eGFR) in Caucasian patients during the 5 year period of the ADVANCE trial (NEJM 2014;371:1392-406). Celto-Germanic and Slavic origins of 3449 genotyped subjects were determined by principal component analysis with Eigenstrat software. The first principal component separated the 3449 individuals along a geographical gradient from East/West Europe: 1133 T2D patients were Slavic and 2316 were Celto-Germanic. Phenotypic analyses and Genome Wide Association Studies (GWAS) were performed in the two groups separately. The prevalence of hypertension was significantly higher (p = 1.7x10^-32) in ADVANCE Slavic subjects. The prevalence of albuminuria and UACR levels were significantly higher (p = 10^-4 and 9.5x10^-5, respectively) at baseline and its progression over the 5-year period was steeper (p = 6.2x10^-4) in patients of Slavic origin, contrasting with a more significant decline of eGFR in Celto-Germanic subjects (p = 4.9x10^-21). Other T2D outcomes (myocardial infarction and stroke) did not exhibit such a difference between East and West Europe. GWAS analyses of eGFR decline did not reveal any associated SNPs (threshold p-value of < 10^-3) in common between the two geo-ethnic groups and only 6% of associated genes were shared. Similarly, GWAS of UACR progression showed that only 0.1% of SNPs were common and 7% of genes were shared between the two groups. This was very different for stroke: 25% of SNPs and more than 50% of genes were common. Genetic analyses have to consider geo-ethnic characteristics even within Caucasians, demonstrated here for cardinal features of renal impairment in T2D. Our data suggest that distinct understanding of genomic architectures is important to ascertain clinical utility.

  8. Parallel architecture for rapid image generation and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nerheim, R.J.

    1987-01-01

    A multiprocessor architecture inspired by the Disney multiplane camera is proposed. For many applications, this approach produces a natural mapping of processors to objects in a scene. Such a mapping promotes parallelism and reduces the hidden-surface work with minimal interprocessor communication and low-overhead cost. Existing graphics architectures store the final picture as a monolithic entity. The architecture here stores each object's image separately. It assembles the final composite picture from component images only when the video display needs to be refreshed. This organization simplifies the work required to animate moving objects that occlude other objects. In addition, the architecture has multiple processors that generate the component images in parallel. This further shortens the time needed to create a composite picture. In addition to generating images for animation, the architecture has the ability to decompose images.

  9. A task-based support architecture for developing point-of-care clinical decision support systems for the emergency department.

    PubMed

    Wilk, S; Michalowski, W; O'Sullivan, D; Farion, K; Sayyad-Shirabad, J; Kuziemsky, C; Kukawka, B

    2013-01-01

    The purpose of this study was to create a task-based support architecture for developing clinical decision support systems (CDSSs) that assist physicians in making decisions at the point-of-care in the emergency department (ED). The backbone of the proposed architecture was established by a task-based emergency workflow model for a patient-physician encounter. The architecture was designed according to an agent-oriented paradigm. Specifically, we used the O-MaSE (Organization-based Multi-agent System Engineering) method that allows for iterative translation of functional requirements into architectural components (e.g., agents). The agent-oriented paradigm was extended with ontology-driven design to implement ontological models representing knowledge required by specific agents to operate. The task-based architecture allows for the creation of a CDSS that is aligned with the task-based emergency workflow model. It facilitates decoupling of executable components (agents) from embedded domain knowledge (ontological models), thus supporting their interoperability, sharing, and reuse. The generic architecture was implemented as a pilot system, MET3-AE--a CDSS to help with the management of pediatric asthma exacerbation in the ED. The system was evaluated in a hospital ED. The architecture allows for the creation of a CDSS that integrates support for all tasks from the task-based emergency workflow model, and interacts with hospital information systems. Proposed architecture also allows for reusing and sharing system components and knowledge across disease-specific CDSSs.

  10. Architectural Design of a LMS with LTSA-Conformance

    ERIC Educational Resources Information Center

    Sengupta, Souvik; Dasgupta, Ranjan

    2017-01-01

    This paper illustrates an approach for architectural design of a Learning Management System (LMS), which is verifiable against the Learning Technology System Architecture (LTSA) conformance rules. We introduce a new method for software architectural design that extends the Unified Modeling Language (UML) component diagram with the formal…

  11. Joint Polar Satellite System (JPSS) Common Ground System (CGS) Technical Performance Measures of the Block 2 Architecture

    NASA Astrophysics Data System (ADS)

    Grant, K. D.; Panas, M.

    2016-12-01

    NOAA and NASA are jointly acquiring the next-generation civilian weather satellite system: the Joint Polar Satellite System (JPSS). JPSS replaced the afternoon orbit component and ground processing of NOAA's legacy POES system. JPSS satellites carry sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS). Developed and maintained by Raytheon Intelligence, Information and Services (IIS), the CGS is a globally distributed, multi-mission system serving NOAA, NASA and their national and international partners. The CGS has demonstrated its scalability and flexibility to incorporate multiple missions efficiently and with minimal cost, schedule and risk, while strengthening global partnerships in weather and environmental monitoring. The CGS architecture has been upgraded to Block 2.0 to satisfy several key objectives, including: "operationalizing" the first satellite, Suomi NPP, which originally was a risk reduction mission; leveraging lessons learned in multi-mission support; taking advantage of newer, more reliable and efficient technologies; and satisfying constraints due to the continually evolving budgetary environment. To ensure the CGS meets these needs, we have developed 48 Technical Performance Measures (TPMs) across 9 categories: Data Availability, Data Latency, Operational Availability, Margin, Scalability, Situational Awareness, Transition (between environments and sites), WAN Efficiency, and Data Recovery Processing. This paper will provide an overview of the CGS Block 2.0 architecture, with particular focus on the 9 TPM categories listed above. We will describe how we ensure the deployed architecture meets these TPMs to satisfy our multi-mission objectives with the deployment of Block 2.0.

  12. The Jupyter/IPython architecture: a unified view of computational research, from interactive exploration to communication and publication.

    NASA Astrophysics Data System (ADS)

    Ragan-Kelley, M.; Perez, F.; Granger, B.; Kluyver, T.; Ivanov, P.; Frederic, J.; Bussonnier, M.

    2014-12-01

    IPython has provided terminal-based tools for interactive computing in Python since 2001. The notebook document format and multi-process architecture introduced in 2011 have expanded the applicable scope of IPython into teaching, presenting, and sharing computational work, in addition to interactive exploration. The new architecture also allows users to work in any language, with implementations in Python, R, Julia, Haskell, and several other languages. The language agnostic parts of IPython have been renamed to Jupyter, to better capture the notion that a cross-language design can encapsulate commonalities present in computational research regardless of the programming language being used. This architecture offers components like the web-based Notebook interface, that supports rich documents that combine code and computational results with text narratives, mathematics, images, video and any media that a modern browser can display. This interface can be used not only in research, but also for publication and education, as notebooks can be converted to a variety of output formats, including HTML and PDF. Recent developments in the Jupyter project include a multi-user environment for hosting notebooks for a class or research group, a live collaboration notebook via Google Docs, and better support for languages other than Python.
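
    For example, a notebook can be converted to a static HTML page with the nbconvert package (assuming nbconvert is installed; the notebook filename here is hypothetical). The same conversion is available from the command line as 'jupyter nbconvert --to html analysis.ipynb'.

        # Convert an existing notebook file to HTML using the nbconvert Python API.
        from nbconvert import HTMLExporter

        exporter = HTMLExporter()
        body, resources = exporter.from_filename("analysis.ipynb")
        with open("analysis.html", "w", encoding="utf-8") as f:
            f.write(body)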

  13. Analysis of three-dimensionally proliferated sensor architectures for flexible SSA

    NASA Astrophysics Data System (ADS)

    Cunio, Phillip M.; Flewelling, Brien

    2018-05-01

    The evolution of space into a congested, contested, and competitive regime drives a commensurate need for awareness of events there. As the number of systems on orbit grows, so will the need for sensing and tracking these systems. One avenue for advanced sensing capability is a widespread network of small but capable Space Situational Awareness (SSA) sensors, proliferated widely in the three-dimensional volume extending from the Earth's surface to the Geosynchronous Earth Orbit (GEO) belt, incorporating multiple different varieties and types of sensors. Due to the freedom of movement afforded by solid surfaces and atmosphere, some of these sensors may have substantial mobility. Accordingly, designing a network for maximum SSA coverage at reasonable cost may entail heterogeneous architectures with common logistics (including modular sensor packages or mobility platforms, which may be flexibly re-assigned). Smaller mobile sensors leveraging Commercial-Off-The-Shelf (COTS) components and software are appealing for their ability to simplify logistics versus large, monolithic, uniquely-exquisite sensor systems. This paper examines concepts for such sensor systems, and analyzes the costs associated with their use, while assessing the benefits (including reduced gap time, weather resilience, and multiple-sensor coverage) that such an architecture enables. Recommendations for preferred modes and mixes of fielding sensors in a heterogeneous architecture are made, and directions for future related research are suggested.

  14. Little by Little Does the Trick: Design and Construction of a Discrete Event Agent-Based Simulation Framework

    DTIC Science & Technology

    2007-12-01

    ... and a Behavioral model. Finally, we build a small agent-based model using the component architecture to demonstrate the library’s functionality. ... prototypes an architectural design which is generalizable, reusable, and extensible. We have created an initial set of model elements that demonstrate...

  15. Generation of a Multicomponent Library of Disulfide Donor-Acceptor Architectures Using Dynamic Combinatorial Chemistry

    PubMed Central

    Drożdż, Wojciech; Kołodziejski, Michał; Markiewicz, Grzegorz; Jenczak, Anna; Stefankiewicz, Artur R.

    2015-01-01

    We describe here the generation of new donor-acceptor disulfide architectures obtained in aqueous solution at physiological pH. The application of a dynamic combinatorial chemistry approach allowed us to generate a large number of new disulfide macrocyclic architectures together with a new type of [2]catenanes consisting of four distinct components. Up to fifteen types of structurally-distinct dynamic architectures have been generated through one-pot disulfide exchange reactions between four thiol-functionalized aqueous components. The distribution of disulfide products formed was found to be strongly dependent on the structural features of the thiol components employed. This work not only constitutes a success in the synthesis of topologically- and morphologically-complex targets, but it may also open new horizons for the use of this methodology in the construction of molecular machines. PMID:26193265

  16. Generation of a Multicomponent Library of Disulfide Donor-Acceptor Architectures Using Dynamic Combinatorial Chemistry.

    PubMed

    Drożdż, Wojciech; Kołodziejski, Michał; Markiewicz, Grzegorz; Jenczak, Anna; Stefankiewicz, Artur R

    2015-07-17

    We describe here the generation of new donor-acceptor disulfide architectures obtained in aqueous solution at physiological pH. The application of a dynamic combinatorial chemistry approach allowed us to generate a large number of new disulfide macrocyclic architectures together with a new type of [2]catenanes consisting of four distinct components. Up to fifteen types of structurally-distinct dynamic architectures have been generated through one-pot disulfide exchange reactions between four thiol-functionalized aqueous components. The distribution of disulfide products formed was found to be strongly dependent on the structural features of the thiol components employed. This work not only constitutes a success in the synthesis of topologically- and morphologically-complex targets, but it may also open new horizons for the use of this methodology in the construction of molecular machines.

  17. Domain specific software architectures: Command and control

    NASA Technical Reports Server (NTRS)

    Braun, Christine; Hatch, William; Ruegsegger, Theodore; Balzer, Bob; Feather, Martin; Goldman, Neil; Wile, Dave

    1992-01-01

    GTE is the Command and Control contractor for the Domain Specific Software Architectures program. The objective of this program is to develop and demonstrate an architecture-driven, component-based capability for the automated generation of command and control (C2) applications. Such a capability will significantly reduce the cost of C2 applications development and will lead to improved system quality and reliability through the use of proven architectures and components. A major focus of GTE's approach is the automated generation of application components in particular subdomains. Our initial work in this area has concentrated in the message handling subdomain; we have defined and prototyped an approach that can automate one of the most software-intensive parts of C2 systems development. This paper provides an overview of the GTE team's DSSA approach and then presents our work on automated support for message processing.

  18. DAsHER CD: Developing a Data-Oriented Human-Centric Enterprise Architecture for EarthCube

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Yu, M.; Sun, M.; Qin, H.; Robinson, E.

    2015-12-01

    One of the biggest challenges facing Earth scientists is resource discovery, access, and sharing in a desired fashion. EarthCube aims to enable geoscientists to address this challenge by fostering community-governed efforts that develop a common cyberinfrastructure for collecting, accessing, analyzing, sharing and visualizing all forms of data and related resources, through the use of advanced technological and computational capabilities. Here we design an Enterprise Architecture (EA) for EarthCube to facilitate knowledge management, communication and human collaboration in pursuit of unprecedented data sharing across the geosciences. The design results will provide EarthCube with a reference framework for developing geoscience cyberinfrastructure in collaboration with different stakeholders, and for identifying topics of high interest to the community. The development of this EarthCube EA framework leverages popular frameworks such as Zachman, Gartner, DoDAF, and FEAF. The science driver of this design is the needs of the EarthCube community, including user requirements analyzed from EarthCube End User Workshop reports and EarthCube working group roadmaps, and feedback and comments from scientists obtained through workshops. The final product of this Enterprise Architecture is a four-volume reference document: 1) Volume one is this document and comprises an executive summary of the EarthCube architecture, serving as an overview in the initial phases of architecture development; 2) Volume two is the major body of the design product and outlines all the architectural design components or viewpoints; 3) Volume three provides a taxonomy of the EarthCube enterprise augmented with semantic relations; 4) Volume four describes an example of utilizing this architecture for a geoscience project.

  19. Architectural and Functional Design of an Environmental Information Network.

    DTIC Science & Technology

    1984-04-30

    This study was accomplished under contract F08635-83-C-013..., Task 83-2, for Headquarters Air Force Engineering and Services Center, Engineering and Services... (table-of-contents fragments; recoverable entries include "General Architecture of Distributed Data Management System," "Schema Architecture," and "MULTIBASE Component Architecture").

  20. IVHS Architecture Summary

    DOT National Transportation Integrated Search

    1991-07-01

    A system architecture is the master building plan. It can be thought of as the framework that conceptually describes how components interact and work together to achieve total system goals and objectives. Ideally, a system architecture provides for a...

  1. Hybrid Power Management-Based Vehicle Architecture

    NASA Technical Reports Server (NTRS)

    Eichenberg, Dennis J.

    2011-01-01

    Hybrid Power Management (HPM) is the integration of diverse, state-of-the-art power devices in an optimal configuration for space and terrestrial applications (see figure). The appropriate application and control of the various power devices significantly improves overall system performance and efficiency. The basic vehicle architecture consists of a primary power source, and possibly other power sources, that provides all power to a common energy storage system that is used to power the drive motors and vehicle accessory systems. This architecture also provides power as an emergency power system. Each component is independent, permitting it to be optimized for its intended purpose. The key element of HPM is the energy storage system. All generated power is sent to the energy storage system, and all loads derive their power from that system. This can significantly reduce the power requirement of the primary power source, while increasing the vehicle reliability. Ultracapacitors are ideal for an HPM-based energy storage system due to their exceptionally long cycle life, high reliability, high efficiency, high power density, and excellent low-temperature performance. Multiple power sources and multiple loads are easily incorporated into an HPM-based vehicle. A gas turbine is a good primary power source because of its high efficiency, high power density, long life, high reliability, and ability to operate on a wide range of fuels. An HPM controller maintains optimal control over each vehicle component. This flexible operating system can be applied to all vehicles to considerably improve vehicle efficiency, reliability, safety, security, and performance. The HPM-based vehicle architecture has many advantages over conventional vehicle architectures. Ultracapacitors have a much longer cycle life than batteries, which greatly improves system reliability, reduces life-of-system costs, and reduces environmental impact as ultracapacitors will probably never need to be replaced and disposed of. The environmentally safe ultracapacitor components reduce disposal concerns, and their recyclable nature reduces the environmental impact. High ultracapacitor power density provides high power during surges, and the ability to absorb high power during recharging. Ultracapacitors are extremely efficient in capturing recharging energy, are rugged, reliable, maintenance-free, have excellent low-temperature characteristics, provide consistent performance over time, and promote safety as they can be left indefinitely in a safe, discharged state whereas batteries cannot.

  2. Performance Analysis of Distributed Object-Oriented Applications

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    The purpose of this research was to evaluate the efficiency of a distributed simulation architecture which creates individual modules which are made self-scheduling through the use of a message-based communication system used for requesting input data from another module which is the source of that data. To make the architecture as general as possible, the message-based communication architecture was implemented using standard remote object architectures (Common Object Request Broker Architecture (CORBA) and/or Distributed Component Object Model (DCOM)). A series of experiments was run in which different systems were distributed in a variety of ways across multiple computers and the performance evaluated. The experiments were duplicated in each case so that the overhead due to message communication and data transmission could be separated from the time required to actually perform the computational update of a module each iteration. The software used to distribute the modules across multiple computers was developed in the first year of the current grant and was modified considerably to add a message-based communication scheme supported by the DCOM distributed object architecture. The resulting performance was analyzed using a model created during the first year of this grant which predicts the overhead due to CORBA and DCOM remote procedure calls and includes the effects of data passed to and from the remote objects. A report covering the distributed simulation software and the results of the performance experiments has been submitted separately. The above report also discusses possible future work to apply the methodology to dynamically distribute the simulation modules so as to minimize overall computation time.
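
    The pattern described above, in which each module requests its inputs from the modules that produce them and then updates itself, can be illustrated with a small sketch. The report used CORBA/DCOM remote objects; here plain Python method calls stand in for the remote invocations, and all module names are hypothetical.

```python
# Sketch of the self-scheduling module pattern: each module pulls the inputs
# it needs from the modules that produce them, then updates its own output.
# Plain Python method calls stand in for CORBA/DCOM remote invocations.

class Module:
    def __init__(self, name, update_fn, sources=None):
        self.name = name
        self.update_fn = update_fn       # computes new output from inputs
        self.sources = sources or []     # modules this one requests data from
        self.output = 0.0

    def request_output(self):
        """Remote-callable accessor; over CORBA/DCOM this would be a proxy call."""
        return self.output

    def step(self):
        inputs = [src.request_output() for src in self.sources]
        self.output = self.update_fn(inputs)

def run(modules, iterations):
    for _ in range(iterations):
        for m in modules:
            m.step()

if __name__ == "__main__":
    source = Module("sensor", lambda ins: 1.0)               # constant producer
    integrator = Module("integrator",
                        lambda ins: integrator.output + sum(ins),
                        sources=[source])
    run([source, integrator], iterations=3)
    print(integrator.output)   # 3.0 after three iterations
```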

  3. A real-time standard parts inspection based on deep learning

    NASA Astrophysics Data System (ADS)

    Xu, Kuan; Li, XuDong; Jiang, Hongzhi; Zhao, Huijie

    2017-10-01

    Standard parts are necessary components in mechanical structures such as bogies and connectors; these structures may shatter or loosen if standard parts are lost, so real-time standard parts inspection systems are essential to guarantee their safety. Researchers favor inspection systems based on deep learning because they work well on images with complex backgrounds, which are common in standard parts inspection. A typical inspection detection system contains two basic components: a feature extractor and an object classifier. For the object classifier, the Region Proposal Network (RPN) is one of the most essential architectures in most state-of-the-art object detection systems. However, in the basic RPN architecture the Region of Interest (ROI) proposals have fixed sizes (9 anchors for each pixel); they are effective, but they waste considerable computing resources and time. In standard parts detection, the parts have known sizes, so anchor sizes can be chosen from the ground truths through machine learning. Our experiments prove that 2 anchors achieve almost the same accuracy and recall rate. Our standard parts detection system reaches 15 fps on an NVIDIA GTX1080 GPU while achieving a detection accuracy of 90.01% mAP.
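
    The abstract states that anchor sizes were chosen from the ground-truth boxes through machine learning. One common way to do this is to cluster the ground-truth box dimensions, for example with k-means; the sketch below shows that approach under the assumption of two anchors, without claiming it is the authors' exact procedure.

```python
# Hedged sketch: picking a small number of RPN anchor sizes by clustering
# ground-truth box dimensions (k-means over width/height). This is one common
# data-driven way to derive anchors; the paper's exact procedure may differ.
import numpy as np

def choose_anchors(gt_boxes, k=2, iters=100, seed=0):
    """gt_boxes: (N, 2) array of ground-truth (width, height) in pixels."""
    rng = np.random.default_rng(seed)
    centers = gt_boxes[rng.choice(len(gt_boxes), size=k, replace=False)]
    for _ in range(iters):
        # Assign each box to the nearest anchor center (Euclidean in w/h space).
        d = np.linalg.norm(gt_boxes[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([gt_boxes[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers

if __name__ == "__main__":
    # Synthetic boxes around two typical fastener sizes (illustrative only).
    boxes = np.vstack([np.random.normal([32, 32], 3, size=(50, 2)),
                       np.random.normal([64, 48], 4, size=(50, 2))])
    print(choose_anchors(boxes, k=2))
```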

  4. Avionics System Architecture Tool

    NASA Technical Reports Server (NTRS)

    Chau, Savio; Hall, Ronald; Traylor, marcus; Whitfield, Adrian

    2005-01-01

    Avionics System Architecture Tool (ASAT) is a computer program intended for use during the avionics-system-architecture-design phase of the process of designing a spacecraft for a specific mission. ASAT enables simulation of the dynamics of the command-and-data-handling functions of the spacecraft avionics in the scenarios in which the spacecraft is expected to operate. ASAT is built upon I-Logix Statemate MAGNUM, providing a complement of dynamic system modeling tools, including a graphical user interface (GUI), model-checking capabilities, and a simulation engine. ASAT augments this with a library of predefined avionics components and additional software to support building and analyzing avionics hardware architectures using these components.

  5. Extensible Hardware Architecture for Mobile Robots

    NASA Technical Reports Server (NTRS)

    Park, Eric; Kobayashi, Linda; Lee, Susan Y.

    2005-01-01

    The Intelligent Robotics Group at NASA Ames Research Center has developed a new mobile robot hardware architecture designed for extensibility and reconfigurability. Currently implemented on the K9 rover, and soon to be integrated onto the K10 series of human-robot collaboration research robots, this architecture allows for rapid changes in instrumentation configuration and provides a high degree of modularity through a synergistic mix of off-the-shelf and custom-designed components, allowing easy transplantation into a wide variety of mobile robot platforms. A component-level overview of this architecture is presented along with a description of the changes required for implementation on K10, followed by plans for future work.

  6. Executable Architecture Research at Old Dominion University

    NASA Technical Reports Server (NTRS)

    Tolk, Andreas; Shuman, Edwin A.; Garcia, Johnny J.

    2011-01-01

    Executable Architectures allow the evaluation of system architectures not only regarding their static, but also their dynamic behavior. However, the systems engineering community does not agree on a common formal specification of executable architectures. Closing this gap and identifying the necessary elements of an executable architecture, a modeling language, and a modeling formalism is the topic of ongoing PhD research. In addition, systems are generally defined and applied in an operational context to provide capabilities and enable missions. To maximize the benefits of executable architectures, a second PhD effort introduces the idea of creating an executable context in addition to the executable architecture. The results move the validation of architectures from the current information domain into the knowledge domain and improve the reliability of such validation efforts. The paper presents research and results of both doctoral research efforts and puts them into a common context of state-of-the-art systems engineering methods supporting more agility.

  7. Cummins MD & HD Accessory Hybridization CRADA -Annual Report FY15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deter, Dean D.

    2015-10-01

    There are many areas of MD and HD vehicles that can be improved by new technologies and optimized control strategies. Component optimization and idle reduction need to be addressed; this is best done by a two-part approach that includes selecting the best component technology and/or architecture, and optimized controls that are vehicle focused. While this is a common focus in the light-duty industry, it has been gaining momentum in the MD and HD market as the market gets more competitive and the regulations become more stringent. When looking into systems optimization and idle reduction technologies, affected vehicle systems must first be considered and, if possible, included in the new architecture to get the most benefit out of these new capabilities. Typically, when looking into idle reduction or component optimization for MD/HD, the vehicle's accessories become a prime candidate for electrification or hybridization. While this has already been studied on light-duty vehicles (especially on hybrids and electric vehicles), it has not made any headway or market penetration in most MD and HD applications. If hybrid and electric MD and HD vehicles begin to break into the market, this would be a necessary step toward making those vehicles successful by allowing for independent, optimized operation separate from the engine.

  8. A High Performance COTS Based Computer Architecture

    NASA Astrophysics Data System (ADS)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so important that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the COTS components' behavior. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in a first section we will start by recapitulating the interests and constraints of using COTS components for space applications; then we will briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we will describe the prototyping activities executed during the HiP CBC project.

  9. Rapidly Re-Configurable Flight Simulator Tools for Crew Vehicle Integration Research and Design

    NASA Technical Reports Server (NTRS)

    Schutte, Paul C.; Trujillo, Anna; Pritchett, Amy R.

    2000-01-01

    While simulation is a valuable research and design tool, the time and difficulty required to create new simulations (or re-use existing simulations) often limits their application. This report describes the design of the software architecture for the Reconfigurable Flight Simulator (RFS), which provides a robust simulation framework that allows the simulator to fulfill multiple research and development goals. The core of the architecture provides the interface standards for simulation components, registers and initializes components, and handles the communication between simulation components. The simulation components are each a pre-compiled library 'plug-in' module. This modularity allows independent development and sharing of individual simulation components. Additional interfaces can be provided through the use of Object Data/Method Extensions (OD/ME). RFS provides a programmable run-time environment for real-time access and manipulation, and has networking capabilities using the High Level Architecture (HLA).
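
    The core functions described above (registering components, initializing them, and routing communication between them) can be sketched as a small publish/subscribe registry. The sketch below is a hedged illustration in Python; the class names and topics are hypothetical and do not reflect the actual RFS interfaces.

```python
# Minimal sketch of the plug-in pattern the RFS core is described as providing:
# a registry that initializes components and routes messages between them.
# Component names and the interface are illustrative, not the actual RFS API.

class SimComponent:
    def initialize(self, core): ...
    def receive(self, topic, data): ...

class SimCore:
    def __init__(self):
        self._components = []
        self._subscriptions = {}            # topic -> list of components

    def register(self, component):
        self._components.append(component)
        component.initialize(self)

    def subscribe(self, topic, component):
        self._subscriptions.setdefault(topic, []).append(component)

    def publish(self, topic, data):
        for c in self._subscriptions.get(topic, []):
            c.receive(topic, data)

class AltitudeDisplay(SimComponent):
    def initialize(self, core):
        core.subscribe("aircraft/state", self)
    def receive(self, topic, data):
        print(f"altitude: {data['alt_ft']} ft")

if __name__ == "__main__":
    core = SimCore()
    core.register(AltitudeDisplay())
    core.publish("aircraft/state", {"alt_ft": 35_000})
```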

  10. Rapidly Re-Configurable Flight Simulator Tools for Crew Vehicle Integration Research and Design

    NASA Technical Reports Server (NTRS)

    Pritchett, Amy R.

    2002-01-01

    While simulation is a valuable research and design tool, the time and difficulty required to create new simulations (or re-use existing simulations) often limits their application. This report describes the design of the software architecture for the Reconfigurable Flight Simulator (RFS), which provides a robust simulation framework that allows the simulator to fulfill multiple research and development goals. The core of the architecture provides the interface standards for simulation components, registers and initializes components, and handles the communication between simulation components. The simulation components are each a pre-compiled library 'plugin' module. This modularity allows independent development and sharing of individual simulation components. Additional interfaces can be provided through the use of Object Data/Method Extensions (OD/ME). RFS provides a programmable run-time environment for real-time access and manipulation, and has networking capabilities using the High Level Architecture (HLA).

  11. Development of a Conceptual Structure for Architectural Solar Energy Systems.

    ERIC Educational Resources Information Center

    Ringel, Robert F.

    Solar subsystems and components were identified and a conceptual structure was developed for architectural solar energy heating and cooling systems. Recent literature related to solar energy systems was reviewed and analyzed. Solar heating and cooling system, subsystem, and component data were compared for agreement and completeness. Significant…

  12. Space Station data management system architecture

    NASA Technical Reports Server (NTRS)

    Mallary, William E.; Whitelaw, Virginia A.

    1987-01-01

    Within the Space Station program, the Data Management System (DMS) functions in a dual role. First, it provides the hardware resources and software services which support the data processing, data communications, and data storage functions of the onboard subsystems and payloads. Second, it functions as an integrating entity which provides a common operating environment and human-machine interface for the operation and control of the orbiting Space Station systems and payloads by both the crew and the ground operators. This paper discusses the evolution and derivation of the requirements and issues which have had significant effect on the design of the Space Station DMS, describes the DMS components and services which support system and payload operations, and presents the current architectural view of the system as it exists in October 1986; one-and-a-half years into the Space Station Phase B Definition and Preliminary Design Study.

  13. A statistical approach to root system classification

    PubMed Central

    Bodner, Gernot; Leitner, Daniel; Nakhforoosh, Alireza; Sobotik, Monika; Moder, Karl; Kaul, Hans-Peter

    2013-01-01

    Plant root systems have a key role in ecology and agronomy. In spite of the fast increase in root studies, there is still no classification that allows distinguishing among distinctive characteristics within the diversity of rooting strategies. Our hypothesis is that a multivariate approach for “plant functional type” identification in ecology can be applied to the classification of root systems. The classification method presented is based on a data-defined statistical procedure without a priori decision on the classifiers. The study demonstrates that principal-component-based rooting types provide efficient and meaningful multi-trait classifiers. The classification method is exemplified with simulated root architectures and morphological field data. Simulated root architectures showed that morphological attributes with spatial distribution parameters capture most distinctive features within root system diversity. While developmental type (tap vs. shoot-borne systems) is a strong but coarse classifier, topological traits provide the most detailed differentiation among distinctive groups. The adequacy of commonly available morphologic traits for classification is supported by field data. Rooting types emerging from the measured data were mainly distinguished into diameter/weight-dominated and density-dominated types. Similarity of root systems within distinctive groups was the joint result of phylogenetic relation and environmental as well as human selection pressure. We concluded that the data-defined classification is appropriate for integration of knowledge obtained with different root measurement methods and at various scales. Currently, root morphology is the most promising basis for classification due to widely used common measurement protocols. To capture the details of root diversity, efforts in architectural measurement techniques are essential. PMID:23914200
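
    A minimal sketch of the general approach, assuming standardized trait data reduced by principal components and then grouped into rooting types by clustering, is given below. The trait names and cluster count are illustrative; the authors' actual statistical procedure is more elaborate.

```python
# Hedged sketch of the general idea: standardize root traits, reduce them with
# principal components, then group root systems into "rooting types" by
# clustering in PC space. Trait values here are synthetic.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Synthetic trait matrix: rows = root systems, columns = traits such as
# mean diameter, total root weight, rooting depth, root length density.
traits = rng.normal(size=(60, 4))

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(traits))
rooting_type = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print(rooting_type[:10])
```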

  14. A statistical approach to root system classification.

    PubMed

    Bodner, Gernot; Leitner, Daniel; Nakhforoosh, Alireza; Sobotik, Monika; Moder, Karl; Kaul, Hans-Peter

    2013-01-01

    Plant root systems have a key role in ecology and agronomy. In spite of the fast increase in root studies, there is still no classification that allows distinguishing among distinctive characteristics within the diversity of rooting strategies. Our hypothesis is that a multivariate approach for "plant functional type" identification in ecology can be applied to the classification of root systems. The classification method presented is based on a data-defined statistical procedure without a priori decision on the classifiers. The study demonstrates that principal-component-based rooting types provide efficient and meaningful multi-trait classifiers. The classification method is exemplified with simulated root architectures and morphological field data. Simulated root architectures showed that morphological attributes with spatial distribution parameters capture most distinctive features within root system diversity. While developmental type (tap vs. shoot-borne systems) is a strong but coarse classifier, topological traits provide the most detailed differentiation among distinctive groups. The adequacy of commonly available morphologic traits for classification is supported by field data. Rooting types emerging from the measured data were mainly distinguished into diameter/weight-dominated and density-dominated types. Similarity of root systems within distinctive groups was the joint result of phylogenetic relation and environmental as well as human selection pressure. We concluded that the data-defined classification is appropriate for integration of knowledge obtained with different root measurement methods and at various scales. Currently, root morphology is the most promising basis for classification due to widely used common measurement protocols. To capture the details of root diversity, efforts in architectural measurement techniques are essential.

  15. Suited Occupant Injury Potential During Dynamic Spacecraft Flight Phases

    NASA Technical Reports Server (NTRS)

    Dub, Mark O.; McFarland, Shane M.

    2010-01-01

    In support of the Constellation Space Suit Element [CSSE], a new space-suit architecture will be created for support of Launch, Entry, Abort, Microgravity Extra- Vehicular Activity [EVA], and post-landing crew operations, safety and, under emergency conditions, survival. The space suit is unique in comparison to previous launch, entry, and abort [LEA] suit architectures in that it utilizes rigid mobility elements in the scye (i.e., shoulder) and the upper arm regions. The suit architecture also utilizes rigid thigh disconnect elements to create a quick disconnect approximately located above the knee. This feature allows commonality of the lower portion of the suit (from the thigh disconnect down), making the lower legs common across two suit configurations. This suit must interface with the Orion vehicle seat subsystem, which includes seat components, lateral supports, and restraints. Due to the unique configuration of spacesuit mobility elements, combined with the need to provide occupant protection during dynamic vehicle events, risks have been identified with potential injury due to the suit characteristics described above. To address the risk concerns, a test series has been developed in coordination with the Injury Biomechanics Research Laboratory [IBRL] to evaluate the likelihood and consequences of these potential issues. Testing includes use of Anthropomorphic Test Devices [ATDs; vernacularly referred to as "crash test dummies"], Post Mortem Human Subjects [PMHS], and representative seat/suit hardware in combination with high linear acceleration events. The ensuing treatment focuses on test purpose and objectives; test hardware, facility, and setup; and preliminary results.

  16. Teaching Case: Enterprise Architecture Specification Case Study

    ERIC Educational Resources Information Center

    Steenkamp, Annette Lerine; Alawdah, Amal; Almasri, Osama; Gai, Keke; Khattab, Nidal; Swaby, Carval; Abaas, Ramy

    2013-01-01

    A graduate course in enterprise architecture had a team project component in which a real-world business case, provided by an industry sponsor, formed the basis of the project charter and the architecture statement of work. The paper aims to share the team project experience on developing the architecture specifications based on the business case…

  17. Developing Dynamic Field Theory Architectures for Embodied Cognitive Systems with cedar.

    PubMed

    Lomp, Oliver; Richter, Mathis; Zibner, Stephan K U; Schöner, Gregor

    2016-01-01

    Embodied artificial cognitive systems, such as autonomous robots or intelligent observers, connect cognitive processes to sensory and effector systems in real time. Prime candidates for such embodied intelligence are neurally inspired architectures. While components such as forward neural networks are well established, designing pervasively autonomous neural architectures remains a challenge. This includes the problem of tuning the parameters of such architectures so that they deliver specified functionality under variable environmental conditions and retain these functions as the architectures are expanded. The scaling and autonomy problems are solved, in part, by dynamic field theory (DFT), a theoretical framework for the neural grounding of sensorimotor and cognitive processes. In this paper, we address how to efficiently build DFT architectures that control embodied agents and how to tune their parameters so that the desired cognitive functions emerge while such agents are situated in real environments. In DFT architectures, dynamic neural fields or nodes are assigned dynamic regimes, that is, attractor states and their instabilities, from which cognitive function emerges. Tuning thus amounts to determining values of the dynamic parameters for which the components of a DFT architecture are in the specified dynamic regime under the appropriate environmental conditions. The process of tuning is facilitated by the software framework cedar, which provides a graphical interface to build and execute DFT architectures. It enables users to change dynamic parameters online and to visualize the activation states of any component while the agent is receiving sensory inputs in real time. Using a simple example, we take the reader through the workflow of conceiving of DFT architectures, implementing them on embodied agents, tuning their parameters, and assessing performance while the system is coupled to real sensory inputs.
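
    A hedged sketch of the kind of dynamics a DFT architecture is built from is given below: a single one-dimensional dynamic neural field of Amari type, integrated with Euler steps so that a localized input induces a stabilized activation peak. cedar provides such fields as configurable components; the parameter values here are illustrative and not a tuned architecture.

```python
# Hedged sketch of a one-dimensional dynamic neural field (Amari-type),
# integrated with Euler steps. A localized stimulus plus lateral interaction
# (local excitation, global inhibition) forms a stable activation peak.
import numpy as np

def sigmoid(u, beta=4.0):
    return 1.0 / (1.0 + np.exp(-beta * u))

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

x = np.linspace(0, 100, 101)
u = np.full_like(x, -5.0)                       # field activation, resting level h
kernel = 2.0 * gaussian(x - 50, 0, 3) - 0.6     # local excitation, global inhibition
stimulus = 6.0 * gaussian(x, 30, 4)             # localized input at position 30
tau, dt, h = 20.0, 1.0, -5.0

for _ in range(200):
    interaction = np.convolve(sigmoid(u), kernel, mode="same")
    u += dt / tau * (-u + h + stimulus + interaction)

print("peak location:", x[np.argmax(u)])        # activation peaks at the stimulated site
```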

  18. Developing Dynamic Field Theory Architectures for Embodied Cognitive Systems with cedar

    PubMed Central

    Lomp, Oliver; Richter, Mathis; Zibner, Stephan K. U.; Schöner, Gregor

    2016-01-01

    Embodied artificial cognitive systems, such as autonomous robots or intelligent observers, connect cognitive processes to sensory and effector systems in real time. Prime candidates for such embodied intelligence are neurally inspired architectures. While components such as forward neural networks are well established, designing pervasively autonomous neural architectures remains a challenge. This includes the problem of tuning the parameters of such architectures so that they deliver specified functionality under variable environmental conditions and retain these functions as the architectures are expanded. The scaling and autonomy problems are solved, in part, by dynamic field theory (DFT), a theoretical framework for the neural grounding of sensorimotor and cognitive processes. In this paper, we address how to efficiently build DFT architectures that control embodied agents and how to tune their parameters so that the desired cognitive functions emerge while such agents are situated in real environments. In DFT architectures, dynamic neural fields or nodes are assigned dynamic regimes, that is, attractor states and their instabilities, from which cognitive function emerges. Tuning thus amounts to determining values of the dynamic parameters for which the components of a DFT architecture are in the specified dynamic regime under the appropriate environmental conditions. The process of tuning is facilitated by the software framework cedar, which provides a graphical interface to build and execute DFT architectures. It enables users to change dynamic parameters online and to visualize the activation states of any component while the agent is receiving sensory inputs in real time. Using a simple example, we take the reader through the workflow of conceiving of DFT architectures, implementing them on embodied agents, tuning their parameters, and assessing performance while the system is coupled to real sensory inputs. PMID:27853431

  19. A real-time architecture for time-aware agents.

    PubMed

    Prouskas, Konstantinos-Vassileios; Pitt, Jeremy V

    2004-06-01

    This paper describes the specification and implementation of a new three-layer time-aware agent architecture. This architecture is designed for applications and environments where societies of humans and agents play equally active roles, but interact and operate in completely different time frames. The architecture consists of three layers: the April real-time run-time (ART) layer, the time aware layer (TAL), and the application agents layer (AAL). The ART layer forms the underlying real-time agent platform. An original online, real-time, dynamic priority-based scheduling algorithm is described for scheduling the computation time of agent processes, and it is shown that the algorithm's O(n) complexity and scalable performance are sufficient for application in real-time domains. The TAL layer forms an abstraction layer through which human and agent interactions are temporally unified, that is, handled in a common way irrespective of their temporal representation and scale. A novel O(n2) interaction scheduling algorithm is described for predicting and guaranteeing interactions' initiation and completion times. The time-aware predicting component of a workflow management system is also presented as an instance of the AAL layer. The described time-aware architecture addresses two key challenges in enabling agents to be effectively configured and applied in environments where humans and agents play equally active roles. It provides flexibility and adaptability in its real-time mechanisms while placing them under direct agent control, and it temporally unifies human and agent interactions.
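
    The following sketch illustrates the general idea of an online, O(n) dynamic priority-based selection: each scheduling cycle scans the runnable agent processes once and picks the most urgent one. It is an illustration under assumed urgency and deadline fields, not the ART layer's actual algorithm.

```python
# Hedged sketch of an O(n) dynamic priority scheduler: one pass per cycle over
# the runnable agent processes, choosing the most urgent (highest priority per
# unit of remaining slack). Field names and the urgency rule are illustrative.
from dataclasses import dataclass

@dataclass
class AgentProcess:
    name: str
    priority: float       # importance assigned by the agent layer
    deadline_ms: float    # time by which the process must complete
    budget_ms: float      # computation time still required

def pick_next(processes, now_ms):
    """Single O(n) pass: choose the runnable process with the highest urgency."""
    best, best_urgency = None, float("-inf")
    for p in processes:
        if p.budget_ms <= 0:
            continue
        slack = max(p.deadline_ms - now_ms, 1e-3)
        urgency = p.priority / slack
        if urgency > best_urgency:
            best, best_urgency = p, urgency
    return best

if __name__ == "__main__":
    procs = [AgentProcess("planner", 1.0, 500, 40),
             AgentProcess("ui_reply", 3.0, 120, 10),
             AgentProcess("logger", 0.2, 2000, 5)]
    print(pick_next(procs, now_ms=0).name)   # "ui_reply" is most urgent
```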

  20. ITS system specification. Appendix A, architectural trade-off analysis

    DOT National Transportation Integrated Search

    1997-01-01

    The objective of the Polaris Project is to define an Intelligent Transportation Systems (ITS) architecture for the state of Minnesota. An architecture is a framework that defines how multiple ITS Components interrelate and contribute to the overall I...

  1. Insider Threat Security Reference Architecture

    DTIC Science & Technology

    2012-04-01

    …this challenge. The Components of the ITSRA: Figure 2 shows the four layers of the ITSRA, including the Business Security Architecture and Data Security Architecture layers. The Business Security layer… …organizations improve their level of preparedness to address the insider threat.

  2. Common Readout Unit (CRU) - A new readout architecture for the ALICE experiment

    NASA Astrophysics Data System (ADS)

    Mitra, J.; Khan, S. A.; Mukherjee, S.; Paul, R.

    2016-03-01

    The ALICE experiment at the CERN Large Hadron Collider (LHC) is presently undergoing a major upgrade in order to fully exploit the scientific potential of the upcoming high luminosity run, scheduled to start in the year 2021. The high interaction rate and the large event size will result in an experimental data flow of about 1 TB/s from the detectors, which needs to be processed before being sent to the online computing system and data storage. This processing is done in a dedicated Common Readout Unit (CRU), proposed for data aggregation, trigger and timing distribution, and control moderation. It acts as a common interface between sub-detector electronic systems, the computing system, and trigger processors. The interface links include GBT, TTC-PON and PCIe. GBT (Gigabit Transceiver) is used for detector data payload transmission and as a fixed-latency path for trigger distribution between the CRU and detector readout electronics. TTC-PON (Timing, Trigger and Control via Passive Optical Network) is employed for time-multiplexed trigger distribution between the CRU and the Central Trigger Processor (CTP). PCIe (Peripheral Component Interconnect Express) is the high-speed serial computer expansion bus standard used for bulk data transport between CRU boards and processors. In this article, we give an overview of the CRU architecture in ALICE, discuss the different interfaces, and describe the firmware design and implementation of the CRU on the LHCb PCIe40 board.

  3. Systematicity and a Categorical Theory of Cognitive Architecture: Universal Construction in Context.

    PubMed

    Phillips, Steven; Wilson, William H

    2016-01-01

    Why does the capacity to think certain thoughts imply the capacity to think certain other, structurally related, thoughts? Despite decades of intensive debate, cognitive scientists have yet to reach a consensus on an explanation for this property of cognitive architecture-the basic processes and modes of composition that together afford cognitive capacity-called systematicity. Systematicity is generally considered to involve a capacity to represent/process common structural relations among the equivalently cognizable entities. However, the predominant theoretical approaches to the systematicity problem, i.e., classical (symbolic) and connectionist (subsymbolic), require arbitrary (ad hoc) assumptions to derive systematicity. That is, their core principles and assumptions do not provide the necessary and sufficient conditions from which systematicity follows, as required of a causal theory. Hence, these approaches fail to fully explain why systematicity is a (near) universal property of human cognition, albeit in restricted contexts. We review an alternative, category theory approach to the systematicity problem. As a mathematical theory of structure, category theory provides necessary and sufficient conditions for systematicity in the form of universal construction: each systematically related cognitive capacity is composed of a common component and a unique component. Moreover, every universal construction can be viewed as the optimal construction in the given context (category). From this view, universal constructions are derived from learning, as an optimization. The ultimate challenge, then, is to explain the determination of context. If context is a category, then a natural extension toward addressing this question is higher-order category theory, where categories themselves are the objects of construction.

  4. Functional Interface Considerations within an Exploration Life Support System Architecture

    NASA Technical Reports Server (NTRS)

    Perry, Jay L.; Sargusingh, Miriam J.; Toomarian, Nikzad

    2016-01-01

    As notional life support system (LSS) architectures are developed and evaluated, myriad options must be considered pertaining to process technologies, components, and equipment assemblies. Each option must be evaluated relative to its impact on key functional interfaces within the LSS architecture. A leading notional architecture has been developed to guide the path toward realizing future crewed space exploration goals. This architecture includes atmosphere revitalization, water recovery and management, and environmental monitoring subsystems. Guiding requirements for developing this architecture are summarized and important interfaces within the architecture are discussed. The role of environmental monitoring within the architecture is described.

  5. Automated visual inspection system based on HAVNET architecture

    NASA Astrophysics Data System (ADS)

    Burkett, K.; Ozbayoglu, Murat A.; Dagli, Cihan H.

    1994-10-01

    In this study, the HAusdorff-Voronoi NETwork (HAVNET) developed at the UMR Smart Engineering Systems Lab is tested in the recognition of mounted circuit components commonly used in printed circuit board assembly systems. The automated visual inspection system used consists of a CCD camera, neural-network-based image processing software, and a data acquisition card connected to a PC. The experiments are run in the Smart Engineering Systems Lab in the Engineering Management Dept. of the University of Missouri-Rolla. The performance analysis shows that the vision system is capable of recognizing different components under uncontrolled lighting conditions without being affected by rotation or scale differences. The results obtained are promising and the system can be used in real manufacturing environments. Currently the system is being customized for a specific manufacturing application.

  6. Move-tecture: A Conceptual Framework for Designing Movement in Architecture

    NASA Astrophysics Data System (ADS)

    Yilmaz, Irem

    2017-10-01

    Along with the technological improvements of our age, it is now possible for movement to become one of the basic components of the architectural space. Accordingly, the architectural construction of movement changes both our architectural production practices and our understanding of architectural space. However, existing design concepts and approaches are insufficient to discuss and understand this change. In this respect, this study aims to form a conceptual framework on the relationship of architecture and movement. In this sense, the conceptualization of move-tecture is developed to study the architectural construction of movement and the potentials of spatial creation through architecturally constructed movement. Move-tecture is a conceptualization that treats movement as a basic component of spatial creation. It presents the framework of a qualitative categorization for the design of moving architectural structures. However, this categorization is a flexible one that can evolve in the direction of the expanding possibilities of architectural design and changing living conditions. With this understanding, six categories have been defined within the context of the article: Topological Organization, Choreographic Formation, Kinetic Structuring, Corporeal Constitution, Technological Configuration and Interactional Patterning. In line with these categories, a multifaceted perspective on moving architectural structures is promoted. It is intended that such an understanding constitutes a new initiative in the design practices carried out in this area and provides a conceptual basis for the discussions to be developed.

  7. Semantically Enhanced Online Configuration of Feedback Control Schemes.

    PubMed

    Milis, Georgios M; Panayiotou, Christos G; Polycarpou, Marios M

    2018-03-01

    Recent progress toward the realization of the "Internet of Things" has improved the ability of physical and soft/cyber entities to operate effectively within large-scale, heterogeneous systems. It is important that such capacity be accompanied by feedback control capabilities sufficient to ensure that the overall systems behave according to their specifications and meet their functional objectives. To achieve this, such systems require new architectures that facilitate the online deployment, composition, interoperability, and scalability of control system components. Most current control systems lack scalability and interoperability because their design is based on a fixed configuration of specific components, with knowledge of their individual characteristics only implicitly passed through the design. This paper addresses the need for flexibility when replacing components or installing new components, which might occur when an existing component is upgraded or when a new application requires a new component, without the need to readjust or redesign the overall system. A semantically enhanced feedback control architecture is introduced for a class of systems, aimed at accommodating new components into a closed-loop control framework by exploiting the semantic inference capabilities of an ontology-based knowledge model. This architecture supports continuous operation of the control system, a crucial property for large-scale systems for which interruptions have negative impact on key performance metrics that may include human comfort and welfare or economy costs. A case-study example from the smart buildings domain is used to illustrate the proposed architecture and semantic inference mechanisms.
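
    One way to picture the semantic matching described above is a registry of component descriptions that a control loop queries by role. The sketch below uses a plain dictionary lookup where a real system would use an ontology and a reasoner; all identifiers are invented for illustration.

```python
# Hedged sketch of semantically matching components to control-loop roles:
# each component publishes a small description, and the system selects one
# whose declared capability satisfies the role to be filled. A dictionary
# lookup stands in for ontology-based inference; all names are illustrative.

COMPONENT_REGISTRY = [
    {"id": "sensor_t1", "measures": "room_temperature", "unit": "celsius"},
    {"id": "sensor_co2", "measures": "co2_concentration", "unit": "ppm"},
    {"id": "actuator_v1", "actuates": "heating_valve", "range": (0.0, 1.0)},
]

def find_component(role, quantity):
    """Pick the first registered component that can play the requested role."""
    for desc in COMPONENT_REGISTRY:
        if desc.get(role) == quantity:
            return desc["id"]
    return None

if __name__ == "__main__":
    # A temperature control loop asks for a room-temperature sensor and a
    # heating-valve actuator; replacing a component only requires registering
    # a new description, not redesigning the loop.
    print(find_component("measures", "room_temperature"))   # sensor_t1
    print(find_component("actuates", "heating_valve"))      # actuator_v1
```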

  8. A SCORM Thin Client Architecture for E-Learning Systems Based on Web Services

    ERIC Educational Resources Information Center

    Casella, Giovanni; Costagliola, Gennaro; Ferrucci, Filomena; Polese, Giuseppe; Scanniello, Giuseppe

    2007-01-01

    In this paper we propose an architecture of e-learning systems characterized by the use of Web services and a suitable middleware component. These technical infrastructures allow us to extend the system with new services as well as to integrate and reuse heterogeneous software e-learning components. Moreover, they let us better support the…

  9. Component Architectures and Web-Based Learning Environments

    ERIC Educational Resources Information Center

    Ferdig, Richard E.; Mishra, Punya; Zhao, Yong

    2004-01-01

    The Web has caught the attention of many educators as an efficient communication medium and content delivery system. But we feel there is another aspect of the Web that has not been given the attention it deserves. We call this aspect of the Web its "component architecture." Briefly it means that on the Web one can develop very complex…

  10. A phase one AR/C system design

    NASA Technical Reports Server (NTRS)

    Kachmar, Peter M.; Polutchko, Robert J.; Matusky, Martin; Chu, William; Jackson, William; Montez, Moises

    1991-01-01

    The Phase One AR&C System Design integrates an evolutionary design based on the legacy of previous mission successes, flight tested components from manned Rendezvous and Proximity Operations (RPO) space programs, and additional AR&C components validated using proven methods. The Phase One system has a modular, open architecture with the standardized interfaces proposed for Space Station Freedom system architecture.

  11. Immunogold scanning electron microscopy can reveal the polysaccharide architecture of xylem cell walls

    PubMed Central

    Sun, Yuliang; Juzenas, Kevin

    2017-01-01

    Abstract Immunofluorescence microscopy (IFM) and immunogold transmission electron microscopy (TEM) are the two main techniques commonly used to detect polysaccharides in plant cell walls. Both are important in localizing cell wall polysaccharides, but both have major limitations, such as low resolution in IFM and restricted sample size for immunogold TEM. In this study, we have developed a robust technique that combines immunocytochemistry with scanning electron microscopy (SEM) to study cell wall polysaccharide architecture in xylem cells at high resolution over large areas of sample. Using multiple cell wall monoclonal antibodies (mAbs), this immunogold SEM technique reliably localized groups of hemicellulosic and pectic polysaccharides in the cell walls of five different xylem structures (vessel elements, fibers, axial and ray parenchyma cells, and tyloses). This demonstrates its important advantages over the other two methods for studying cell wall polysaccharide composition and distribution in these structures. In addition, it can show the three-dimensional distribution of a polysaccharide group in the vessel lateral wall and the polysaccharide components in the cell wall of developing tyloses. This technique, therefore, should be valuable for understanding the cell wall polysaccharide composition, architecture and functions of diverse cell types. PMID:28398585

  12. Compositional Specification of Software Architecture

    NASA Technical Reports Server (NTRS)

    Penix, John; Lau, Sonie (Technical Monitor)

    1998-01-01

    This paper describes our experience using parameterized algebraic specifications to model properties of software architectures. The goal is to model the decomposition of requirements independent of the style used to implement the architecture. We begin by providing an overview of the role of architecture specification in software development. We then describe how architecture specifications are built up from component and connector specifications and give an overview of insights gained from a case study used to validate the method.

  13. A federated design for a neurobiological simulation engine: the CBI federated software architecture.

    PubMed

    Cornelis, Hugo; Coop, Allan D; Bower, James M

    2012-01-01

    Simulator interoperability and extensibility has become a growing requirement in computational biology. To address this, we have developed a federated software architecture. It is federated by its union of independent disparate systems under a single cohesive view, provides interoperability through its capability to communicate, execute programs, or transfer data among different independent applications, and supports extensibility by enabling simulator expansion or enhancement without the need for major changes to system infrastructure. Historically, simulator interoperability has relied on development of declarative markup languages such as the neuron modeling language NeuroML, while simulator extension typically occurred through modification of existing functionality. The software architecture we describe here allows for both these approaches. However, it is designed to support alternative paradigms of interoperability and extensibility through the provision of logical relationships and defined application programming interfaces. They allow any appropriately configured component or software application to be incorporated into a simulator. The architecture defines independent functional modules that run stand-alone. They are arranged in logical layers that naturally correspond to the occurrence of high-level data (biological concepts) versus low-level data (numerical values) and distinguish data from control functions. The modular nature of the architecture and its independence from a given technology facilitates communication about similar concepts and functions for both users and developers. It provides several advantages for multiple independent contributions to software development. Importantly, these include: (1) Reduction in complexity of individual simulator components when compared to the complexity of a complete simulator, (2) Documentation of individual components in terms of their inputs and outputs, (3) Easy removal or replacement of unnecessary or obsoleted components, (4) Stand-alone testing of components, and (5) Clear delineation of the development scope of new components.

  14. Judicious use of custom development in an open source component architecture

    NASA Astrophysics Data System (ADS)

    Bristol, S.; Latysh, N.; Long, D.; Tekell, S.; Allen, J.

    2014-12-01

    Modern software engineering is not as much programming from scratch as innovative assembly of existing components. Seamlessly integrating disparate components into scalable, performant architecture requires sound engineering craftsmanship and can often result in increased cost efficiency and accelerated capabilities if software teams focus their creativity on the edges of the problem space. ScienceBase is part of the U.S. Geological Survey scientific cyberinfrastructure, providing data and information management, distribution services, and analysis capabilities in a way that strives to follow this pattern. ScienceBase leverages open source NoSQL and relational databases, search indexing technology, spatial service engines, numerous libraries, and one proprietary but necessary software component in its architecture. The primary engineering focus is cohesive component interaction, including construction of a seamless Application Programming Interface (API) across all elements. The API allows researchers and software developers alike to leverage the infrastructure in unique, creative ways. Scaling the ScienceBase architecture and core API with increasing data volume (more databases) and complexity (integrated science problems) is a primary challenge addressed by judicious use of custom development in the component architecture. Other data management and informatics activities in the earth sciences have independently resolved to a similar design of reusing and building upon established technology and are working through similar issues for managing and developing information (e.g., U.S. Geoscience Information Network; NASA's Earth Observing System Clearing House; GSToRE at the University of New Mexico). Recent discussions facilitated through the Earth Science Information Partners are exploring potential avenues to exploit the implicit relationships between similar projects for explicit gains in our ability to more rapidly advance global scientific cyberinfrastructure.

  15. A Federated Design for a Neurobiological Simulation Engine: The CBI Federated Software Architecture

    PubMed Central

    Cornelis, Hugo; Coop, Allan D.; Bower, James M.

    2012-01-01

    Simulator interoperability and extensibility has become a growing requirement in computational biology. To address this, we have developed a federated software architecture. It is federated by its union of independent disparate systems under a single cohesive view, provides interoperability through its capability to communicate, execute programs, or transfer data among different independent applications, and supports extensibility by enabling simulator expansion or enhancement without the need for major changes to system infrastructure. Historically, simulator interoperability has relied on development of declarative markup languages such as the neuron modeling language NeuroML, while simulator extension typically occurred through modification of existing functionality. The software architecture we describe here allows for both these approaches. However, it is designed to support alternative paradigms of interoperability and extensibility through the provision of logical relationships and defined application programming interfaces. They allow any appropriately configured component or software application to be incorporated into a simulator. The architecture defines independent functional modules that run stand-alone. They are arranged in logical layers that naturally correspond to the occurrence of high-level data (biological concepts) versus low-level data (numerical values) and distinguish data from control functions. The modular nature of the architecture and its independence from a given technology facilitates communication about similar concepts and functions for both users and developers. It provides several advantages for multiple independent contributions to software development. Importantly, these include: (1) Reduction in complexity of individual simulator components when compared to the complexity of a complete simulator, (2) Documentation of individual components in terms of their inputs and outputs, (3) Easy removal or replacement of unnecessary or obsoleted components, (4) Stand-alone testing of components, and (5) Clear delineation of the development scope of new components. PMID:22242154

  16. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    PubMed

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.
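
    As a point of reference for the architecture being selected over, the sketch below shows a single-level mixture-of-experts forward pass: a softmax gate weights the outputs of several linear experts. The Bayesian machinery of the article (MCMC over parameters, pruning of weakly used experts, hypothesis tests on inputs) is not reproduced; the dimensions and weights are arbitrary.

```python
# Hedged sketch of a mixture-of-experts forward pass: a softmax gate mixes the
# predictions of several linear experts. Experts whose average gate weight is
# near zero are the candidates a pruning procedure would remove.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_predict(x, gate_w, expert_w):
    """x: (n, d); gate_w: (d, k); expert_w: (k, d). Returns (n,) predictions."""
    gates = softmax(x @ gate_w)               # (n, k) mixing proportions
    expert_out = x @ expert_w.T               # (n, k) per-expert linear outputs
    return (gates * expert_out).sum(axis=1)   # gate-weighted combination

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.normal(size=(5, 3))
    gate_w = rng.normal(size=(3, 2))          # two experts
    expert_w = rng.normal(size=(2, 3))
    print(moe_predict(x, gate_w, expert_w))
```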

  17. A unified approach to the design of clinical reporting systems.

    PubMed

    Gouveia-Oliveira, A; Salgado, N C; Azevedo, A P; Lopes, L; Raposo, V D; Almeida, I; de Melo, F G

    1994-12-01

    Computer-based Clinical Reporting Systems (CRS) for diagnostic departments that use structured data entry have a number of functional and structural affinities suggesting that a common software architecture for CRS may be defined. Such an architecture should allow easy expandability and reusability of a CRS. We report the development methodology and the architecture of SISCOPE, a CRS originally designed for gastrointestinal endoscopy that is expandable and reusable. Its main components are a patient database, a knowledge base, a reports base, and screen and reporting engines. The knowledge base contains the description of the controlled vocabulary and all the information necessary to control the menu system, and is easily accessed and modified with a conventional text editor. The structure of the controlled vocabulary is formally presented as an entity-relationship diagram. The screen engine drives a dynamic user interface and the reporting engine automatically creates a medical report; both engines operate by following a set of rules and the information contained in the knowledge base. Clinical experience has shown this architecture to be highly flexible and to allow frequent modifications of both the vocabulary and the menu system. This structure provided increased collaboration among development teams, insulating the domain expert from the details of the database, and enabling him to modify the system as necessary and to test the changes immediately. The system has also been reused in several different domains.
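
    The separation described above, with a text-editable knowledge base driving both the menu system and the report text, can be sketched as follows. The vocabulary entries and phrasing templates are invented for illustration and are not SISCOPE's actual content.

```python
# Hedged sketch of a knowledge-base-driven reporting system: the controlled
# vocabulary and phrasing rules live in one editable structure, while separate
# "screen" and "reporting" engines derive menus and report prose from it.

KNOWLEDGE_BASE = {
    "finding": {
        "menu_label": "Endoscopic finding",
        "terms": {
            "normal": "The examined mucosa appears normal.",
            "erosion": "Mucosal erosion is observed in the {site}.",
            "ulcer": "An ulcer measuring {size_mm} mm is seen in the {site}.",
        },
    },
}

def menu_options(concept):
    """Screen engine: derive menu choices from the knowledge base."""
    entry = KNOWLEDGE_BASE[concept]
    return entry["menu_label"], sorted(entry["terms"])

def render_report(concept, term, **details):
    """Reporting engine: turn a structured selection into report prose."""
    return KNOWLEDGE_BASE[concept]["terms"][term].format(**details)

if __name__ == "__main__":
    print(menu_options("finding"))
    print(render_report("finding", "ulcer", size_mm=8, site="gastric antrum"))
```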

  18. Architecture of a wireless Personal Assistant for telemedical diabetes care.

    PubMed

    García-Sáez, Gema; Hernando, M Elena; Martínez-Sarriegui, Iñaki; Rigla, Mercedes; Torralba, Verónica; Brugués, Eulalia; de Leiva, Alberto; Gómez, Enrique J

    2009-06-01

    Advanced information technologies joined to the increasing use of continuous medical devices for monitoring and treatment, have made possible the definition of a new telemedical diabetes care scenario based on a hand-held Personal Assistant (PA). This paper describes the architecture, functionality and implementation of the PA, which communicates different medical devices in a personal wireless network. The PA is a mobile system for patients with diabetes connected to a telemedical center. The software design follows a modular approach to make the integration of medical devices or new functionalities independent from the rest of its components. Physicians can remotely control medical devices from the telemedicine server through the integration of the Common Object Request Broker Architecture (CORBA) and mobile GPRS communications. Data about PA modules' usage and patients' behavior evaluation come from a pervasive tracing system implemented into the PA. The PA architecture has been technically validated with commercially available medical devices during a clinical experiment for ambulatory monitoring and expert feedback through telemedicine. The clinical experiment has allowed defining patients' patterns of usage and preferred scenarios and it has proved the Personal Assistant's feasibility. The patients showed high acceptability and interest in the system as recorded in the usability and utility questionnaires. Future work will be devoted to the validation of the system with automatic control strategies from the telemedical center as well as with closed-loop control algorithms.

  19. The UAS control segment architecture: an overview

    NASA Astrophysics Data System (ADS)

    Gregory, Douglas A.; Batavia, Parag; Coats, Mark; Allport, Chris; Jennings, Ann; Ernst, Richard

    2013-05-01

    The Under Secretary of Defense (Acquisition, Technology and Logistics) directed the Services in 2009 to jointly develop and demonstrate a common architecture for command and control of Department of Defense (DoD) Unmanned Aircraft Systems (UAS) Groups 2 through 5. The UAS Control Segment (UCS) Architecture is an architecture framework for specifying and designing the software-intensive capabilities of current and emerging UCS systems in the DoD inventory. The UCS Architecture is based on Service Oriented Architecture (SOA) principles that will be adopted by each of the Services as a common basis for acquiring, integrating, and extending the capabilities of the UAS Control Segment. The UAS Task Force established the UCS Working Group to develop and support the UCS Architecture. The Working Group currently has over three hundred members, and is open to qualified representatives from DoD-approved defense contractors, academia, and the Government. The UCS Architecture is currently at Release 2.2, with Release 3.0 planned for July 2013. This paper discusses the current and planned elements of the UCS Architecture, and related activities of the UCS Community of Interest.

  20. Real World Data and Service Integration: Demonstrations and Lessons Learnt from the GEOSS Architecture Implementation Pilot Phase Four

    NASA Astrophysics Data System (ADS)

    Simonis, I.; Alameh, N.; Percivall, G.

    2012-04-01

    The GEOSS Architecture Implementation Pilots (AIP) develop and pilot new process and infrastructure components for the GEOSS Common Infrastructure (GCI) and the broader GEOSS architecture through an evolutionary development process consisting of a set of phases. Each phase addresses a set of Societal Benefit Areas (SBA) and geoinformatic topics. The first three phases consisted of architecture refinements based on interactions with users; component interoperability testing; and SBA-driven demonstrations. The fourth phase (AIP-4) documented here focused on fostering interoperability arrangements and common practices for GEOSS by facilitating access to priority earth observation data sources and by developing and testing specific clients and mediation components to enable such access. Additionally, AIP-4 supported the development of a thesaurus for earth observation parameters and tutorials to guide data providers to make their data available through GEOSS. The results of AIP-4 are documented in two engineering reports and captured in a series of videos posted online. Led by the Open Geospatial Consortium (OGC), AIP-4 built on contributions from over 60 organizations. This wide portfolio helped testing interoperability arrangements in a highly heterogeneous environment. AIP-4 participants cooperated closely to test available data sets, access services, and client applications in multiple workflows and set ups. Eventually, AIP-4 improved the accessibility of GEOSS datasets identified as supporting Critical Earth Observation Priorities by the GEO User Interface Committee (UIC), and increased the use of the data through promoting availability of new data services, clients, and applications. During AIP-4, A number of key earth observation data sources have been made available online at standard service interfaces, discovered using brokered search approaches, and processed and visualized in generalized client applications. AIP-4 demonstrated the level of interoperability that can be achieved using currently available standards and corresponding products and implementations. The AIP-4 integration testing process proved that the integration of heterogeneous data resources available via interoperability arrangements such as WMS, WFS, WCS and WPS indeed works. However, the integration often required various levels of customizations on the client side to accommodate for variations in the service implementations. Those variations seem to be based on both malfunctioning service implementations as well as varying interpretations of or inconsistencies in existing standards. Other interoperability issues identified revolve around missing metadata or using unrecognized identifiers in the description of GEOSS resources. Once such issues are resolved, continuous compliance testing is necessary to ensure minimizing variability of implementations. Once data providers can choose from a set of enhanced implementations for offering their data using consistent interoperability arrangements, the barrier to client and decision support implementation developers will be lowered, leading to true leveraging of earth observation data through GEOSS. AIP-4 results, lessons learnt from previous AIPs 1-3 and close coordination with the Infrastructure Implementation Board (IIB), the successor of the Architecture and Data Committee (ADC), form the basis in the current preparation phase for the next Architecture Implementation Pilot, AIP-5. 
The Call For Participation will be launched in February and the pilot will be conducted from May to November 2012. The current planning foresees a scenario-oriented approach, with possible scenarios coming from the domains of disaster management, health (including air quality and waterborne diseases), water resource observations, energy, biodiversity and climate change, and agriculture.

  1. Evolutionary dynamics of protein domain architecture in plants

    PubMed Central

    2012-01-01

    Background Protein domains are the structural, functional and evolutionary units of the protein. Protein domain architectures are the linear arrangements of domain(s) in individual proteins. Although the evolutionary history of protein domain architecture has been extensively studied in microorganisms, the evolutionary dynamics of domain architecture in the plant kingdom remains largely undefined. To address this question, we analyzed the lineage-based protein domain architecture content in 14 completed green plant genomes. Results Our analyses show that all 14 plant genomes maintain similar distributions of species-specific, single-domain, and multi-domain architectures. Approximately 65% of plant domain architectures are universally present in all plant lineages, while the remaining architectures are lineage-specific. Clear examples are seen of both the loss and gain of specific protein architectures in higher plants. There has been a dynamic, lineage-wise expansion of domain architectures during plant evolution. The data suggest that this expansion can be largely explained by changes in nuclear ploidy resulting from rounds of whole genome duplications. Indeed, there has been a decrease in the number of unique domain architectures when the genomes were normalized into a presumed ancestral genome that has not undergone whole genome duplications. Conclusions Our data show the conservation of universal domain architectures in all available plant genomes, indicating the presence of an evolutionarily conserved, core set of protein components. However, the occurrence of lineage-specific domain architectures indicates that domain architecture diversity has been maintained beyond these core components in plant genomes. Although several features of genome-wide domain architecture content are conserved in plants, the data clearly demonstrate lineage-wise, progressive changes and expansions of individual protein domain architectures, reinforcing the notion that plant genomes have undergone dynamic evolution. PMID:22252370

  2. The entropy reduction engine: Integrating planning, scheduling, and control

    NASA Technical Reports Server (NTRS)

    Drummond, Mark; Bresina, John L.; Kedar, Smadar T.

    1991-01-01

    The Entropy Reduction Engine, an architecture for the integration of planning, scheduling, and control, is described. The architecture is motivated, presented, and analyzed in terms of its different components; namely, problem reduction, temporal projection, and situated control rule execution. Experience with this architecture has motivated the recent integration of learning. The learning methods are described along with their impact on architecture performance.

  3. Defense Against National Vulnerabilities in Public Data

    DTIC Science & Technology

    2017-02-28

    ingestion of subscription-based precision data sources (Business Intelligence Databases, Monster, others). Flexible data architecture that allows for... Architecture Objective: Develop a data acquisition architecture that can successfully ingest 1,000,000 records per hour from up to 100 different open...data sources. Developed and operate a data acquisition architecture comprised of the four following major components: Robust website

  4. Technology Review of Multi-Agent Systems and Tools

    DTIC Science & Technology

    2005-06-01

    over a network, including the Internet. A web services architecture is the logical evolution of object-oriented analysis and design coupled with...the logical evolution of components geared towards the architecture, design, implementation, and deployment of e-business solutions. As in object...querying. The Web Services architecture describes the principles behind the next generation of e-business architectures, presenting a logical evolution

  5. A development framework for semantically interoperable health information systems.

    PubMed

    Lopez, Diego M; Blobel, Bernd G M E

    2009-02-01

    Semantic interoperability is a basic challenge to be met for new generations of distributed, communicating and co-operating health information systems (HIS) enabling shared care and e-Health. Analysis, design, implementation and maintenance of such systems and intrinsic architectures have to follow a unified development methodology. The Generic Component Model (GCM) is used as a framework for modeling any system to evaluate and harmonize state-of-the-art architecture development approaches and standards for health information systems as well as to derive a coherent architecture development framework for sustainable, semantically interoperable HIS and their components. The proposed methodology is based on the Rational Unified Process (RUP), taking advantage of its flexibility to be configured for integrating other architectural approaches such as Service-Oriented Architecture (SOA), Model-Driven Architecture (MDA), ISO 10746, and HL7 Development Framework (HDF). Existing architectural approaches have been analyzed, compared and finally harmonized towards an architecture development framework for advanced health information systems. Starting with the requirements for semantic interoperability derived from paradigm changes for health information systems, and supported by formal software process engineering methods, an appropriate development framework for semantically interoperable HIS has been provided. The usability of the framework has been exemplified in a public health scenario.

  6. Similar recent selection criteria associated with different behavioural effects in two dog breeds.

    PubMed

    Sundman, A-S; Johnsson, M; Wright, D; Jensen, P

    2016-11-01

    Selection during the last decades has split some established dog breeds into morphologically and behaviourally divergent types. These breed splits are interesting models for behaviour genetics since selection has often been for few and well-defined behavioural traits. The aim of this study was to explore behavioural differences between selection lines in golden and Labrador retrievers, in both of which a split between a common type (pet and conformation) and a field type (hunting) has occurred. We hypothesized that the behavioural profiles of the types would be similar in both breeds. Pedigree data and results from a standardized behavioural test from 902 goldens (698 common and 204 field) and 1672 Labradors (1023 and 649) were analysed. Principal component analysis revealed six behavioural components: curiosity, play interest, chase proneness, social curiosity, social greeting and threat display. Breed and type affected all components, but interestingly there was an interaction between breed and type for most components. For example, in Labradors the common type had higher curiosity than the field type (F(1,1668) = 18.359; P < 0.001), while the opposite was found in goldens (F(1,897) = 65.201; P < 0.001). Heritability estimates showed considerable genetic contributions to the behavioural variations in both breeds, but different heritabilities between the types within breeds were also found, suggesting different selection pressures. In conclusion, in spite of similar genetic origin and similar recent selection criteria, types behave differently in the breeds. This suggests that the genetic architecture related to behaviour differs between the breeds. © 2016 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.
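
    The principal component analysis referred to above reduces many correlated behavioural test scores to a handful of components. A minimal Python sketch of that reduction step follows; the synthetic score matrix, the function name pca and the choice of six components are illustrative assumptions, not the study's actual data or code.

        # Illustrative only: SVD-based principal component analysis on synthetic
        # behavioural test scores (the study's real data are not reproduced here).
        import numpy as np

        def pca(scores, n_components):
            """Return component loadings and per-individual component scores."""
            centred = scores - scores.mean(axis=0)
            # Rows of vt are the principal axes of the centred data matrix.
            _, _, vt = np.linalg.svd(centred, full_matrices=False)
            loadings = vt[:n_components]              # components x test variables
            return loadings, centred @ loadings.T     # projected scores

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            test_scores = rng.normal(size=(50, 12))   # 50 dogs x 12 test variables
            loadings, component_scores = pca(test_scores, n_components=6)
            print(loadings.shape, component_scores.shape)  # (6, 12) (50, 6)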

  7. Design Principles of Regulatory Networks: Searching for the Molecular Algorithms of the Cell

    PubMed Central

    Lim, Wendell A.; Lee, Connie M.; Tang, Chao

    2013-01-01

    A challenge in biology is to understand how complex molecular networks in the cell execute sophisticated regulatory functions. Here we explore the idea that there are common and general principles that link network structures to biological functions, principles that constrain the design solutions that evolution can converge upon for accomplishing a given cellular task. We describe approaches for classifying networks based on abstract architectures and functions, rather than on the specific molecular components of the networks. For any common regulatory task, can we define the space of all possible molecular solutions? Such inverse approaches might ultimately allow the assembly of a design table of core molecular algorithms that could serve as a guide for building synthetic networks and modulating disease networks. PMID:23352241

  8. Comparing architectural solutions of IPT application SDKs utilizing H.323 and SIP

    NASA Astrophysics Data System (ADS)

    Keskinarkaus, Anja; Korhonen, Jani; Ohtonen, Timo; Kilpelanaho, Vesa; Koskinen, Esa; Sauvola, Jaakko J.

    2001-07-01

    This paper presents two approaches to efficient service development for Internet Telephony. In the first approach, we consider services ranging from core call signaling features and media control, as stated in ITU-T's H.323, to end-user services that support user interaction. The second approach supports IETF's SIP protocol. We compare these from differing architectural perspectives and from the economy of network and terminal development, and propose efficient architecture models for both protocols. In their design, the main criteria were component independence, lightweight operation and portability in heterogeneous end-to-end environments. In the proposed architecture, the vertical division of call signaling and streaming media control logic allows for using the components either individually or combined, depending on the level of functionality required by an application.

  9. Different micromanipulation applications based on common modular control architecture

    NASA Astrophysics Data System (ADS)

    Sipola, Risto; Vallius, Tero; Pudas, Marko; Röning, Juha

    2010-01-01

    This paper validates a previously introduced scalable modular control architecture and shows how it can be used to implement research equipment. The validation is conducted by presenting different kinds of micromanipulation applications that use the architecture. Conditions of the micro-world are very different from those of the macro-world. Adhesive forces are significant compared to gravitational forces when micro-scale objects are manipulated. Manipulation is mainly conducted by automatic control relying on haptic feedback provided by force sensors. The validated architecture is a hierarchical layered hybrid architecture, including a reactive layer and a planner layer. The implementation of the architecture is modular, and the architecture has a lot in common with open architectures. Further, the architecture is extensible, scalable, portable and it enables reuse of modules. These are the qualities that we validate in this paper. To demonstrate the claimed features, we present different applications that require special control in micrometer, millimeter and centimeter scales. These applications include a device that measures cell adhesion, a device that examines properties of thin films, a device that measures adhesion of micro fibers and a device that examines properties of submerged gel produced by bacteria. Finally, we analyze how the architecture is used in these applications.

  10. ITS horizon scan : the societal, technical, and environmental trends that will influence ITS research and deployment.

    DOT National Transportation Integrated Search

    1996-06-01

    Evaluation of the intelligent transportation system (ITS) Architecture was one of the key components of the ITS National Architecture program. Evaluation of the architecture served three purposes: (1) It led to more informed decisions on how best to ...

  11. From hospital information system components to the medical record and clinical guidelines & protocols.

    PubMed

    Veloso, M; Estevão, N; Ferreira, P; Rodrigues, R; Costa, C T; Barahona, P

    1997-01-01

    This paper introduces an ongoing project towards the development of a new generation HIS, aiming at the integration of clinical and administrative information within a common framework. Its design incorporates explicit knowledge about domain objects and professional activities to be processed by the system together with related knowledge management services and act management services. The paper presents the conceptual model of the proposed HIS architecture, that supports a rich and fully integrated patient data model, enabling the implementation of a dynamic electronic patient record tightly coupled with computerised guideline knowledge bases.

  12. CALS Baseline Architecture Analysis of Weapons System. Technical Information: Army, Draft. Volume 8

    DOT National Transportation Integrated Search

    1989-09-01

    This effort was performed to provide a common framework for analysis and planning of CALS initiatives across the military services, leading eventually to the development of a common DoD-wide architecture for CALS. This study addresses Army technical ...

  13. Large liquid rocket engine transient performance simulation system

    NASA Technical Reports Server (NTRS)

    Mason, J. R.; Southwick, R. D.

    1989-01-01

    Phase 1 of the Rocket Engine Transient Simulation (ROCETS) program consists of seven technical tasks: architecture; system requirements; component and submodel requirements; submodel implementation; component implementation; submodel testing and verification; and subsystem testing and verification. These tasks were completed. Phase 2 of ROCETS consists of two technical tasks: Technology Test Bed Engine (TTBE) model data generation; and system testing verification. During this period, specific coding of the system processors was begun and the engineering representations of Phase 1 were expanded to produce a simple model of the TTBE. As the code was completed, some minor modifications to the system architecture centering on the global variable common, GLOBVAR, were necessary to increase processor efficiency. The engineering modules completed during Phase 2 are listed: INJTOO - main injector; MCHBOO - main chamber; NOZLOO - nozzle thrust calculations; PBRNOO - preburner; PIPE02 - compressible flow without inertia; PUMPOO - polytropic pump; ROTROO - rotor torque balance/speed derivative; and TURBOO - turbine. Detailed documentation of these modules is in the Appendix. In addition to the engineering modules, several submodules were also completed. These submodules include combustion properties, component performance characteristics (maps), and specific utilities. Specific coding was begun on the system configuration processor. All functions necessary for multiple module operation were completed but the SOLVER implementation is still under development. This system, the Verification Checkout Facility (VCF), allows interactive comparison of module results to stored data and provides an intermediate checkout of the processor code. After validation using the VCF, the engineering modules and submodules were used to build a simple TTBE.

  14. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems.

    PubMed

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-10-28

    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., Reo coordination language. With rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system, assuring, by design, scalability, interoperability and correctness of component cooperation.
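
    As a concrete illustration of the demand-response behaviour such an architecture coordinates, the sketch below greedily admits flexible household loads per time slot so that a shared neighbourhood power cap is respected. It is a hypothetical Python example; it does not use rCPCS, rCOS or the Reo coordination language, and all names and the scheduling rule are assumptions made for illustration.

        # Hypothetical sketch of cooperative demand response between households:
        # admit flexible loads slot by slot without exceeding a shared capacity.
        def schedule_flexible_loads(requests, capacity_kw):
            """requests maps household -> (slot, load_kw); returns slot -> admitted households."""
            admitted, used = {}, {}
            # Favour small loads first so as many households as possible are served.
            for household, (slot, load_kw) in sorted(requests.items(), key=lambda kv: kv[1][1]):
                if used.get(slot, 0.0) + load_kw <= capacity_kw:
                    used[slot] = used.get(slot, 0.0) + load_kw
                    admitted.setdefault(slot, []).append(household)
            return admitted

        if __name__ == "__main__":
            demo = {"house_a": (18, 2.0), "house_b": (18, 3.0),
                    "house_c": (18, 1.5), "house_d": (19, 2.5)}
            print(schedule_flexible_loads(demo, capacity_kw=5.0))
            # {18: ['house_c', 'house_a'], 19: ['house_d']}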

  15. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems

    PubMed Central

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-01-01

    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems’ architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., Reo coordination language. With rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system, assuring, by design, scalability, interoperability and correctness of component cooperation. PMID:27801829

  16. Hardware architecture design of image restoration based on time-frequency domain computation

    NASA Astrophysics Data System (ADS)

    Wen, Bo; Zhang, Jing; Jiao, Zipeng

    2013-10-01

    The image restoration algorithms based on time-frequency domain computation are highly mature and widely applied in engineering. To enable high-speed implementation of these algorithms, the TFDC hardware architecture is proposed. Firstly, the main module is designed by analyzing the common processing and numerical calculations. Then, to improve commonality, the iteration control module is planned for iterative algorithms. In addition, to reduce the computational cost and memory requirements, the necessary optimizations are suggested for the time-consuming modules, which include the two-dimensional FFT/IFFT and complex-valued calculations. Eventually, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The result shows that the TFDC hardware architecture and its optimizations can be applied to image restoration algorithms based on TFDC, with good algorithm commonality, hardware realizability and high efficiency.
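
    The dominant time-frequency-domain operations targeted here, two-dimensional FFT/IFFT and element-wise complex arithmetic, are the same ones used in classical frequency-domain restoration. The sketch below shows a minimal Wiener deconvolution in Python/NumPy as a software reference for that computation; the function names and the noise-to-signal constant are illustrative assumptions and are not taken from the paper's hardware design.

        # Reference software sketch of frequency-domain (Wiener) image restoration.
        import numpy as np

        def _psf_otf(psf, shape):
            """Pad a point-spread function to `shape`, centre it at the origin, return its FFT."""
            kernel = np.zeros(shape)
            kh, kw = psf.shape
            kernel[:kh, :kw] = psf
            kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))
            return np.fft.fft2(kernel)

        def wiener_deconvolve(blurred, psf, nsr=1e-2):
            """Restore an image with a known PSF; nsr is the noise-to-signal ratio."""
            H = _psf_otf(psf, blurred.shape)
            G = np.fft.fft2(blurred)
            W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # attenuate weak frequencies
            return np.real(np.fft.ifft2(W * G))

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            image = rng.random((64, 64))
            psf = np.ones((5, 5)) / 25.0               # simple box blur
            blurred = np.real(np.fft.ifft2(_psf_otf(psf, image.shape) * np.fft.fft2(image)))
            restored = wiener_deconvolve(blurred, psf)
            print(float(np.mean(np.abs(restored - image))))   # small restoration error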

  17. ASAC Executive Assistant Architecture Description Summary

    NASA Technical Reports Server (NTRS)

    Roberts, Eileen; Villani, James A.

    1997-01-01

    In this technical document, we describe the system architecture developed for the Aviation System Analysis Capability (ASAC) Executive Assistant (EA). We describe the genesis and role of the ASAC system, discuss the objectives of the ASAC system and provide an overview of components and models within the ASAC system, discuss our choice for an architecture methodology, the Domain Specific Software Architecture (DSSA), and the DSSA approach to developing a system architecture, and describe the development process and the results of the ASAC EA system architecture. The document has six appendices.

  18. OXC management and control system architecture with scalability, maintenance, and distributed managing environment

    NASA Astrophysics Data System (ADS)

    Park, Soomyung; Joo, Seong-Soon; Yae, Byung-Ho; Lee, Jong-Hyun

    2002-07-01

    In this paper, we present the Optical Cross-Connect (OXC) management and control system architecture, which offers scalability and robust maintenance and provides a distributed managing environment in the optical transport network. The OXC system we are developing, which is divided into hardware and internal and external software, is made up of the OXC subsystem with the Optical Transport Network (OTN) sub-layer hardware and the optical switch control system, the signaling control protocol subsystem performing User-to-Network Interface (UNI) and Network-to-Network Interface (NNI) signaling control, the Operation Administration Maintenance & Provisioning (OAM&P) subsystem, and the network management subsystem. The OXC management control system can support flexible expansion of the optical transport network, provide connectivity to heterogeneous external network elements, be added to or removed without interrupting OAM&P services, be operated remotely, provide a global view and detailed information for network planners and operators, and offer a Common Object Request Broker Architecture (CORBA) based open system architecture in which intelligent service networking functions can be added and deleted easily in the future. To meet these considerations, we adopt an object-oriented development method throughout system analysis, design, and implementation to build an OXC management control system with scalability, maintainability, and a distributed managing environment. As a consequence, the componentification of the OXC operation management functions of each subsystem makes maintenance robust and increases code reusability. Also, the component-based OXC management control system architecture will have flexibility and scalability by nature.

  19. Modular open RF architecture: extending VICTORY to RF systems

    NASA Astrophysics Data System (ADS)

    Melber, Adam; Dirner, Jason; Johnson, Michael

    2015-05-01

    Radio frequency products spanning multiple functions have become increasingly critical to the warfighter. Military use of the electromagnetic spectrum now includes communications, electronic warfare (EW), intelligence, and mission command systems. Due to the urgent needs of counterinsurgency operations, various quick reaction capabilities (QRCs) have been fielded to enhance warfighter capability. Although these QRCs were highly successful in their respective missions, they were designed independently, resulting in significant challenges when integrated on a common platform. This paper discusses how the Modular Open RF Architecture (MORA) addresses these challenges by defining an open architecture for multifunction missions that decomposes monolithic radio systems into high-level components with well-defined functions and interfaces. The functional decomposition maximizes hardware sharing while minimizing added complexity and cost due to modularization. MORA achieves significant size, weight and power (SWaP) savings by allowing hardware such as power amplifiers and antennas to be shared across systems. By separating signal conditioning from the processing that implements the actual radio application, MORA exposes previously inaccessible architecture points, providing system integrators with the flexibility to insert third-party capabilities to address technical challenges and emerging requirements. MORA leverages the Vehicular Integration for Command, Control, Communication, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR)/EW Interoperability (VICTORY) framework. This paper concludes by discussing how MORA, VICTORY and other standards such as OpenVPX are being leveraged by the U.S. Army Research, Development, and Engineering Command (RDECOM) Communications Electronics Research, Development, and Engineering Center (CERDEC) to define a converged architecture enabling rapid technology insertion, interoperability and reduced SWaP.

  20. Architectural approaches for HL7-based health information systems implementation.

    PubMed

    López, D M; Blobel, B

    2010-01-01

    Information systems integration is hard, especially when semantic and business process interoperability requirements need to be met. To succeed, a unified methodology, approaching different aspects of systems architecture such as business, information, computational, engineering and technology viewpoints, has to be considered. The paper contributes an analysis and demonstration of how the HL7 standard set can support health information systems integration. Based on the Health Information Systems Development Framework (HIS-DF), common architectural models for HIS integration are analyzed. The framework is a standard-based, consistent, comprehensive, customizable, scalable methodology that supports the design of semantically interoperable health information systems and components. Three main architectural models for system integration are analyzed: the point-to-point interface, the message server and the mediator models. Point-to-point interface and message server models are completely supported by traditional HL7 version 2 and version 3 messaging. The HL7 v3 standard specification, combined with service-oriented, model-driven approaches provided by HIS-DF, makes the mediator model possible. The different integration scenarios are illustrated by describing a proof-of-concept implementation of an integrated public health surveillance system based on Enterprise Java Beans technology. Selecting the appropriate integration architecture is a fundamental issue of any software development project. HIS-DF provides a unique methodological approach guiding the development of healthcare integration projects. The mediator model - offered by the HIS-DF and supported in HL7 v3 artifacts - is the most promising one, promoting the development of open, reusable, flexible, semantically interoperable, platform-independent, service-oriented and standard-based health information systems.
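
    The mediator model discussed above decouples the communicating systems: each system exchanges messages only with the mediator, which routes them to the subscribed receivers. The sketch below illustrates that routing idea in plain Python; it is a hypothetical example and does not use HL7 v3 artifacts, HIS-DF tooling or Enterprise Java Beans.

        # Hypothetical sketch of the mediator integration pattern: publishers and
        # subscribers are decoupled and only the mediator knows who receives what.
        from collections import defaultdict

        class Mediator:
            def __init__(self):
                self._subscribers = defaultdict(list)

            def subscribe(self, event_type, handler):
                self._subscribers[event_type].append(handler)

            def publish(self, event_type, payload):
                for handler in self._subscribers[event_type]:
                    handler(payload)

        if __name__ == "__main__":
            mediator = Mediator()
            # Surveillance and reporting both receive lab results without the
            # laboratory system knowing that either of them exists.
            mediator.subscribe("lab_result", lambda p: print("surveillance received", p))
            mediator.subscribe("lab_result", lambda p: print("reporting received", p))
            mediator.publish("lab_result", {"case_id": "12345", "test": "dengue", "result": "positive"})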

  1. The interplay between inflorescence development and function as the crucible of architectural diversity

    PubMed Central

    Harder, Lawrence D.; Prusinkiewicz, Przemyslaw

    2013-01-01

    Background Most angiosperms present flowers in inflorescences, which play roles in reproduction, primarily related to pollination, beyond those served by individual flowers alone. An inflorescence's overall reproductive contribution depends primarily on the three-dimensional arrangement of the floral canopy and its dynamics during its flowering period. These features depend in turn on characteristics of the underlying branching structure (scaffold) that supports and supplies water and nutrients to the floral canopy. This scaffold is produced by developmental algorithms that are genetically specified and hormonally mediated. Thus, the extensive inflorescence diversity evident among angiosperms evolves through changes in the developmental programmes that specify scaffold characteristics, which in turn modify canopy features that promote reproductive performance in a particular pollination and mating environment. Nevertheless, developmental and ecological aspects of inflorescences have typically been studied independently, limiting comprehensive understanding of the relations between inflorescence form, reproductive function, and evolution. Scope This review fosters an integrated perspective on inflorescences by summarizing aspects of their development and pollination function that enable and guide inflorescence evolution and diversification. Conclusions The architecture of flowering inflorescences comprises three related components: topology (branching patterns, flower number), geometry (phyllotaxis, internode and pedicel lengths, three-dimensional flower arrangement) and phenology (flower opening rate and longevity, dichogamy). Genetic and developmental evidence reveals that these components are largely subject to quantitative control. Consequently, inflorescence evolution proceeds along a multidimensional continuum. Nevertheless, some combinations of topology, geometry and phenology are represented more commonly than others, because they serve reproductive function particularly effectively. For wind-pollinated species, these combinations often represent compromise solutions to the conflicting physical influences on pollen removal, transport and deposition. For animal-pollinated species, dominant selective influences include the conflicting benefits of large displays for attracting pollinators and of small displays for limiting among-flower self-pollination. The variety of architectural components that comprise inflorescences enable diverse resolutions of these conflicts. PMID:23243190

  2. The Architecture Based Design Method

    DTIC Science & Technology

    2000-01-01

    implementation of components of different types. The software templates include a description of how components interact with shared services and also include citizenship responsibilities for components.

  3. Comprehensive multiplatform collaboration

    NASA Astrophysics Data System (ADS)

    Singh, Kundan; Wu, Xiaotao; Lennox, Jonathan; Schulzrinne, Henning G.

    2003-12-01

    We describe the architecture and implementation of our comprehensive multi-platform collaboration framework known as Columbia InterNet Extensible Multimedia Architecture (CINEMA). It provides a distributed architecture for collaboration using synchronous communications like multimedia conferencing, instant messaging, shared web-browsing, and asynchronous communications like discussion forums, shared files, voice and video mails. It allows seamless integration with various communication means like telephones, IP phones, web and electronic mail. In addition, it provides value-added services such as call handling based on location information and presence status. The paper discusses the media services needed for collaborative environment, the components provided by CINEMA and the interaction among those components.

  4. GFam: a platform for automatic annotation of gene families.

    PubMed

    Sasidharan, Rajkumar; Nepusz, Tamás; Swarbreck, David; Huala, Eva; Paccanaro, Alberto

    2012-10-01

    We have developed GFam, a platform for automatic annotation of gene/protein families. GFam provides a framework for genome initiatives and model organism resources to build domain-based families and derive meaningful functional labels, and offers a seamless approach to propagate functional annotation across periodic genome updates. GFam is a hybrid approach that uses a greedy algorithm to chain component domains from InterPro annotation provided by its 12 member resources, followed by a sequence-based connected component analysis of un-annotated sequence regions, to derive a consensus domain architecture for each sequence and subsequently generate families based on common architectures. Our integrated approach increases sequence coverage by 7.2 percentage points and residue coverage by 14.6 percentage points relative to the best single-constituent database within InterPro for the proteome of Arabidopsis. The true power of GFam lies in maximizing annotation provided by the different InterPro data sources that offer resource-specific coverage for different regions of a sequence. GFam's capability to capture higher sequence and residue coverage can be useful for genome annotation, comparative genomics and functional studies. GFam is general-purpose software and can be used for any collection of protein sequences. The software is open source and can be obtained from http://www.paccanarolab.org/software/gfam/.
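
    The final grouping step described above, collecting sequences that share the same consensus domain architecture into one family, can be sketched in a few lines of Python. The example below is a toy illustration with made-up sequence identifiers and domain names; GFam's actual pipeline (greedy domain chaining and connected-component analysis of un-annotated regions) is considerably more involved.

        # Toy sketch: group sequences into families by identical domain architectures.
        from collections import defaultdict

        def families_by_architecture(architectures):
            """Map 'domainA;domainB;...' architecture strings to member sequence IDs."""
            families = defaultdict(list)
            for seq_id, domains in architectures.items():
                families[";".join(domains)].append(seq_id)
            return dict(families)

        if __name__ == "__main__":
            demo = {
                "seq_001": ["NAC_dom"],
                "seq_002": ["NAC_dom"],
                "seq_003": ["Pkinase", "NAF"],
                "seq_004": ["Pkinase", "NAF"],
                "seq_005": ["Pkinase"],
            }
            for architecture, members in families_by_architecture(demo).items():
                print(architecture, "->", members)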

  5. A MoTe2-based light-emitting diode and photodetector for silicon photonic integrated circuits.

    PubMed

    Bie, Ya-Qing; Grosso, Gabriele; Heuck, Mikkel; Furchi, Marco M; Cao, Yuan; Zheng, Jiabao; Bunandar, Darius; Navarro-Moratalla, Efren; Zhou, Lin; Efetov, Dmitri K; Taniguchi, Takashi; Watanabe, Kenji; Kong, Jing; Englund, Dirk; Jarillo-Herrero, Pablo

    2017-12-01

    One of the current challenges in photonics is developing high-speed, power-efficient, chip-integrated optical communications devices to address the interconnects bottleneck in high-speed computing systems. Silicon photonics has emerged as a leading architecture, in part because of the promise that many components, such as waveguides, couplers, interferometers and modulators, could be directly integrated on silicon-based processors. However, light sources and photodetectors present ongoing challenges. Common approaches for light sources include one or few off-chip or wafer-bonded lasers based on III-V materials, but recent system architecture studies show advantages for the use of many directly modulated light sources positioned at the transmitter location. The most advanced photodetectors in the silicon photonic process are based on germanium, but this requires additional germanium growth, which increases the system cost. The emerging two-dimensional transition-metal dichalcogenides (TMDs) offer a path for optical interconnect components that can be integrated with silicon photonics and complementary metal-oxide-semiconductors (CMOS) processing by back-end-of-the-line steps. Here, we demonstrate a silicon waveguide-integrated light source and photodetector based on a p-n junction of bilayer MoTe 2 , a TMD semiconductor with an infrared bandgap. This state-of-the-art fabrication technology provides new opportunities for integrated optoelectronic systems.

  6. Using CORBA to integrate manufacturing cells to a virtual enterprise

    NASA Astrophysics Data System (ADS)

    Pancerella, Carmen M.; Whiteside, Robert A.

    1997-01-01

    It is critical in today's enterprises that manufacturing facilities are not isolated from design, planning, and other business activities and that information flows easily and bidirectionally between these activities. It is also important and cost-effective that COTS software, databases, and corporate legacy codes are well integrated in the information architecture. Further, much of the information generated during manufacturing must be dynamically accessible to engineering and business operations both in a restricted corporate intranet and on the internet. The software integration strategy in the Sandia Agile Manufacturing Testbed supports these enterprise requirements. We are developing a CORBA-based distributed object software system for manufacturing. Each physical machining device is a CORBA object and exports a common IDL interface to allow for rapid and dynamic insertion, deletion, and upgrading within the manufacturing cell. Cell management CORBA components access manufacturing devices without knowledge of any device-specific implementation. To support the flow of information from design onward, planning data is accessible to machinists on the shop floor. CORBA allows manufacturing components to be easily accessible to the enterprise. Dynamic clients can be created using web browsers and portable Java GUIs. A CORBA-OLE adapter allows integration to PC desktop applications. Other commercial software can access CORBA network objects in the information architecture through vendor APIs.
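
    The key idea, that every machining device exports one common interface so cell-management code never needs device-specific logic, can be illustrated without CORBA. The Python sketch below mimics the same contract locally with an abstract base class; the class and method names are hypothetical and are not the testbed's actual IDL.

        # Illustration of a common device contract: cell management drives any
        # device through the same two operations, regardless of its implementation.
        from abc import ABC, abstractmethod

        class MachiningDevice(ABC):
            @abstractmethod
            def load_program(self, nc_program: str) -> None: ...

            @abstractmethod
            def run(self) -> str: ...

        class Lathe(MachiningDevice):
            def load_program(self, nc_program: str) -> None:
                self._program = nc_program

            def run(self) -> str:
                return f"lathe finished {self._program}"

        class Mill(MachiningDevice):
            def load_program(self, nc_program: str) -> None:
                self._program = nc_program

            def run(self) -> str:
                return f"mill finished {self._program}"

        def run_cell(devices, nc_program):
            # Cell management touches only the common interface.
            for device in devices:
                device.load_program(nc_program)
                print(device.run())

        if __name__ == "__main__":
            run_cell([Lathe(), Mill()], "O1001 G90 G21")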

  7. Modular Approach to Launch Vehicle Design Based on a Common Core Element

    NASA Technical Reports Server (NTRS)

    Creech, Dennis M.; Threet, Grady E., Jr.; Philips, Alan D.; Waters, Eric D.; Baysinger, Mike

    2010-01-01

    With a heavy lift launch vehicle as the centerpiece of our nation's next exploration architecture's infrastructure, the Advanced Concepts Office at NASA's Marshall Space Flight Center initiated a study to examine the utilization of elements derived from a heavy lift launch vehicle for other potential launch vehicle applications. The premise of this study is to take a vehicle concept, which has been optimized for Lunar Exploration, and utilize the core stage with other existing or near existing stages and boosters to determine lift capabilities for alternative missions. This approach not only yields a vehicle matrix with a wide array of capabilities, but also produces an evolutionary pathway to a vehicle family based on a minimum development and production cost approach to a launch vehicle system architecture, instead of a purely performance driven approach. The upper stages and solid rocket booster selected for this study were chosen to reflect a cross-section of: modified existing assets in the form of a modified Delta IV upper stage and Castor-type boosters; potential near term launch vehicle component designs including an Ares I upper stage and 5-segment boosters; and longer lead vehicle components such as a Shuttle External Tank diameter upper stage. The results of this approach to a modular launch system are given in this paper.

  8. A MoTe2-based light-emitting diode and photodetector for silicon photonic integrated circuits

    NASA Astrophysics Data System (ADS)

    Bie, Ya-Qing; Grosso, Gabriele; Heuck, Mikkel; Furchi, Marco M.; Cao, Yuan; Zheng, Jiabao; Bunandar, Darius; Navarro-Moratalla, Efren; Zhou, Lin; Efetov, Dmitri K.; Taniguchi, Takashi; Watanabe, Kenji; Kong, Jing; Englund, Dirk; Jarillo-Herrero, Pablo

    2017-12-01

    One of the current challenges in photonics is developing high-speed, power-efficient, chip-integrated optical communications devices to address the interconnects bottleneck in high-speed computing systems. Silicon photonics has emerged as a leading architecture, in part because of the promise that many components, such as waveguides, couplers, interferometers and modulators, could be directly integrated on silicon-based processors. However, light sources and photodetectors present ongoing challenges. Common approaches for light sources include one or few off-chip or wafer-bonded lasers based on III-V materials, but recent system architecture studies show advantages for the use of many directly modulated light sources positioned at the transmitter location. The most advanced photodetectors in the silicon photonic process are based on germanium, but this requires additional germanium growth, which increases the system cost. The emerging two-dimensional transition-metal dichalcogenides (TMDs) offer a path for optical interconnect components that can be integrated with silicon photonics and complementary metal-oxide-semiconductors (CMOS) processing by back-end-of-the-line steps. Here, we demonstrate a silicon waveguide-integrated light source and photodetector based on a p-n junction of bilayer MoTe2, a TMD semiconductor with an infrared bandgap. This state-of-the-art fabrication technology provides new opportunities for integrated optoelectronic systems.

  9. Joint Common Architecture Demonstration (JCA Demo) Final Report

    DTIC Science & Technology

    2016-07-28

    approach for implementing open systems [16], formerly known as the Modular Open Systems Approach (MOSA). OSA is a business and technical strategy to... TECHNICAL REPORT RDMR-AD-16-01 JOINT COMMON ARCHITECTURE DEMONSTRATION (JCA DEMO) FINAL REPORT Scott A. Wigginton... Modular Avionics... Model-Based Engineering

  10. Numerical Propulsion System Simulation: A Common Tool for Aerospace Propulsion Being Developed

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Naiman, Cynthia G.

    2001-01-01

    The NASA Glenn Research Center is developing an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). This simulation is initially being used to support aeropropulsion in the analysis and design of aircraft engines. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the Aviation Safety Program and Advanced Space Transportation. NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes using the Common Object Request Broker Architecture (CORBA) in the NPSS Developer's Kit to facilitate collaborative engineering. The NPSS Developer's Kit will provide the tools to develop custom components and to use the CORBA capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities will extend NPSS from a zero-dimensional simulation tool to a multifidelity, multidiscipline system-level simulation tool for the full life cycle of an engine.

  11. An information model for a virtual private optical network (OVPN) using virtual routers (VRs)

    NASA Astrophysics Data System (ADS)

    Vo, Viet Minh Nhat

    2002-05-01

    This paper describes a virtual private optical network architecture (Optical VPN - OVPN) based on virtual routers (VRs). It improves over architectures suggested for virtual private networks by using virtual routers with optical networks. What is new in this architecture are the changes necessary to adapt to the devices and protocols used in optical networks. This paper also presents information models for the OVPN: at the architecture level and at the service level. These are extensions to DEN (Directory Enabled Networks) and CIM (Common Information Model) for OVPNs using VRs. The goal is to propose a common management model using policies.

  12. LEGOS: Object-based software components for mission-critical systems. Final report, June 1, 1995--December 31, 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-08-01

    An estimated 85% of the installed base of software is a custom application with a production quantity of one. In practice, almost 100% of military software systems are custom software. Paradoxically, the marginal costs of producing additional units are near zero. So why hasn't the software market, a market with high design costs and low production costs, evolved like other similar custom widget industries, such as automobiles and hardware chips? The military software industry seems immune to market pressures that have motivated a multilevel supply chain structure in other widget industries: design cost recovery, improved quality through specialization, and rapid assembly from purchased components. The primary goal of the ComponentWare Consortium (CWC) technology plan was to overcome barriers to building and deploying mission-critical information systems by using verified, reusable software components (Component Ware). The adoption of the ComponentWare infrastructure is predicated upon a critical mass of the leading platform vendors' inevitable adoption of emerging, object-based, distributed computing frameworks--initially CORBA and COM/OLE. The long-range goal of this work is to build and deploy military systems from verified reusable architectures. The promise of component-based applications is to enable developers to snap together new applications by mixing and matching prefabricated software components. A key result of this effort is the concept of reusable software architectures. A second important contribution is the notion that a software architecture is something that can be captured in a formal language and reused across multiple applications. The formalization and reuse of software architectures provide major cost and schedule improvements. The Unified Modeling Language (UML) is fast becoming the industry standard for object-oriented analysis and design notation for object-based systems. However, the lack of a standard real-time distributed object operating system, the lack of a standard Computer-Aided Software Environment (CASE) tool notation and the lack of a standard CASE tool repository have limited the realization of component software. The approach to fulfilling this need is the software component factory innovation. The factory approach takes advantage of emerging standards such as UML, CORBA, Java and the Internet. The key technical innovation of the software component factory is the ability to assemble and test new system configurations as well as assemble new tools on demand from existing tools and architecture design repositories.

  13. GUEST EDITORS' INTRODUCTION: Guest Editors' introduction

    NASA Astrophysics Data System (ADS)

    Guerraoui, Rachid; Vinoski, Steve

    1997-09-01

    The organization of a distributed system can have a tremendous impact on its capabilities, its performance, and its ability to evolve to meet changing requirements. For example, the client - server organization model has proven to be adequate for organizing a distributed system as a number of distributed servers that offer various functions to client processes across the network. However, it lacks peer-to-peer capabilities, and experience with the model has been predominantly in the context of local networks. To achieve peer-to-peer cooperation in a more global context, systems issues of scale, heterogeneity, configuration management, accounting and sharing are crucial, and the complexity of migrating from locally distributed to more global systems demands new tools and techniques. An emphasis on interfaces and modules leads to the modelling of a complex distributed system as a collection of interacting objects that communicate with each other only using requests sent to well defined interfaces. Although object granularity typically varies at different levels of a system architecture, the same object abstraction can be applied to various levels of a computing architecture. Since 1989, the Object Management Group (OMG), an international software consortium, has been defining an architecture for distributed object systems called the Object Management Architecture (OMA). At the core of the OMA is a `software bus' called an Object Request Broker (ORB), which is specified by the OMG Common Object Request Broker Architecture (CORBA) specification. The OMA distributed object model fits the structure of heterogeneous distributed applications, and is applied in all layers of the OMA. For example, each of the OMG Object Services, such as the OMG Naming Service, is structured as a set of distributed objects that communicate using the ORB. Similarly, higher-level OMA components such as Common Facilities and Domain Interfaces are also organized as distributed objects that can be layered over both Object Services and the ORB. The OMG creates specifications, not code, but the interfaces it standardizes are always derived from demonstrated technology submitted by member companies. The specified interfaces are written in a neutral Interface Definition Language (IDL) that defines contractual interfaces with potential clients. Interfaces written in IDL can be translated to a number of programming languages via OMG standard language mappings so that they can be used to develop components. The resulting components can transparently communicate with other components written in different languages and running on different operating systems and machine types. The ORB is responsible for providing the illusion of `virtual homogeneity' regardless of the programming languages, tools, operating systems and networks used to realize and support these components. With the adoption of the CORBA 2.0 specification in 1995, these components are able to interoperate across multi-vendor CORBA-based products. More than 700 member companies have joined the OMG, including Hewlett-Packard, Digital, Siemens, IONA Technologies, Netscape, Sun Microsystems, Microsoft and IBM, which makes it the largest standards body in existence. These companies continue to work together within the OMG to refine and enhance the OMA and its components. 
This special issue of Distributed Systems Engineering publishes five papers that were originally presented at the `Distributed Object-Based Platforms' track of the 30th Hawaii International Conference on System Sciences (HICSS), which was held in Wailea on Maui on 6 - 10 January 1997. The papers, which were selected based on their quality and the range of topics they cover, address different aspects of CORBA, including advanced aspects such as fault tolerance and transactions. These papers discuss the use of CORBA and evaluate CORBA-based development for different types of distributed object systems and architectures. The first paper, by S Rahkila and S Stenberg, discusses the application of CORBA to telecommunication management networks. In the second paper, P Narasimhan, L E Moser and P M Melliar-Smith present a fault-tolerant extension of an ORB. The third paper, by J Liang, S Sédillot and B Traverson, provides an overview of the CORBA Transaction Service and its integration with the ISO Distributed Transaction Processing protocol. In the fourth paper, D Sherer, T Murer and A Würtz discuss the evolution of a cooperative software engineering infrastructure to a CORBA-based framework. The fifth paper, by R Fatoohi, evaluates the communication performance of a commercially-available Object Request Broker (Orbix from IONA Technologies) on several networks, and compares the performance with that of more traditional communication primitives (e.g., BSD UNIX sockets and PVM). We wish to thank both the referees and the authors of these papers, as their cooperation was fundamental in ensuring timely publication.

  14. Optical linear algebra processors - Architectures and algorithms

    NASA Technical Reports Server (NTRS)

    Casasent, David

    1986-01-01

    Attention is given to the component design and optical configuration features of a generic optical linear algebra processor (OLAP) architecture, as well as the large number of OLAP architectures, number representations, algorithms and applications encountered in current literature. Number-representation issues associated with bipolar and complex-valued data representations, high-accuracy (including floating point) performance, and the base or radix to be employed, are discussed, together with case studies on a space-integrating frequency-multiplexed architecture and a hybrid space-integrating and time-integrating multichannel architecture.

  15. Human Factors Assessment of the UH-60M Common Avionics Architecture System (CAAS) Crew Station During the Limited User Evaluation (LEUE)

    DTIC Science & Technology

    2005-12-01

    weapon system evaluation as a high-level architecture and distributed interactive simulation compliant, human-in-the-loop, virtual environment...Directorate to participate in the Limited Early User Evaluation (LEUE) of the Common Avionics Architecture System (CAAS) cockpit. ARL conducted a human...CAAS, the UH-60M PO conducted a limited early user evaluation (LEUE) to evaluate the integration of the CAAS in the UH-60M crew station. The

  16. Software Architecture Evolution

    DTIC Science & Technology

    2013-12-01

    system’s major components occurring via a Java Message Service message bus [69]. This architecture was designed to promote loose coupling of software...play reconfiguration of the system. The components were Java-based and platform-independent; the interfaces by which they communicated were based on...The MPCS database, a MySQL database used for storing telemetry as well as some other information, such as logs and commanding data [68]. This

  17. Challenges in the Development and Evolution of Secure Open Architecture Command and Control Systems (Briefing Charts)

    DTIC Science & Technology

    2013-06-01

    widgets for an OA system Design-time architecture: Browser, email, widget, DB, OS Google Instance architecture: Chrome, Gmail, Google...provides functionally similar components or applications compatible with an OA system design Firefox Browser, WP, calendar Opera Instance...architecture: Firefox, AbiWord, Evolution, Fedora GPL AbiWord Google Docs Instance architecture: Firefox, OR Google cal., Google Docs, Fedora

  18. Architecture for Variable Data Entry into a National Registry.

    PubMed

    Goossen, William

    2017-01-01

    The Dutch perinatal registry required a new architecture due to the large variability of the submitted data from midwives and hospitals. The purpose of this article is to describe the healthcare information architecture for the Dutch perinatal registry. The approach consisted of requirements analysis, design, development and testing. The architecture is described in terms of its components, and preliminary test results are reported. Data entry and storage work well; the Data Marts are under preparation.

  19. Joint Polar Satellite System (JPSS) Common Ground System (CGS) Architecture Overview and Technical Performance Measures

    NASA Astrophysics Data System (ADS)

    Grant, K. D.; Johnson, B. R.; Miller, S. W.; Jamilkowski, M. L.

    2014-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). The Joint Polar Satellite System will replace the afternoon orbit component and ground processing system of the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA. The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological and geophysical observations of the Earth. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS). Developed and maintained by Raytheon Intelligence, Information and Services (IIS), the CGS is a multi-mission enterprise system serving NOAA, NASA and their national and international partners. The CGS provides a wide range of support to a number of missions. Originally designed to support S-NPP and JPSS, the CGS has demonstrated its scalability and flexibility to incorporate all of these other important missions efficiently and with minimal cost, schedule and risk, while strengthening global partnerships in weather and environmental monitoring. The CGS architecture will be upgraded to Block 2.0 in 2015 to satisfy several key objectives, including: "operationalizing" S-NPP, which had originally been intended as a risk reduction mission; leveraging lessons learned to date in multi-mission support; taking advantage of newer, more reliable and efficient technologies; and satisfying new requirements and constraints due to the continually evolving budgetary environment. To ensure the CGS meets these needs, we have developed 48 Technical Performance Measures (TPMs) across 9 categories: Data Availability, Data Latency, Operational Availability, Margin, Scalability, Situational Awareness, Transition (between environments and sites), WAN Efficiency, and Data Recovery Processing. This paper will provide an overview of the CGS Block 2.0 architecture, with particular focus on the 9 TPM categories listed above. We will describe how we ensure the deployed architecture meets these TPMs to satisfy our multi-mission objectives with the deployment of Block 2.0 in 2015.

  20. Architecture for Survivable System Processing (ASSP)

    NASA Astrophysics Data System (ADS)

    Wood, Richard J.

    1991-11-01

    The Architecture for Survivable System Processing (ASSP) Program is a multi-phase effort to implement Department of Defense (DOD) and commercially developed high-tech hardware, software, and architectures for reliable space avionics and ground-based systems. System configuration options provide processing capabilities to address Time Dependent Processing (TDP), Object Dependent Processing (ODP), and Mission Dependent Processing (MDP) requirements through Open System Architecture (OSA) alternatives that allow for the enhancement, incorporation, and capitalization of a broad range of development assets. High technology developments in hardware, software, and networking models address technology challenges of long processor lifetimes, fault tolerance, reliability, throughput, memories, radiation hardening, size, weight, power (SWAP) and security. Hardware and software design, development, and implementation focus on the interconnectivity/interoperability of an open system architecture and are being developed to apply new technology to practical OSA components. To ensure a widely acceptable architecture capable of interfacing with various commercial and military components, this program provides for regular interactions with standardization working groups, e.g., the International Standards Organization (ISO), the American National Standards Institute (ANSI), the Society of Automotive Engineers (SAE), and the Institute of Electrical and Electronics Engineers (IEEE). Selection of a viable open architecture is based on the widely accepted standards that implement the ISO/OSI Reference Model.

  1. Architecture for Survivable System Processing (ASSP)

    NASA Technical Reports Server (NTRS)

    Wood, Richard J.

    1991-01-01

    The Architecture for Survivable System Processing (ASSP) Program is a multi-phase effort to implement Department of Defense (DOD) and commercially developed high-tech hardware, software, and architectures for reliable space avionics and ground based systems. System configuration options provide processing capabilities to address Time Dependent Processing (TDP), Object Dependent Processing (ODP), and Mission Dependent Processing (MDP) requirements through Open System Architecture (OSA) alternatives that allow for the enhancement, incorporation, and capitalization of a broad range of development assets. High technology developments in hardware, software, and networking models address technology challenges of long processor lifetimes, fault tolerance, reliability, throughput, memories, radiation hardening, size, weight, power (SWAP) and security. Hardware and software design, development, and implementation focus on the interconnectivity/interoperability of an open system architecture and are being developed to apply new technology to practical OSA components. To ensure a widely acceptable architecture capable of interfacing with various commercial and military components, the program provides for regular interaction with standardization working groups, e.g., the International Organization for Standardization (ISO), the American National Standards Institute (ANSI), the Society of Automotive Engineers (SAE), and the Institute of Electrical and Electronics Engineers (IEEE). Selection of a viable open architecture is based on the widely accepted standards that implement the ISO/OSI Reference Model.

  2. Design integration for minimal energy and cost

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halldane, J.E.

    The authors present requirements for creating alternative energy-conserving designs, including energy management and architectural, plumbing, mechanical, electrical, electronic and optical design. Parameters of power, energy, life-cycle cost, and resource benefit for evaluation by the interested parties are discussed. They present an analysis of power systems through a seasonal power distribution diagram. An analysis of cost systems includes capital cost from the power components, annual costs from the utility energy use, and finance costs with loans, taxes, settlement and design fees. Equations are transposed to the evaluative parameter and are uniquely explicit, with consistent symbols, parameter definitions, dual and balanced units, unit conversions, criteria for operation, incorporated constants for rapid calculations, references to data in the handbook, other common terms, and instrumentation for the measurement. Each component equation has a key power diagram.

  3. A Core Plug and Play Architecture for Reusable Flight Software Systems

    NASA Technical Reports Server (NTRS)

    Wilmot, Jonathan

    2006-01-01

    The Flight Software Branch, at Goddard Space Flight Center (GSFC), has been working on a run-time approach to facilitate a formal software reuse process. The reuse process is designed to enable rapid development and integration of high-quality software systems and to more accurately predict development costs and schedule. Previous reuse practices have been somewhat successful when the same teams are moved from project to project, but this typically means taking the software system in an all-or-nothing fashion, where useful components cannot be easily extracted from the whole. As a result, the system is less flexible and scalable, with limited applicability to new projects. This paper will focus on the rationale behind, and implementation of, the run-time executive. This executive is the core for the component-based flight software commonality and reuse process adopted at Goddard.
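
    A minimal sketch, assuming nothing about the actual GSFC implementation, of the plug-and-play idea this abstract describes: a small run-time executive keeps a registry of component factories and instantiates, initializes, and drives only the components a given mission configuration asks for, so useful components can be reused without taking the whole system. All names here (Executive, Component, register, load) are hypothetical placeholders.

    ```python
    class Component:
        """Minimal component contract: initialize against the executive, then execute."""
        def initialize(self, executive):
            pass

        def execute(self):
            raise NotImplementedError


    class Executive:
        """Run-time executive: owns the component registry and drives loaded components."""
        def __init__(self):
            self._registry = {}   # component name -> factory
            self._running = []    # components instantiated for this configuration

        def register(self, name, factory):
            self._registry[name] = factory

        def load(self, name):
            component = self._registry[name]()   # instantiation happens at run time
            component.initialize(self)
            self._running.append(component)
            return component

        def run_once(self):
            for component in self._running:
                component.execute()


    class TelemetryComponent(Component):
        def execute(self):
            print("collecting telemetry")


    executive = Executive()
    executive.register("telemetry", TelemetryComponent)
    executive.load("telemetry")   # only the components this mission needs are loaded
    executive.run_once()
    ```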

  4. Achieving Sub-Second Search in the CMR

    NASA Astrophysics Data System (ADS)

    Gilman, J.; Baynes, K.; Pilone, D.; Mitchell, A. E.; Murphy, K. J.

    2014-12-01

    The Common Metadata Repository (CMR) is the next generation Earth Science Metadata catalog for NASA's Earth Observing data. It joins together the holdings from the EOS Clearing House (ECHO) and the Global Change Master Directory (GCMD), creating a unified, authoritative source for EOSDIS metadata. The CMR allows ingest in many different formats while providing consistent search behavior and retrieval in any supported format. Performance is a critical component of the CMR, ensuring improved data discovery and client interactivity. The CMR delivers sub-second search performance for any of the common query conditions (including spatial) across hundreds of millions of metadata granules. It also allows the addition of new metadata concepts such as visualizations, parameter metadata, and documentation. The CMR's goals presented many challenges. This talk will describe the CMR architecture, design, and innovations that were made to achieve its goals, including:
    * Architectural features like immutability and backpressure.
    * Data management techniques such as caching and parallel loading that give big performance gains.
    * Open Source and COTS tools like the Elasticsearch search engine.
    * Adoption of Clojure, a functional programming language for the Java Virtual Machine.
    * Development of a custom spatial search plugin for Elasticsearch and why it was necessary.
    * Introduction of a unified model for metadata that maps every supported metadata format to a consistent domain model.
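
    A minimal sketch, under invented field names, of the "unified model" idea in the last bullet: each supported input format gets its own translator into one common domain record, so search and retrieval code deals only with the common form. This is illustrative only and is not the CMR's actual metadata model or its Clojure implementation.

    ```python
    from dataclasses import dataclass

    @dataclass
    class CommonGranule:
        """Common domain record that every ingest format is mapped onto."""
        granule_id: str
        collection: str
        start_time: str
        end_time: str

    def from_format_a(record: dict) -> CommonGranule:
        # hypothetical nested layout
        return CommonGranule(
            granule_id=record["GranuleUR"],
            collection=record["Collection"]["ShortName"],
            start_time=record["Temporal"]["Begin"],
            end_time=record["Temporal"]["End"],
        )

    def from_format_b(record: dict) -> CommonGranule:
        # hypothetical flat JSON layout
        return CommonGranule(
            granule_id=record["id"],
            collection=record["dataset"],
            start_time=record["time_start"],
            end_time=record["time_end"],
        )

    TRANSLATORS = {"format-a": from_format_a, "format-b": from_format_b}

    def ingest(fmt: str, record: dict) -> CommonGranule:
        """Translate any supported format into the common record used for search."""
        return TRANSLATORS[fmt](record)
    ```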

  5. The perceptual control of goal-directed locomotion: a common control architecture for interception and navigation?

    PubMed

    Chardenon, A; Montagne, G; Laurent, M; Bootsma, R J

    2004-09-01

    Intercepting a moving object while locomoting is a highly complex and demanding ability. Notwithstanding the identification of several informational candidates, the role of perceptual variables in the control process underlying such skills remains an open question. In this study we used a virtual reality set-up for studying locomotor interception of a moving ball. The subject had to walk along a straight path and could freely modify forward velocity, if necessary, in order to intercept, with the head, a ball moving along a straight path that led it to cross the agent's displacement axis. In a series of experiments we manipulated a local (ball size) and a global (focus of expansion) component of the visual flow but also the egocentric orientation of the ball. The experimental observations are well captured by a dynamic model linking the locomotor acceleration to properties of both global flow and egocentric direction. More precisely, the changes in locomotor velocity depend on a linear combination of the change in bearing angle and the change in egocentric orientation, allowing the emergence of adaptive behavior under a variety of circumstances. We conclude that the mechanisms underlying the control of different goal-directed locomotion tasks (i.e. steering and interceptive tasks) could share a common architecture.
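
    The control law reported above lends itself to a compact numerical illustration: walking speed is adjusted in proportion to a linear combination of the rate of change of the bearing angle and the rate of change of the ball's egocentric direction. The sketch below assumes arbitrary gains and a simple Euler integration; it is not the authors' fitted model.

    ```python
    def locomotor_acceleration(d_bearing_dt, d_egocentric_dt, k1=1.0, k2=1.0):
        """Acceleration as a linear combination of the two perceptual rates (gains assumed)."""
        return k1 * d_bearing_dt + k2 * d_egocentric_dt

    def simulate(velocity, bearing_rates, egocentric_rates, dt=0.01):
        """Integrate walking velocity over sampled rates of change."""
        trace = [velocity]
        for db, de in zip(bearing_rates, egocentric_rates):
            velocity += locomotor_acceleration(db, de) * dt
            trace.append(velocity)
        return trace

    # A ball drifting ahead of the interception point (positive rates) produces a
    # speed-up; once both rates are zero the walking speed stops changing.
    print(simulate(1.2, [0.3, 0.3, 0.0], [0.1, 0.0, 0.0]))
    ```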

  6. Architecture of a general purpose embedded Slow-Control Adapter ASIC for future high-energy physics experiments

    NASA Astrophysics Data System (ADS)

    Gabrielli, Alessandro; Loddo, Flavio; Ranieri, Antonio; De Robertis, Giuseppe

    2008-10-01

    This work is aimed at defining the architecture of a new digital ASIC, namely the Slow-Control Adapter (SCA), which will be designed in a commercial 130-nm CMOS technology. This chip will be embedded within a high-speed data acquisition optical link (GBT) to control and monitor the front-end electronics in future high-energy physics experiments. The GBT link provides a transparent transport layer between the SCA and control electronics in the counting room. The proposed SCA supports a variety of common bus protocols to interface with end-user general-purpose electronics. Between the GBT and the SCA a standard 100 Mb/s IEEE-802.3 compatible protocol will be implemented. This standard protocol allows off-line tests of the prototypes using commercial components that support the same standard. The project is justified because embedded applications in modern large HEP experiments require particular care to assure the lowest possible power consumption, while still offering the highest reliability demanded by very large particle detectors.

  7. Ontology-Based Architecture for Intelligent Transportation Systems Using a Traffic Sensor Network.

    PubMed

    Fernandez, Susel; Hadfi, Rafik; Ito, Takayuki; Marsa-Maestre, Ivan; Velasco, Juan R

    2016-08-15

    Intelligent transportation systems are a set of technological solutions used to improve the performance and safety of road transportation. A crucial element for the success of these systems is the exchange of information, not only between vehicles, but also among other components in the road infrastructure through different applications. One of the most important sources of information in these systems is sensors. Sensors can be within vehicles or part of the infrastructure, such as bridges, roads or traffic signs. Sensors can provide information related to weather conditions and the traffic situation, which is useful to improve the driving process. To facilitate the exchange of information between the different applications that use sensor data, a common framework of knowledge is needed to allow interoperability. In this paper an ontology-driven architecture to improve the driving environment through a traffic sensor network is proposed. The system performs different tasks automatically to increase driver safety and comfort using the information provided by the sensors.

  8. Recent advances in integrated photonic sensors.

    PubMed

    Passaro, Vittorio M N; de Tullio, Corrado; Troia, Benedetto; La Notte, Mario; Giannoccaro, Giovanni; De Leonardis, Francesco

    2012-11-09

    Nowadays, optical devices and circuits are becoming fundamental components in several application fields, such as medicine, biotechnology, automotive, aerospace, food quality control and chemistry, to name a few. In this context, we present a comprehensive review of integrated photonic sensors, with specific attention to materials, technologies, architectures and optical sensing principles. To this aim, sensing principles commonly used in optical detection are presented, focusing on sensor performance features such as sensitivity, selectivity and rangeability. Since photonic sensors provide substantial benefits regarding compatibility with CMOS technology and integration on chips characterized by micrometric footprints, design and optimization strategies of photonic devices are widely discussed for sensing applications. In addition, several numerical methods employed in photonic circuit and device simulation and design are presented, focusing on their advantages and drawbacks. Finally, recent developments in the field of photonic sensing are reviewed, considering advanced photonic sensor architectures based on linear and non-linear optical effects, to be employed in chemical/biochemical sensing and in angular velocity and electric field detection.

  9. Recent Advances in Integrated Photonic Sensors

    PubMed Central

    Passaro, Vittorio M. N.; de Tullio, Corrado; Troia, Benedetto; La Notte, Mario; Giannoccaro, Giovanni; De Leonardis, Francesco

    2012-01-01

    Nowadays, optical devices and circuits are becoming fundamental components in several application fields, such as medicine, biotechnology, automotive, aerospace, food quality control and chemistry, to name a few. In this context, we present a comprehensive review of integrated photonic sensors, with specific attention to materials, technologies, architectures and optical sensing principles. To this aim, sensing principles commonly used in optical detection are presented, focusing on sensor performance features such as sensitivity, selectivity and rangeability. Since photonic sensors provide substantial benefits regarding compatibility with CMOS technology and integration on chips characterized by micrometric footprints, design and optimization strategies of photonic devices are widely discussed for sensing applications. In addition, several numerical methods employed in photonic circuit and device simulation and design are presented, focusing on their advantages and drawbacks. Finally, recent developments in the field of photonic sensing are reviewed, considering advanced photonic sensor architectures based on linear and non-linear optical effects, to be employed in chemical/biochemical sensing and in angular velocity and electric field detection. PMID:23202223

  10. Ontology-Based Architecture for Intelligent Transportation Systems Using a Traffic Sensor Network

    PubMed Central

    Fernandez, Susel; Hadfi, Rafik; Ito, Takayuki; Marsa-Maestre, Ivan; Velasco, Juan R.

    2016-01-01

    Intelligent transportation systems are a set of technological solutions used to improve the performance and safety of road transportation. A crucial element for the success of these systems is the exchange of information, not only between vehicles, but also among other components in the road infrastructure through different applications. One of the most important sources of information in these systems is sensors. Sensors can be within vehicles or part of the infrastructure, such as bridges, roads or traffic signs. Sensors can provide information related to weather conditions and the traffic situation, which is useful to improve the driving process. To facilitate the exchange of information between the different applications that use sensor data, a common framework of knowledge is needed to allow interoperability. In this paper an ontology-driven architecture to improve the driving environment through a traffic sensor network is proposed. The system performs different tasks automatically to increase driver safety and comfort using the information provided by the sensors. PMID:27537878

  11. Towards a distributed information architecture for avionics data

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris; Freeborn, Dana; Crichton, Dan

    2003-01-01

    Avionics data at the National Aeronautics and Space Administration's (NASA) Jet Propulsion Laboratory (JPL) consists of distributed, unmanaged, and heterogeneous information that is hard for flight system design engineers to find and use on new NASA/JPL missions. The development of a systematic approach for capturing, accessing and sharing avionics data critical to the support of NASA/JPL missions and projects is required. We propose a general information architecture for managing the existing distributed avionics data sources and a method for querying and retrieving avionics data using the Object Oriented Data Technology (OODT) framework. OODT uses an XML messaging infrastructure that profiles data products and their locations using the ISO-11179 data model for describing data products. Queries against a common data dictionary (which implements the ISO model) are translated to domain-dependent source data models, and distributed data products are returned asynchronously through the OODT middleware. Further work will include the ability to 'plug and play' new manufacturer data sources, which are distributed at avionics component manufacturer locations throughout the United States.
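
    The query-translation step described above can be pictured with a small sketch: a query phrased against the common data dictionary is rewritten into each source's own field names before being dispatched, and the middleware merges whatever comes back. Dictionary keys, field names, and sources below are invented for illustration; this is not the OODT API.

    ```python
    # Mapping from common-dictionary terms to each source's native field names (assumed).
    COMMON_TO_SOURCE = {
        "vendor-a": {"part.mass": "MassKg", "part.rad_tolerance": "RadTolKrad"},
        "vendor-b": {"part.mass": "mass_g", "part.rad_tolerance": "tid_krad"},
    }

    def translate(common_query: dict, source: str) -> dict:
        """Rewrite a common-dictionary query into one source's domain-dependent model."""
        mapping = COMMON_TO_SOURCE[source]
        return {mapping[key]: value for key, value in common_query.items() if key in mapping}

    query = {"part.mass": "<0.5", "part.rad_tolerance": ">100"}
    for source in COMMON_TO_SOURCE:
        print(source, translate(query, source))   # each source sees only its own field names
    ```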

  12. Extracellular Matrix Remodeling: The Common Denominator in Connective Tissue Diseases - Possibilities for Evaluation and Current Understanding of the Matrix as More Than a Passive Architecture, but a Key Player in Tissue Failure

    PubMed Central

    Nielsen, Mette J.; Sand, Jannie M.; Henriksen, Kim; Genovese, Federica; Bay-Jensen, Anne-Christine; Smith, Victoria; Adamkewicz, Joanne I.; Christiansen, Claus; Leeming, Diana J.

    2013-01-01

    Increased attention is paid to the structural components of tissues. These components are mostly collagens and various proteoglycans. Emerging evidence suggests that altered components and noncoded modifications of the matrix may be both initiators and drivers of disease, exemplified by excessive tissue remodeling leading to tissue stiffness, as well as by changes in the signaling potential of both intact matrix and fragments thereof. Although tissue structure until recently was viewed as a simple architecture anchoring cells and proteins, this complex grid may contain essential information enabling the maintenance of the structure and normal functioning of tissue. The aims of this review are to (1) discuss the structural components of the matrix and the relevance of their mutations to the pathology of diseases such as fibrosis and cancer, (2) introduce the possibility that post-translational modifications (PTMs), such as protease cleavage, citrullination, cross-linking, nitrosylation, glycosylation, and isomerization, generated during pathology, may be unique, disease-specific biochemical markers, (3) list and review the range of simple enzyme-linked immunosorbent assays (ELISAs) that have been developed for assessing the extracellular matrix (ECM) and detecting abnormal ECM remodeling, and (4) discuss whether some PTMs are the cause or consequence of disease. New evidence clearly suggests that the ECM at some point in the pathogenesis becomes a driver of disease. These pathologically modified ECM proteins may allow insights into complicated pathologies in which the end stage is excessive tissue remodeling, and provide unique and more pathology-specific biochemical markers. PMID:23046407

  13. DICCCOL: Dense Individualized and Common Connectivity-Based Cortical Landmarks

    PubMed Central

    Zhu, Dajiang; Guo, Lei; Jiang, Xi; Zhang, Tuo; Zhang, Degang; Chen, Hanbo; Deng, Fan; Faraco, Carlos; Jin, Changfeng; Wee, Chong-Yaw; Yuan, Yixuan; Lv, Peili; Yin, Yan; Hu, Xiaolei; Duan, Lian; Hu, Xintao; Han, Junwei; Wang, Lihong; Shen, Dinggang; Miller, L Stephen

    2013-01-01

    Is there a common structural and functional cortical architecture that can be quantitatively encoded and precisely reproduced across individuals and populations? This question is still largely unanswered due to the vast complexity, variability, and nonlinearity of the cerebral cortex. Here, we hypothesize that the common cortical architecture can be effectively represented by group-wise consistent structural fiber connections and take a novel data-driven approach to explore the cortical architecture. We report a dense and consistent map of 358 cortical landmarks, named Dense Individualized and Common Connectivity–based Cortical Landmarks (DICCCOLs). Each DICCCOL is defined by group-wise consistent white-matter fiber connection patterns derived from diffusion tensor imaging (DTI) data. Our results have shown that these 358 landmarks are remarkably reproducible over more than one hundred human brains and possess accurate intrinsically established structural and functional cross-subject correspondences validated by large-scale functional magnetic resonance imaging data. In particular, these 358 cortical landmarks can be accurately and efficiently predicted in a new single brain with DTI data. Thus, this set of 358 DICCCOL landmarks comprehensively encodes the common structural and functional cortical architectures, providing opportunities for many applications in brain science including mapping human brain connectomes, as demonstrated in this work. PMID:22490548

  14. Supporting Undergraduate Computer Architecture Students Using a Visual MIPS64 CPU Simulator

    ERIC Educational Resources Information Center

    Patti, D.; Spadaccini, A.; Palesi, M.; Fazzino, F.; Catania, V.

    2012-01-01

    The topics of computer architecture are always taught using an Assembly dialect as an example. The most commonly used textbooks in this field use the MIPS64 Instruction Set Architecture (ISA) to help students in learning the fundamentals of computer architecture because of its orthogonality and its suitability for real-world applications. This…

  15. Scaling Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin

    2016-01-01

    For long-duration space missions outside of Earth orbit, reliability considerations will drive higher levels of redundancy and/or on-board spares for life support equipment. Component scaling will be a critical element in minimizing overall launch mass while maintaining an acceptable level of system reliability. Building on an earlier reliability study (AIAA 2012-3491), this paper considers the impact of alternative scaling approaches, including the design of technology assemblies and their individual components to maximum, nominal, survival, or other fractional requirements. The optimal level of life support system closure is evaluated for deep-space missions of varying duration using equivalent system mass (ESM) as the comparative basis. Reliability impacts are included in ESM by estimating the number of component spares required to meet a target system reliability. Common cause failures are included in the analysis. ISS and ISS-derived life support technologies are considered along with selected alternatives. This study focusses on minimizing launch mass, which may be enabling for deep-space missions.
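
    The sparing logic referred to above can be illustrated with a back-of-the-envelope calculation: given a component failure rate and mission duration, find the smallest number of on-board spares such that the probability of exhausting them stays below a target. A constant-rate (Poisson) failure model and the example numbers are assumptions made purely for illustration; the paper's ESM-based analysis may use different models and inputs.

    ```python
    import math

    def spares_needed(failure_rate_per_hr, mission_hours, target_reliability):
        """Smallest spare count whose Poisson exhaustion probability meets the target."""
        lam = failure_rate_per_hr * mission_hours      # expected number of failures
        spares, cumulative = 0, math.exp(-lam)         # P(no failures)
        while cumulative < target_reliability:
            spares += 1
            cumulative += math.exp(-lam) * lam**spares / math.factorial(spares)
        return spares

    # e.g. a unit with a 1e-4 per-hour failure rate on a 10,000-hour mission
    print(spares_needed(1e-4, 10_000, 0.99))   # number of spares to launch
    ```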

  16. Processing and Structural Advantages of the Sylramic-iBN SiC Fiber for SiC/SiC Components

    NASA Technical Reports Server (NTRS)

    Yun, H. M.; Dicarlo, J. A.; Bhatt, R. T.; Hurst, J. B.

    2008-01-01

    The successful high-temperature application of complex-shaped SiC/SiC components will depend on achieving as high a fraction of the as-produced fiber strength as possible during component fabrication and service. Key issues center on a variety of component architecture, processing, and service-related factors that can reduce fiber strength, such as fiber-fiber abrasion during architecture shaping, surface chemical attack during interphase deposition and service, and intrinsic flaw growth during high-temperature matrix formation and composite creep. The objective of this paper is to show that the NASA-developed Sylramic-iBN SiC fiber minimizes many of these issues for state-of-the-art melt-infiltrated (MI) SiC/BN/SiC composites. To accomplish this, data from various mechanical tests are presented that compare how different high performance SiC fiber types retain strength during formation of complex architectures, during processing of BN interphases and MI matrices, and during simulated composite service at high temperatures.

  17. Open architecture design and approach for the Integrated Sensor Architecture (ISA)

    NASA Astrophysics Data System (ADS)

    Moulton, Christine L.; Krzywicki, Alan T.; Hepp, Jared J.; Harrell, John; Kogut, Michael

    2015-05-01

    Integrated Sensor Architecture (ISA) is designed in response to stovepiped integration approaches. The design, based on the principles of Service Oriented Architectures (SOA) and Open Architectures, addresses the problem of integration, and is not designed for specific sensors or systems. The use of SOA and Open Architecture approaches has led to a flexible, extensible architecture. Using these approaches, and supported with common data formats, open protocol specifications, and Department of Defense Architecture Framework (DoDAF) system architecture documents, an integration-focused architecture has been developed. ISA can help move the Department of Defense (DoD) from costly stovepipe solutions to a more cost-effective plug-and-play design to support interoperability.

  18. The NBS-LRR architectures of plant R-proteins and metazoan NLRs evolved in independent events

    PubMed Central

    Urbach, Jonathan M.; Ausubel, Frederick M.

    2017-01-01

    There are intriguing parallels between plants and animals, with respect to the structures of their innate immune receptors, that suggest universal principles of innate immunity. The cytosolic nucleotide binding site–leucine rich repeat (NBS-LRR) resistance proteins of plants (R-proteins) and the so-called NOD-like receptors of animals (NLRs) share a domain architecture that includes a STAND (signal transduction ATPases with numerous domains) family NTPase followed by a series of LRRs, suggesting inheritance from a common ancestor with that architecture. Focusing on the STAND NTPases of plant R-proteins, animal NLRs, and their homologs that represent the NB-ARC (nucleotide-binding adaptor shared by APAF-1, certain R gene products and CED-4) and NACHT (named for NAIP, CIIA, HET-E, and TEP1) subfamilies of the STAND NTPases, we analyzed the phylogenetic distribution of the NBS-LRR domain architecture, used maximum-likelihood methods to infer a phylogeny of the NTPase domains of R-proteins, and reconstructed the domain structure of the protein containing the common ancestor of the STAND NTPase domain of R-proteins and NLRs. Our analyses reject monophyly of plant R-proteins and NLRs and suggest that the protein containing the last common ancestor of the STAND NTPases of plant R-proteins and animal NLRs (and, by extension, all NB-ARC and NACHT domains) possessed a domain structure that included a STAND NTPase paired with a series of tetratricopeptide repeats. These analyses reject the hypothesis that the domain architecture of R-proteins and NLRs was inherited from a common ancestor and instead suggest the domain architecture evolved at least twice. It remains unclear whether the NBS-LRR architectures were innovations of plants and animals themselves or were acquired by one or both lineages through horizontal gene transfer. PMID:28096345

  19. CDC WONDER: a cooperative processing architecture for public health.

    PubMed Central

    Friede, A; Rosen, D H; Reid, J A

    1994-01-01

    CDC WONDER is an information management architecture designed for public health. It provides access to information and communications without the user's needing to know the location of data or communication pathways and mechanisms. CDC WONDER users have access to extractions from some 40 databases; electronic mail (e-mail); and surveillance data processing. System components include the Remote Client, the Communications Server, the Queue Managers, and Data Servers and Process Servers. The Remote Client software resides in the user's machine; other components are at the Centers for Disease Control and Prevention (CDC). The Remote Client, the Communications Server, and the Applications Server provide access to the information and functions in the Data Servers and Process Servers. The system architecture is based on cooperative processing, and components are coupled via pure message passing, using several protocols. This architecture allows flexibility in the choice of hardware and software. One system limitation is that final results from some subsystems are obtained slowly. Although designed for public health, CDC WONDER could be useful for other disciplines that need flexible, integrated information exchange. PMID:7719813

  20. The South African Astronomical Observatory instrumentation software architecture and the SHOC instruments

    NASA Astrophysics Data System (ADS)

    van Gend, Carel; Lombaard, Briehan; Sickafoose, Amanda; Whittal, Hamish

    2016-07-01

    Until recently, software for instruments on the smaller telescopes at the South African Astronomical Observatory (SAAO) has not been designed for remote accessibility and frequently has not been developed using modern software best-practice. We describe a software architecture we have implemented for use with new and upgraded instruments at the SAAO. The architecture was designed to allow for multiple components and to be fast, reliable, remotely operable, support different user interfaces, employ as much non-proprietary software as possible, and to take future-proofing into consideration. Individual component drivers exist as standalone processes, communicating over a network. A controller layer coordinates the various components, and allows a variety of user interfaces to be used. The Sutherland High-speed Optical Cameras (SHOC) instruments incorporate an Andor electron-multiplying CCD camera, a GPS unit for accurate timing and a pair of filter wheels. We have applied the new architecture to the SHOC instruments, with the camera driver developed using Andor's software development kit. We have used this to develop an innovative web-based user interface to the instrument.

  1. A Proposed Information Architecture for Telehealth System Interoperability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craft, R.L.; Funkhouser, D.R.; Gallagher, L.K.

    1999-04-20

    We propose an object-oriented information architecture for telemedicine systems that promotes secure 'plug-and-play' interaction between system components through standardized interfaces, communication protocols, messaging formats, and data definitions. In this architecture, each component functions as a black box, and components plug together in a 'lego-like' fashion to achieve the desired device or system functionality. Telemedicine systems today rely increasingly on distributed, collaborative information technology during the care delivery process. While these leading-edge systems are bellwethers for highly advanced telemedicine, most are custom-designed and do not interoperate with other commercial offerings. Users are limited to the set of functionality that a single vendor provides and must often pay high prices to obtain this functionality, since vendors in this marketplace must deliver entire systems in order to compete. Besides increasing corporate research and development costs, this inhibits the ability of the user to make intelligent purchasing decisions regarding best-of-breed technologies. This paper proposes a reference architecture for plug-and-play telemedicine systems that addresses these issues.

  2. The software architecture of the camera for the ASTRI SST-2M prototype for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Sangiorgi, Pierluca; Capalbi, Milvia; Gimenes, Renato; La Rosa, Giovanni; Russo, Francesco; Segreto, Alberto; Sottile, Giuseppe; Catalano, Osvaldo

    2016-07-01

    The purpose of this contribution is to present the current status of the software architecture of the ASTRI SST-2M Cherenkov Camera. The ASTRI SST-2M telescope is an end-to-end prototype for the Small Size Telescope of the Cherenkov Telescope Array. The ASTRI camera is an innovative instrument based on SiPM detectors and has several internal hardware components. In this contribution we will give a brief description of the hardware components of the camera of the ASTRI SST-2M prototype and of their interconnections. Then we will present the outcome of the software architectural design process that we carried out in order to identify the main structural components of the camera software system and the relationships among them. We will analyze the architectural model that describes how the camera software is organized as a set of communicating blocks. Finally, we will show where these blocks are deployed in the hardware components and how they interact. We will describe in some detail the physical communication ports and external ancillary devices management, the high precision time-tag management, the fast data collection and the fast data exchange between different camera subsystems, and the interfacing with the external systems.

  3. A proposed clinical decision support architecture capable of supporting whole genome sequence information.

    PubMed

    Welch, Brandon M; Loya, Salvador Rodriguez; Eilbeck, Karen; Kawamoto, Kensaku

    2014-04-04

    Whole genome sequence (WGS) information may soon be widely available to help clinicians personalize the care and treatment of patients. However, considerable barriers exist, which may hinder the effective utilization of WGS information in a routine clinical care setting. Clinical decision support (CDS) offers a potential solution to overcome such barriers and to facilitate the effective use of WGS information in the clinic. However, genomic information is complex and will require significant considerations when developing CDS capabilities. As such, this manuscript lays out a conceptual framework for a CDS architecture designed to deliver WGS-guided CDS within the clinical workflow. To handle the complexity and breadth of WGS information, the proposed CDS framework leverages service-oriented capabilities and orchestrates the interaction of several independently-managed components. These independently-managed components include the genome variant knowledge base, the genome database, the CDS knowledge base, a CDS controller and the electronic health record (EHR). A key design feature is that genome data can be stored separately from the EHR. This paper describes in detail: (1) each component of the architecture; (2) the interaction of the components; and (3) how the architecture attempts to overcome the challenges associated with WGS information. We believe that service-oriented CDS capabilities will be essential to using WGS information for personalized medicine.
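
    A schematic sketch of the orchestration the abstract describes, with the genome data held outside the EHR: the CDS controller pulls a patient's variants from the genome database, interprets them through the variant knowledge base, applies CDS rules, and returns advice to the EHR workflow. All class names, variants, and rules are hypothetical placeholders, not the authors' implementation.

    ```python
    class GenomeDatabase:                      # stored separately from the EHR
        def variants_for(self, patient_id):
            return ["CYP2C19*2"]               # placeholder variant call

    class VariantKnowledgeBase:
        def interpret(self, variant):
            return {"CYP2C19*2": "poor metabolizer"}.get(variant)

    class CDSKnowledgeBase:
        def recommend(self, interpretation, order):
            if interpretation == "poor metabolizer" and order == "clopidogrel":
                return "Consider alternative antiplatelet therapy"
            return None

    class CDSController:
        """Orchestrates the independently managed services and reports back to the EHR."""
        def __init__(self, genome_db, variant_kb, cds_kb):
            self.genome_db, self.variant_kb, self.cds_kb = genome_db, variant_kb, cds_kb

        def advise(self, patient_id, order):
            alerts = []
            for variant in self.genome_db.variants_for(patient_id):
                interpretation = self.variant_kb.interpret(variant)
                advice = self.cds_kb.recommend(interpretation, order)
                if advice:
                    alerts.append(advice)
            return alerts                      # surfaced in the EHR at order entry

    controller = CDSController(GenomeDatabase(), VariantKnowledgeBase(), CDSKnowledgeBase())
    print(controller.advise("patient-001", "clopidogrel"))
    ```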

  4. A Proposed Clinical Decision Support Architecture Capable of Supporting Whole Genome Sequence Information

    PubMed Central

    Welch, Brandon M.; Rodriguez Loya, Salvador; Eilbeck, Karen; Kawamoto, Kensaku

    2014-01-01

    Whole genome sequence (WGS) information may soon be widely available to help clinicians personalize the care and treatment of patients. However, considerable barriers exist, which may hinder the effective utilization of WGS information in a routine clinical care setting. Clinical decision support (CDS) offers a potential solution to overcome such barriers and to facilitate the effective use of WGS information in the clinic. However, genomic information is complex and will require significant considerations when developing CDS capabilities. As such, this manuscript lays out a conceptual framework for a CDS architecture designed to deliver WGS-guided CDS within the clinical workflow. To handle the complexity and breadth of WGS information, the proposed CDS framework leverages service-oriented capabilities and orchestrates the interaction of several independently-managed components. These independently-managed components include the genome variant knowledge base, the genome database, the CDS knowledge base, a CDS controller and the electronic health record (EHR). A key design feature is that genome data can be stored separately from the EHR. This paper describes in detail: (1) each component of the architecture; (2) the interaction of the components; and (3) how the architecture attempts to overcome the challenges associated with WGS information. We believe that service-oriented CDS capabilities will be essential to using WGS information for personalized medicine. PMID:25411644

  5. An Electro-Optical Image Algebra Processing System for Automatic Target Recognition

    NASA Astrophysics Data System (ADS)

    Coffield, Patrick Cyrus

    The proposed electro-optical image algebra processing system is designed specifically for image processing and other related computations. The design is a hybridization of an optical correlator and a massively parallel, single-instruction multiple-data processor. The architecture of the design consists of three tightly coupled components: a spatial configuration processor (the optical analog portion), a weighting processor (digital), and an accumulation processor (digital). The systolic flow of data and image processing operations is directed by a control buffer and pipelined to each of the three processing components. The image processing operations are defined in terms of basic operations of an image algebra developed by the University of Florida. The algebra is capable of describing all common image-to-image transformations. The merit of this architectural design is how it implements the natural decomposition of algebraic functions into spatially distributed, point-use operations. The effect of this particular decomposition allows convolution-type operations to be computed strictly as a function of the number of elements in the template (mask, filter, etc.) instead of the number of picture elements in the image. Thus, a substantial increase in throughput is realized. The implementation of the proposed design may be accomplished in many ways. While a hybrid electro-optical implementation is of primary interest, the benefits and design issues of an all-digital implementation are also discussed. The potential utility of this architectural design lies in its ability to control a large variety of the arithmetic and logic operations of the image algebra's generalized matrix product. The generalized matrix product is the most powerful fundamental operation in the algebra, thus allowing a wide range of applications. No other known device or design has made this claim of processing speed and general implementation of a heterogeneous image algebra.
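
    The throughput claim above rests on a decomposition that is easy to show in code: a convolution-type operation can be written as a sum over template elements of shifted, weighted copies of the whole image. If the shifting and weighting of a full image happen in parallel (optically or via SIMD), the serial step count grows with the number of template elements rather than the number of pixels. Plain NumPy stands in for both stages below, with boundary handling simplified to wrap-around; this illustrates the decomposition only, not the proposed hardware.

    ```python
    import numpy as np

    def convolve_by_template_elements(image, template):
        """Accumulate one shifted, weighted copy of the image per template element."""
        rows, cols = template.shape
        result = np.zeros_like(image, dtype=float)
        for dr in range(rows):                 # serial cost: rows * cols passes
            for dc in range(cols):
                shifted = np.roll(image, shift=(dr - rows // 2, dc - cols // 2), axis=(0, 1))
                result += template[dr, dc] * shifted   # weighting + accumulation stages
        return result

    image = np.random.rand(256, 256)
    template = np.ones((3, 3)) / 9.0           # simple smoothing mask
    smoothed = convolve_by_template_elements(image, template)   # nine serial passes
    ```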

  6. EarthCube as an information resource marketplace; the GEAR Project conceptual design

    NASA Astrophysics Data System (ADS)

    Richard, S. M.; Zaslavsky, I.; Gupta, A.; Valentine, D.

    2015-12-01

    Geoscience Architecture for Research (GEAR) is approaching EarthCube design as a complex and evolving socio-technical federation of systems. EarthCube is intended to support the science research enterprise, for which there is no centralized command and control, requirements are a moving target, the function and behavior of the system must evolve and adapt as new scientific paradigms emerge, and system participants are conducting research that inherently implies seeking new ways of doing things. EarthCube must address evolving user requirements and enable domain and project systems developed under different management and for different purposes to work together. The EC architecture must focus on creating a technical environment that enables new capabilities by combining existing and newly developed resources in various ways, and encourages development of new resource designs intended for re-use and interoperability. In a sense, instead of a single architecture design, GEAR provides a way to accommodate multiple designs tuned to different tasks. This agile, adaptive, evolutionary software development style is based on a continuously updated portfolio of compatible components that enable new sub-system architecture. System users make decisions about which components to use in this marketplace based on performance, satisfaction, and impact metrics collected continuously to evaluate components, determine priorities, and guide resource allocation decisions by the system governance agency. EC is designed as a federation of independent systems, and although the coordinator of the EC system may be named an enterprise architect, the focus of the role needs to be organizing resources, assessing their readiness for interoperability with the existing EC component inventory, managing dependencies between transient subsystems, mechanisms of stakeholder engagement and inclusion, and negotiation of standard interfaces, rather than actual specification of components. Composition of components will be developed by projects that involve both domain scientists and CI experts for specific research problems. We believe an agile, marketplace type approach is an essential architectural strategy for EarthCube.

  7. Reliability and Productivity Modeling for the Optimization of Separated Spacecraft Interferometers

    NASA Technical Reports Server (NTRS)

    Kenny, Sean (Technical Monitor); Wertz, Julie

    2002-01-01

    As technological systems grow in capability, they also grow in complexity. Due to this complexity, it is no longer possible for a designer to use engineering judgement to identify the components that have the largest impact on system life cycle metrics, such as reliability, productivity, cost, and cost effectiveness. One way of identifying these key components is to build quantitative models and analysis tools that can be used to aid the designer in making high level architecture decisions. Once these key components have been identified, two main approaches to improving a system using these components exist: add redundancy or improve the reliability of the component. In reality, the most effective approach to almost any system will be some combination of these two approaches, in varying orders of magnitude for each component. Therefore, this research tries to answer the question of how to divide funds, between adding redundancy and improving the reliability of components, to most cost effectively improve the life cycle metrics of a system. While this question is relevant to any complex system, this research focuses on one type of system in particular: Separated Spacecraft Interferometers (SSI). Quantitative models are developed to analyze the key life cycle metrics of different SSI system architectures. Next, tools are developed to compare a given set of architectures in terms of total performance, by coupling different life cycle metrics together into one performance metric. Optimization tools, such as simulated annealing and genetic algorithms, are then used to search the entire design space to find the "optimal" architecture design. Sensitivity analysis tools have been developed to determine how sensitive the results of these analyses are to uncertain user-defined parameters. Finally, several possibilities for future work in this area of research are presented.
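
    The redundancy-versus-reliability trade at the heart of this work can be seen in a toy example: for a simple series system, compare duplicating the weakest component against spending the same budget to raise its intrinsic reliability. The component reliabilities and the assumed improvement below are invented numbers; the thesis' models couple these choices to cost, productivity, and the other life cycle metrics.

    ```python
    def series_reliability(component_reliabilities):
        """All components must work for a series system to work."""
        r = 1.0
        for ri in component_reliabilities:
            r *= ri
        return r

    def with_redundancy(r):
        """Two parallel units of reliability r; either one suffices."""
        return 1.0 - (1.0 - r) ** 2

    baseline = [0.95, 0.90, 0.99]              # three components in series
    print("baseline:   ", series_reliability(baseline))

    # Option A: duplicate the weakest component.
    option_a = [baseline[0], with_redundancy(baseline[1]), baseline[2]]
    # Option B: spend the same (assumed) budget improving that component to 0.97.
    option_b = [baseline[0], 0.97, baseline[2]]

    print("redundancy: ", series_reliability(option_a))
    print("improvement:", series_reliability(option_b))
    ```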

  8. ITS system specification

    DOT National Transportation Integrated Search

    1997-01-21

    The objective of the Polaris Project is to define an Intelligent Transportation Systems (ITS) architecture for the state of Minnesota. An architecture is a framework that defines how multiple ITS Components interrelate and contribute to the overall I...

  9. SME2EM: Smart mobile end-to-end monitoring architecture for life-long diseases.

    PubMed

    Serhani, Mohamed Adel; Menshawy, Mohamed El; Benharref, Abdelghani

    2016-01-01

    Monitoring life-long diseases requires continuous measurements and recording of physical vital signs. Most of these diseases are manifested through unexpected and non-uniform occurrences and behaviors. It is impractical to keep patients in hospitals, health-care institutions, or even at home for long periods of time. Monitoring solutions based on smartphones combined with mobile sensors and wireless communication technologies are a potential candidate to support complete mobility-freedom, not only for patients, but also for physicians. However, existing monitoring architectures based on smartphones and modern communication technologies are not suitable to address some challenging issues, such as intensive and big data, resource constraints, data integration, and context awareness in an integrated framework. This manuscript provides a novel mobile-based end-to-end architecture for live monitoring and visualization of life-long diseases. The proposed architecture provides smartness features to cope with continuous monitoring, data explosion, dynamic adaptation, unlimited mobility, and constrained devices resources. The integration of the architecture's components provides information about diseases' recurrences as soon as they occur to expedite taking necessary actions, and thus prevent severe consequences. Our architecture system is formally model-checked to automatically verify its correctness against designers' desirable properties at design time. Its components are fully implemented as Web services with respect to the SOA architecture to be easy to deploy and integrate, and supported by Cloud infrastructure and services to allow high scalability, availability of processes and data being stored and exchanged. The architecture's applicability is evaluated through concrete experimental scenarios on monitoring and visualizing states of epileptic diseases. The obtained theoretical and experimental results are very promising and efficiently satisfy the proposed architecture's objectives, including resource awareness, smart data integration and visualization, cost reduction, and performance guarantee. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Hybridization of Architectural Styles for Integrated Enterprise Information Systems

    NASA Astrophysics Data System (ADS)

    Bagusyte, Lina; Lupeikiene, Audrone

    Current enterprise systems engineering theory does not provide adequate support for the development of information systems on demand; more precisely, such a theory is still taking shape. This chapter proposes the main architectural decisions that underlie the design of integrated enterprise information systems. It argues for extending service-oriented architecture by merging it with the component-based paradigm at the design stage and using connectors of different architectural styles. The suitability of the general-purpose modeling language SysML for modeling integrated enterprise information system architectures is described, and arguments in its favour are presented.

  11. Software architecture of INO340 telescope control system

    NASA Astrophysics Data System (ADS)

    Ravanmehr, Reza; Khosroshahi, Habib

    2016-08-01

    The software architecture plays an important role in the distributed control systems of astronomical projects, because many subsystems and components must work together in a consistent and reliable way. We have utilized a customized architecture design approach based on the "4+1 view model" in order to design the INOCS software architecture. In this paper, after reviewing the top-level INOCS architecture, we present the software architecture model of INOCS inspired by the "4+1 model". For this purpose we provide logical, process, development, physical, and scenario views of our architecture using different UML diagrams and other illustrative visual charts. Each view presents the INOCS software architecture from a different perspective. We finish the paper with the science data operation of INO340 and concluding remarks.

  12. Aquarius' Object-Oriented, Plug and Play Component-Based Flight Software

    NASA Technical Reports Server (NTRS)

    Murray, Alexander; Shahabuddin, Mohammad

    2013-01-01

    The Aquarius mission involves a combined radiometer and radar instrument in low-Earth orbit, providing monthly global maps of Sea Surface Salinity. Operating successfully in orbit since June, 2011, the spacecraft bus was furnished by the Argentine space agency, Comision Nacional de Actividades Espaciales (CONAE). The instrument, built jointly by NASA's Caltech/JPL and Goddard Space Flight Center, has been successfully producing expectation-exceeding data since it was powered on in August of 2011. In addition to the radiometer and scatterometer, the instrument contains a command & data-handling subsystem with a computer and flight software (FSW) that is responsible for managing the instrument, its operation, and its data. Aquarius' FSW is conceived and architected as a Component-based system, in which the running software consists of a set of Components, each playing a distinctive role in the subsystem, instantiated and connected together at runtime. Component architectures feature a well-defined set of interfaces between the Components, visible and analyzable at the architectural level (see [1]). As we will describe, this kind of an architecture offers significant advantages over more traditional FSW architectures, which often feature a monolithic runtime structure. Component-based software is enabled by Object-Oriented (OO) techniques and languages, the use of which again is not typical in space mission FSW. We will argue in this paper that the use of OO design methods and tools (especially the Unified Modeling Language), as well as the judicious usage of C++, are very well suited to FSW applications, and we will present Aquarius FSW, describing our methods, processes, and design, as a successful case in point.
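
    The run-time wiring described above can be pictured with a small sketch, written here in Python rather than the mission's C++: each component exposes named output ports, and a separate assembly step connects them to other components' methods when the system starts, so the same components can be re-wired for a different configuration without touching their code. The component set, port names, and sample data are illustrative, not the Aquarius design.

    ```python
    class OutPort:
        """A connectable output port; sinks are attached at assembly time."""
        def __init__(self):
            self._sinks = []

        def connect(self, callback):
            self._sinks.append(callback)

        def emit(self, item):
            for sink in self._sinks:
                sink(item)

    class RadiometerReader:
        def __init__(self):
            self.samples_out = OutPort()

        def poll(self):
            self.samples_out.emit({"antenna_temp_k": 112.4})   # fabricated sample

    class Downlink:
        def queue_packet(self, packet):
            print("queued for downlink:", packet)

    # Assembly happens at run time: instantiate the components, then wire the ports.
    reader, downlink = RadiometerReader(), Downlink()
    reader.samples_out.connect(downlink.queue_packet)
    reader.poll()
    ```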

  13. Development of high performance scientific components for interoperability of computing packages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gulabani, Teena Pratap

    2008-01-01

    Three major high performance quantum chemistry computational packages, NWChem, GAMESS and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software design of each of these packages. A chemistry algorithm is hard and time-consuming to develop; integration of large quantum chemistry packages will allow resource sharing and thus avoid reinvention of the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component Based Software Engineering through a plug-and-play component framework called Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.
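
    The provides/uses port pattern that the Common Component Architecture is built around can be rendered, in greatly simplified form, as follows: one component registers the interfaces it provides with a framework, another looks up the interfaces it uses, and the framework does the wiring. This mirrors the concept only and is not the actual CCA/Babel API used to couple NWChem, GAMESS, or MPQC; all names and the placeholder energy value are invented.

    ```python
    class Framework:
        """Toy stand-in for a CCA-style framework holding provided ports."""
        def __init__(self):
            self._provided = {}                # port name -> implementation

        def add_provides_port(self, name, implementation):
            self._provided[name] = implementation

        def get_port(self, name):              # resolves another component's "uses" port
            return self._provided[name]

    class EnergyEvaluator:                      # e.g. a wrapper around a QM package
        def energy(self, geometry):
            return -76.02                       # placeholder value, not a real result

    class MDDriver:                             # e.g. an MM or driver component
        def __init__(self, framework):
            self.qm = framework.get_port("EnergyEvaluator")

        def step(self, geometry):
            return self.qm.energy(geometry)

    fw = Framework()
    fw.add_provides_port("EnergyEvaluator", EnergyEvaluator())
    driver = MDDriver(fw)
    print(driver.step(geometry="H2O"))          # QM energy reached through the port
    ```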

  14. Reference Specifications for SAVOIR Avionics Elements

    NASA Astrophysics Data System (ADS)

    Hult, Torbjorn; Lindskog, Martin; Roques, Remi; Planche, Luc; Brunjes, Bernhard; Dellandrea, Brice; Terraillon, Jean-Loup

    2012-08-01

    Space industry and agencies have recognized for quite some time the need to raise the level of standardisation in spacecraft avionics systems in order to increase efficiency and reduce development cost and schedule. This also reflects increasing competition in the global space business, a challenge that European space companies are facing at all stages of involvement in international markets. A number of initiatives towards this vision are driven both by industry and by ESA's R&D programmes. However, today an intensified coordination of these activities is required in order to achieve the necessary synergy and to ensure they converge towards the shared vision. It has been proposed to federate these initiatives under the common Space Avionics Open Interface Architecture (SAVOIR) initiative. Within this initiative, the approach based on reference architectures and building blocks plays a key role. Following the principles outlined above, the overall goal of SAVOIR is to establish a streamlined onboard architecture in order to standardize the development of avionics systems for space programmes. This reflects the need to increase efficiency and cost-effectiveness in the development process, as well as to accommodate the trend towards more functionality implemented by the onboard building blocks, i.e. HW and SW components, and more complexity in the overall space mission objectives.

  15. X-ray micro-computed tomography in willow reveals tissue patterning of reaction wood and delay in programmed cell death.

    PubMed

    Brereton, Nicholas James Beresford; Ahmed, Farah; Sykes, Daniel; Ray, Michael Jason; Shield, Ian; Karp, Angela; Murphy, Richard James

    2015-03-11

    Variation in the reaction wood (RW) response has been shown to be a principal component driving differences in lignocellulosic sugar yield from the bioenergy crop willow. The phenotypic cause(s) behind these differences in sugar yield, beyond their common elicitor, however, remain unclear. Here we use X-ray micro-computed tomography (μCT) to investigate RW-associated alterations in secondary xylem tissue patterning in three dimensions (3D). Major architectural alterations were successfully quantified in 3D and attributed to RW induction. Whilst the frequency of vessels was reduced in tension wood tissue (TW), the total vessel volume was significantly increased. Interestingly, a delay in programmed cell death (PCD) associated with TW was also clearly observed and readily quantified by μCT. The surprising degree to which the volume of vessels was increased illustrates the substantial xylem tissue remodelling involved in reaction wood formation. The remodelling suggests an important physiological compromise between structural and hydraulic architecture necessary for extensive alteration of biomass and helps to demonstrate the power of improving our perspective of cell and tissue architecture. The precise observation of xylem tissue development and quantification of the extent of delay in PCD provides a valuable and exciting insight into this bioenergy crop trait.

  16. Regulatory gene networks and the properties of the developmental process

    NASA Technical Reports Server (NTRS)

    Davidson, Eric H.; McClay, David R.; Hood, Leroy

    2003-01-01

    Genomic instructions for development are encoded in arrays of regulatory DNA. These specify large networks of interactions among genes producing transcription factors and signaling components. The architecture of such networks both explains and predicts developmental phenomenology. Although network analysis is yet in its early stages, some fundamental commonalities are already emerging. Two such are the use of multigenic feedback loops to ensure the progressivity of developmental regulatory states and the prevalence of repressive regulatory interactions in spatial control processes. Gene regulatory networks make it possible to explain the process of development in causal terms and eventually will enable the redesign of developmental regulatory circuitry to achieve different outcomes.

  17. Particulate Matter Filtration Design Considerations for Crewed Spacecraft Life Support Systems

    NASA Technical Reports Server (NTRS)

    Agui, Juan H.; Vijayakumar, R.; Perry, Jay L.

    2016-01-01

    Particulate matter filtration is a key component of crewed spacecraft cabin ventilation and life support system (LSS) architectures. The basic particulate matter filtration functional requirements as they relate to an exploration vehicle LSS architecture are presented. Particulate matter filtration concepts are reviewed and design considerations are discussed. A concept for a particulate matter filtration architecture suitable for exploration missions is presented. The conceptual architecture considers the results from developmental work and incorporates best practice design considerations.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yao; Balaprakash, Prasanna; Meng, Jiayuan

    We present Raexplore, a performance modeling framework for architecture exploration. Raexplore enables rapid, automated, and systematic search of architecture design space by combining hardware counter-based performance characterization and analytical performance modeling. We demonstrate Raexplore for two recent manycore processors, the IBM Blue Gene/Q compute chip and the Intel Xeon Phi, targeting a set of scientific applications. Our framework is able to capture complex interactions between architectural components including instruction pipeline, cache, and memory, and to achieve a 3–22% error for same-architecture and cross-architecture performance predictions. Furthermore, we apply our framework to assess the two processors, and discover and evaluate a list of architectural scaling options for future processor designs.
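
    A toy projection in the spirit of the counter-plus-model approach described above: characterize an application by instruction and byte counts (as one would derive from hardware counters), then project run time on a candidate architecture from its peak instruction throughput and memory bandwidth. The perfect-overlap assumption and all numbers are illustrative; Raexplore's actual models capture far more interaction between pipeline, cache, and memory.

    ```python
    def projected_time(instructions, bytes_moved, peak_ips, peak_bw_bytes_per_s):
        """Simple bound-based projection: the slower of compute and memory dominates."""
        compute_time = instructions / peak_ips
        memory_time = bytes_moved / peak_bw_bytes_per_s
        return max(compute_time, memory_time)      # assumes perfect overlap

    profile = {"instructions": 4.0e12, "bytes_moved": 1.5e12}   # counter-derived (assumed)

    candidates = {
        "wider-SIMD":     {"peak_ips": 2.0e12, "peak_bw_bytes_per_s": 2.0e11},
        "more-bandwidth": {"peak_ips": 1.0e12, "peak_bw_bytes_per_s": 4.0e11},
    }

    for name, arch in candidates.items():
        t = projected_time(profile["instructions"], profile["bytes_moved"], **arch)
        print(f"{name}: {t:.1f} s")
    ```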

  19. Joint Polar Satellite System (JPSS) Common Ground System (CGS) Current Technical Performance Measures

    NASA Astrophysics Data System (ADS)

    Cochran, S.; Panas, M.; Jamilkowski, M. L.; Miller, S. W.

    2015-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). The Joint Polar Satellite System will replace the afternoon orbit component and ground processing system of the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA. The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological and geophysical observations of the Earth. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS). Developed and maintained by Raytheon Intelligence, Information and Services (IIS), the CGS is a multi-mission enterprise system serving NOAA, NASA and their national and international partners. The CGS has demonstrated its scalability and flexibility to incorporate multiple missions efficiently and with minimal cost, schedule and risk, while strengthening global partnerships in weather and environmental monitoring. The CGS architecture is being upgraded to Block 2.0 in 2015 to "operationalize" S-NPP, leverage lessons learned to date in multi-mission support, take advantage of more reliable and efficient technologies, and satisfy new requirements and constraints in the continually evolving budgetary environment. To ensure the CGS meets these needs, we have developed 49 Technical Performance Measures (TPMs) across 10 categories, such as data latency, operational availability and scalability. This paper will provide an overview of the CGS Block 2.0 architecture, with particular focus on the 10 TPM categories listed above. We will provide updates on how we ensure the deployed architecture meets these TPMs to satisfy our multi-mission objectives with the deployment of Block 2.0.

  20. Describing the genetic architecture of epilepsy through heritability analysis.

    PubMed

    Speed, Doug; O'Brien, Terence J; Palotie, Aarno; Shkura, Kirill; Marson, Anthony G; Balding, David J; Johnson, Michael R

    2014-10-01

    Epilepsy is a disease with substantial missing heritability; despite its high genetic component, genetic association studies have had limited success detecting common variants which influence susceptibility. In this paper, we reassess the role of common variants on epilepsy using extensions of heritability analysis. Our data set consists of 1258 UK patients with epilepsy, of which 958 have focal epilepsy, and 5129 population control subjects, with genotypes recorded for over 4 million common single nucleotide polymorphisms. Firstly, we show that on the liability scale, common variants collectively explain at least 26% (standard deviation 5%) of phenotypic variation for all epilepsy and 27% (standard deviation 5%) for focal epilepsy. Secondly we provide a new method for estimating the number of causal variants for complex traits; when applied to epilepsy, our most optimistic estimate suggests that at least 400 variants influence disease susceptibility, with potentially many thousands. Thirdly, we use bivariate analysis to assess how similar the genetic architecture of focal epilepsy is to that of non-focal epilepsy; we demonstrate both significant differences (P = 0.004) and significant similarities (P = 0.01) between the two subtypes, indicating that although the clinical definition of focal epilepsy does identify a genetically distinct epilepsy subtype, there is also scope to improve the classification of epilepsy by incorporating genotypic information. Lastly, we investigate the potential value in using genetic data to diagnose epilepsy following a single epileptic seizure; we find that a prediction model explaining 10% of phenotypic variation could have clinical utility for deciding which single-seizure individuals are likely to benefit from immediate anti-epileptic drug therapy. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain.
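
    For orientation, liability-scale figures such as those quoted above are conventionally obtained by transforming the heritability estimated on the observed case-control scale; a standard form of that transformation (stated here as background, not as the study's exact expression) is

        \[
          h^2_{\mathrm{liab}} \;=\; h^2_{\mathrm{obs}} \times \frac{K^2\,(1-K)^2}{z^2\,P\,(1-P)},
        \]

    where K is the population prevalence of the disease, P is the proportion of cases in the sample, and z is the standard normal density at the liability threshold corresponding to K.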

  1. Software Productivity of Field Experiments Using the Mobile Agents Open Architecture with Workflow Interoperability

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Lowry, Michael R.; Nado, Robert Allen; Sierhuis, Maarten

    2011-01-01

    We analyzed a series of ten systematically developed surface exploration systems that integrated a variety of hardware and software components. Design, development, and testing data suggest that incremental buildup of an exploration system for long-duration capabilities is facilitated by an open architecture with appropriate-level APIs, specifically designed to facilitate integration of new components. This improves software productivity by reducing changes required for reconfiguring an existing system.

  2. Component architecture in drug discovery informatics.

    PubMed

    Smith, Peter M

    2002-05-01

    This paper reviews the characteristics of a new model of computing that has been spurred on by the Internet, known as Netcentric computing. Developments in this model led to distributed component architectures, which, although not new ideas, are now realizable with modern tools such as Enterprise Java. The application of this approach to scientific computing, particularly in pharmaceutical discovery research, is discussed and highlighted by a particular case involving the management of biological assay data.

  3. [A telemedicine electrocardiography system based on component-architecture software].

    PubMed

    Potapov, I V; Selishchev, S V

    2004-01-01

    The paper deals with a universal component-oriented architecture for creating telemedicine applications. The developed system supports ECG recording, blood pressure measurement, and pulse measurement. The system design comprises a central database server and a client telemedicine module. Data can be transmitted via different interfaces - from an ordinary local network to digital satellite phones. Data protection is ensured by microchip cards, which were used to implement the 3DES authentication algorithm.

  4. A UML-based ontology for describing hospital information system architectures.

    PubMed

    Winter, A; Brigl, B; Wendt, T

    2001-01-01

    To control the heterogeneity inherent to hospital information systems, information management needs appropriate methods or techniques for modeling hospital information systems. This paper shows that, for several reasons, available modeling approaches are not able to answer relevant questions of information management. To overcome this major deficiency we offer a UML-based ontology for describing hospital information system architectures. This ontology distinguishes three layers - the domain layer, the logical tool layer, and the physical tool layer - and defines the relevant components at each. The relations between these components, especially between components of different layers, make it possible to answer our information management questions.

  5. Uncoupling File System Components for Bridging Legacy and Modern Storage Architectures

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Tilmes, C.; Prathapan, S.; Earp, D. N.; Ashkar, J. S.

    2016-12-01

    Long-running Earth Science projects can span decades of architectural changes in both processing and storage environments. As storage architecture designs change over decades, such projects need to adjust their tools, systems, and expertise to properly integrate new technologies with their legacy systems. Traditional file systems lack the necessary support to accommodate such hybrid storage infrastructure, resulting in more complex tool development to encompass all possible storage architectures used for the project. The MODIS Adaptive Processing System (MODAPS) and the Level 1 and Atmospheres Archive and Distribution System (LAADS) are an example of a project spanning several decades which has evolved into a hybrid storage architecture. MODAPS/LAADS has developed the Lightweight Virtual File System (LVFS), which ensures a seamless integration of all the different storage architectures, from standard block-based POSIX-compliant storage disks, to object-based architectures such as the S3-compliant HGST Active Archive System, to Seagate Kinetic disks utilizing the Kinetic protocol. With LVFS, all analysis and processing tools used for the project continue to function unmodified regardless of the underlying storage architecture, enabling MODAPS/LAADS to easily integrate any new storage architecture without the costly need to modify existing tools to utilize such new systems. Most file systems are designed as a single application responsible for using metadata to organize data into a tree, for determining where data is stored, and for providing a method of data retrieval. We will show how LVFS' unique approach of treating these components in a loosely coupled fashion enables it to merge different storage architectures into a single uniform storage system that bridges the underlying hybrid architecture.
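
    The LVFS design itself is not spelled out in this record; the sketch below, with hypothetical class and method names, only illustrates the loose coupling it describes: a separate namespace component maps logical paths to (backend, key) pairs, so a POSIX-style backend and an object-store-style backend can sit behind the same read path.

        import abc
        import os

        class StorageBackend(abc.ABC):
            """Minimal data-retrieval interface; naming and placement live elsewhere."""
            @abc.abstractmethod
            def read(self, key: str) -> bytes: ...

        class PosixBackend(StorageBackend):
            def __init__(self, root: str):
                self.root = root
            def read(self, key: str) -> bytes:
                with open(os.path.join(self.root, key), "rb") as f:
                    return f.read()

        class ObjectBackend(StorageBackend):
            """Stand-in for an S3- or Kinetic-style object store (kept in memory here)."""
            def __init__(self, objects: dict):
                self.objects = objects
            def read(self, key: str) -> bytes:
                return self.objects[key]

        class VirtualNamespace:
            """Decoupled metadata component: logical path -> (backend, key)."""
            def __init__(self):
                self.table = {}
            def bind(self, path: str, backend: StorageBackend, key: str) -> None:
                self.table[path] = (backend, key)
            def read(self, path: str) -> bytes:
                backend, key = self.table[path]
                return backend.read(key)

        if __name__ == "__main__":
            ns = VirtualNamespace()
            ns.bind("/granules/a.hdf", ObjectBackend({"a.hdf": b"object data"}), "a.hdf")
            print(ns.read("/granules/a.hdf"))   # same call regardless of backend type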

  6. Crosstalk quantification, analysis, and trends in CMOS image sensors.

    PubMed

    Blockstein, Lior; Yadid-Pecht, Orly

    2010-08-20

    Pixel crosstalk (CTK) consists of three components, optical CTK (OCTK), electrical CTK (ECTK), and spectral CTK (SCTK). The CTK has been classified into two groups: pixel-architecture dependent and pixel-architecture independent. The pixel-architecture-dependent CTK (PADC) consists of the sum of two CTK components, i.e., the OCTK and the ECTK. This work presents a short summary of a large variety of methods for PADC reduction. Following that, this work suggests a clear quantifiable definition of PADC. Three complementary metal-oxide-semiconductor (CMOS) image sensors based on different technologies were empirically measured, using a unique scanning technology, the S-cube. The PADC is analyzed, and technology trends are shown.
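
    Restated compactly, using the record's notation and assuming the components are additive:

        \[ \mathrm{PADC} = \mathrm{OCTK} + \mathrm{ECTK}, \qquad \mathrm{CTK} = \mathrm{OCTK} + \mathrm{ECTK} + \mathrm{SCTK} \]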

  7. The Genetic Architecture of Major Depressive Disorder in Han Chinese Women.

    PubMed

    Peterson, Roseann E; Cai, Na; Bigdeli, Tim B; Li, Yihan; Reimers, Mark; Nikulova, Anna; Webb, Bradley T; Bacanu, Silviu-Alin; Riley, Brien P; Flint, Jonathan; Kendler, Kenneth S

    2017-02-01

    Despite the moderate, well-demonstrated heritability of major depressive disorder (MDD), there has been limited success in identifying replicable genetic risk loci, suggesting a complex genetic architecture. Research is needed to quantify the relative contribution of classes of genetic variation across the genome to inform future genetic studies of MDD. To apply aggregate genetic risk methods to clarify the genetic architecture of MDD by estimating and partitioning heritability by chromosome, minor allele frequency, and functional annotations and to test for enrichment of rare deleterious variants. The CONVERGE (China, Oxford, and Virginia Commonwealth University Experimental Research on Genetic Epidemiology) study collected data on 5278 patients with recurrent MDD from 58 provincial mental health centers and psychiatric departments of general medical hospitals in 45 cities and 23 provinces of China. Screened controls (n = 5196) were recruited from a range of locations, including general hospitals and local community centers. Data were collected from August 1, 2008, to October 31, 2012. Genetic risk for liability to recurrent MDD was partitioned using sparse whole-genome sequencing. In aggregate, common single-nucleotide polymorphisms (SNPs) explained between 20% and 29% of the variance in MDD risk, and the heritability in MDD explained by each chromosome was proportional to its length (r = 0.680; P = .0003), supporting a common polygenic etiology. Partitioning heritability by minor allele frequency indicated that the variance explained was distributed across the allelic frequency spectrum, although relatively common SNPs accounted for a disproportionate fraction of risk. Partitioning by genic annotation indicated a greater contribution of SNPs in protein-coding regions and within 3'-UTR regions of genes. Enrichment of SNPs associated with DNase I-hypersensitive sites was also found in many tissue types, including brain tissue. Examining burden scores from singleton exonic SNPs predicted to be deleterious indicated that cases had significantly more mutations than controls (odds ratio, 1.009; 95% CI, 1.003-1.014; P = .003), including those occurring in genes expressed in the brain (odds ratio, 1.011; 95% CI, 1.003-1.018; P = .004) and within nuclear-encoded genes with mitochondrial gene products (odds ratio, 1.075; 95% CI, 1.018-1.135; P = .009). Results support a complex etiology for MDD and highlight the value of analyzing components of heritability to clarify genetic architecture.

  8. The Genetic Architecture of Major Depressive Disorder in Han Chinese Women

    PubMed Central

    Peterson, Roseann E.; Cai, Na; Bigdeli, Tim B.; Li, Yihan; Reimers, Mark; Nikulova, Anna; Webb, Bradley T.; Bacanu, Silviu-Alin; Riley, Brien P.; Flint, Jonathan; Kendler, Kenneth S.

    2017-01-01

    IMPORTANCE Despite the moderate, well-demonstrated heritability of major depressive disorder (MDD), there has been limited success in identifying replicable genetic risk loci, suggesting a complex genetic architecture. Research is needed to quantify the relative contribution of classes of genetic variation across the genome to inform future genetic studies of MDD. OBJECTIVES To apply aggregate genetic risk methods to clarify the genetic architecture of MDD by estimating and partitioning heritability by chromosome, minor allele frequency, and functional annotations and to test for enrichment of rare deleterious variants. DESIGN, SETTING, AND PARTICIPANTS The CONVERGE (China, Oxford, and Virginia Commonwealth University Experimental Research on Genetic Epidemiology) study collected data on 5278 patients with recurrent MDD from 58 provincial mental health centers and psychiatric departments of general medical hospitals in 45 cities and 23 provinces of China. Screened controls (n = 5196) were recruited from a range of locations, including general hospitals and local community centers. Data were collected from August 1, 2008, to October 31, 2012. MAIN OUTCOMES AND MEASURES Genetic risk for liability to recurrent MDD was partitioned using sparse whole-genome sequencing. RESULTS In aggregate, common single-nucleotide polymorphisms (SNPs) explained between 20% and 29% of the variance in MDD risk, and the heritability in MDD explained by each chromosome was proportional to its length (r = 0.680; P = .0003), supporting a common polygenic etiology. Partitioning heritability by minor allele frequency indicated that the variance explained was distributed across the allelic frequency spectrum, although relatively common SNPs accounted for a disproportionate fraction of risk. Partitioning by genic annotation indicated a greater contribution of SNPs in protein-coding regions and within 3′-UTR regions of genes. Enrichment of SNPs associated with DNase I-hypersensitive sites was also found in many tissue types, including brain tissue. Examining burden scores from singleton exonic SNPs predicted to be deleterious indicated that cases had significantly more mutations than controls (odds ratio, 1.009; 95% CI, 1.003–1.014; P = .003), including those occurring in genes expressed in the brain (odds ratio, 1.011; 95% CI, 1.003–1.018; P = .004) and within nuclear-encoded genes with mitochondrial gene products (odds ratio, 1.075; 95% CI, 1.018–1.135; P = .009). CONCLUSIONS AND RELEVANCE Results support a complex etiology for MDD and highlight the value of analyzing components of heritability to clarify genetic architecture. PMID:28002544

  9. A Biologically Plausible Action Selection System for Cognitive Architectures: Implications of Basal Ganglia Anatomy for Learning and Decision-Making Models

    ERIC Educational Resources Information Center

    Stocco, Andrea

    2018-01-01

    Several attempts have been made previously to provide a biological grounding for cognitive architectures by relating their components to the computations of specific brain circuits. Often, the architecture's action selection system is identified with the basal ganglia. However, this identification overlooks one of the most important features of…

  10. Light-operated machines based on threaded molecular structures.

    PubMed

    Credi, Alberto; Silvi, Serena; Venturi, Margherita

    2014-01-01

    Rotaxanes and related species represent the most common implementation of the concept of artificial molecular machines, because the supramolecular nature of the interactions between the components and their interlocked architecture allow a precise control on the position and movement of the molecular units. The use of light to power artificial molecular machines is particularly valuable because it can play the dual role of "writing" and "reading" the system. Moreover, light-driven machines can operate without accumulation of waste products, and photons are the ideal inputs to enable autonomous operation mechanisms. In appropriately designed molecular machines, light can be used to control not only the stability of the system, which affects the relative position of the molecular components but also the kinetics of the mechanical processes, thereby enabling control on the direction of the movements. This step forward is necessary in order to make a leap from molecular machines to molecular motors.

  11. The advanced orbiting systems testbed program: Results to date

    NASA Technical Reports Server (NTRS)

    Newsome, Penny A.; Otranto, John F.

    1993-01-01

    The Consultative Committee for Space Data Systems Recommendations for Packet Telemetry and Advanced Orbiting Systems (AOS) propose standard solutions to data handling problems common to many types of space missions. The Recommendations address only space/ground and space/space data handling systems. Goddard Space Flight Center's AOS Testbed (AOST) Program was initiated to better understand the Recommendations and their impact on real-world systems, and to examine the extended domain of ground/ground data handling systems. Central to the AOST Program are the development of an end-to-end Testbed and its use in a comprehensive testing program. Other Program activities include flight-qualifiable component development, supporting studies, and knowledge dissemination. The results and products of the Program will reduce the uncertainties associated with the development of operational space and ground systems that implement the Recommendations. The results presented in this paper include architectural issues, a draft proposed standardized test suite and flight-qualifiable components.

  12. 3D Microstructural Architectures for Metal and Alloy Components Fabricated by 3D Printing/Additive Manufacturing Technologies

    NASA Astrophysics Data System (ADS)

    Martinez, E.; Murr, L. E.; Amato, K. N.; Hernandez, J.; Shindo, P. W.; Gaytan, S. M.; Ramirez, D. A.; Medina, F.; Wicker, R. B.

    The layer-by-layer building of monolithic, 3D metal components from selectively melted powder layers using laser or electron beams is a novel form of 3D printing or additive manufacturing. Microstructures created in these 3D products can involve novel, directional solidification structures which can include crystallographically oriented grains containing columnar arrays of precipitates characteristic of a microstructural architecture. These microstructural architectures are advantageously rendered in 3D image constructions involving light optical microscopy and scanning and transmission electron microscopy observations. Microstructural evolution can also be effectively examined through 3D image sequences which, along with x-ray diffraction (XRD) analysis in the x-y and x-z planes, can effectively characterize related crystallographic/texture variances. This paper compares 3D microstructural architectures in Co-base and Ni-base superalloys, columnar martensitic grain structures in 17-4 PH alloy, and columnar copper oxides and dislocation arrays in copper.

  13. GNC Architecture Design for ARES Simulation. Revision 3.0. Revision 3.0

    NASA Technical Reports Server (NTRS)

    Gay, Robert

    2006-01-01

    The purpose of this document is to describe the GNC architecture and associated interfaces for all ARES simulations. Establishing a common architecture facilitates development across the ARES simulations and provides an efficient mechanism for creating an end-to-end simulation capability. In general, the GNC architecture is the framework in which all GNC development takes place, including sensor and effector models. All GNC software applications have a standard location within the architecture, making integration easier and thus more efficient.

  14. Technology advances and market forces: Their impact on high performance architectures

    NASA Technical Reports Server (NTRS)

    Best, D. R.

    1978-01-01

    Reasonable projections into future supercomputer architectures and technology require an analysis of the computer industry market environment, the current capabilities and trends within the component industry, and the research activities on computer architecture in the industrial and academic communities. Management, programmer, architect, and user must cooperate to increase the efficiency of supercomputer development efforts. Care must be taken to match the funding, compiler, architecture and application with greater attention to testability, maintainability, reliability, and usability than supercomputer development programs of the past.

  15. Specifying structural constraints of architectural patterns in the ARCHERY language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez, Alejandro; HASLab INESC TEC and Universidade do Minho, Campus de Gualtar, 4710-057 Braga; Barbosa, Luis S.

    ARCHERY is an architectural description language for modelling and reasoning about distributed, heterogeneous and dynamically reconfigurable systems in terms of architectural patterns. The language supports the specification of architectures and their reconfiguration. This paper introduces a language extension for precisely describing the structural design decisions that pattern instances must respect in their (re)configurations. The extension is a propositional modal logic with recursion and nominals referencing components, i.e., a hybrid µ-calculus. Its expressiveness allows specifying safety and liveness constraints, as well as paths and cycles over structures. Refinements of classic architectural patterns are specified.

  16. Distinct Cell Wall Architectures in Seed Endosperms in Representatives of the Brassicaceae and Solanaceae

    PubMed Central

    Lee, Kieran J.D.; Dekkers, Bas J.W.; Steinbrecher, Tina; Walsh, Cherie T.; Bacic, Antony; Bentsink, Leónie; Leubner-Metzger, Gerhard; Knox, J. Paul

    2012-01-01

    In some species, a crucial role has been demonstrated for the seed endosperm during germination. The endosperm has been shown to integrate environmental cues with hormonal networks that underpin dormancy and seed germination, a process that involves the action of cell wall remodeling enzymes (CWREs). Here, we examine the cell wall architectures of the endosperms of two related Brassicaceae, Arabidopsis (Arabidopsis thaliana) and the close relative Lepidium (Lepidium sativum), and that of the Solanaceous species, tobacco (Nicotiana tabacum). The Brassicaceae species have a similar cell wall architecture that is rich in pectic homogalacturonan, arabinan, and xyloglucan. Distinctive features of the tobacco endosperm that are absent in the Brassicaceae representatives are major tissue asymmetries in cell wall structural components that reflect the future site of radicle emergence and abundant heteromannan. Cell wall architecture of the micropylar endosperm of tobacco seeds has structural components similar to those seen in Arabidopsis and Lepidium endosperms. In situ and biomechanical analyses were used to study changes in endosperms during seed germination and suggest a role for mannan degradation in tobacco. In the case of the Brassicaceae representatives, the structurally homogeneous cell walls of the endosperm can be acted on by spatially regulated CWRE expression. Genetic manipulations of cell wall components present in the Arabidopsis seed endosperm demonstrate the impact of cell wall architectural changes on germination kinetics. PMID:22961130

  17. Semantic interoperability--HL7 Version 3 compared to advanced architecture standards.

    PubMed

    Blobel, B G M E; Engel, K; Pharow, P

    2006-01-01

    To meet the challenge of high-quality and efficient care, highly specialized and distributed healthcare establishments have to communicate and co-operate in a semantically interoperable way. Information and communication technology must be open, flexible, scalable, knowledge-based and service-oriented, as well as secure and safe. To enable semantic interoperability, both the architecture (i.e., the structure and functions of the cooperating systems' components) and the approach to knowledge representation (i.e., the information used and its interpretation, algorithms, etc.) have to be defined in a harmonized, unified process. Deploying the Generic Component Model, systems and their components, underlying concepts and applied constraints must be formally modeled, strictly separating platform-independent from platform-specific models. As HL7 Version 3 claims to represent the most successful standard for semantic interoperability, HL7 has been analyzed regarding the requirements for model-driven, service-oriented design of semantically interoperable information systems, thereby moving from a communication paradigm to an architecture paradigm. The approach is compared with advanced architectural approaches for information systems such as OMG's CORBA 3, and with EHR systems such as GEHR/openEHR and CEN EN 13606 Electronic Health Record Communication. HL7 Version 3 is maturing towards an architectural approach for semantic interoperability. Despite current differences, there is close collaboration between the teams involved, guaranteeing convergence between the competing approaches.

  18. Increasing the Automation and Autonomy for Spacecraft Operations with Criteria Action Table

    NASA Technical Reports Server (NTRS)

    Li, Zhen-Ping; Savki, Cetin

    2005-01-01

    The Criteria Action Table (CAT) is an automation tool developed for monitoring real-time system messages for specific events and processes in order to take user-defined actions based on a set of user-defined rules. CAT was developed by Lockheed Martin Space Operations as part of a larger NASA effort at the Goddard Space Flight Center (GSFC) to create a component-based, middleware-based, and standards-based general-purpose ground system architecture referred to as GMSEC - the GSFC Mission Services Evolution Center. CAT has been integrated into the upgraded ground systems for the Tropical Rainfall Measuring Mission (TRMM) and Small Explorer (SMEX) satellites, and it plays the central role in their automation effort to reduce the cost and increase the reliability of spacecraft operations. The GMSEC architecture provides a standard communication interface and protocol for components to publish and subscribe to messages on an information bus. It also provides a standard message definition so components send and receive messages through the bus interface rather than directly to each other, thus reducing component-to-component coupling, interfaces, protocols, and link (socket) management. With the GMSEC architecture, components can publish standard event messages to the bus for all nominal, significant, and surprising events in regard to satellite, celestial, ground system, or any other activity. In addition to sending standard event messages, each GMSEC-compliant component is required to accept and process GMSEC directive request messages.
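
    A toy sketch of the criteria/action idea described above (this is not the GMSEC API or the CAT implementation; the bus, message fields, and rule form are assumptions made for illustration): each rule pairs a criterion over incoming event messages with an action, and the table is simply subscribed to the message bus.

        from dataclasses import dataclass, field
        from typing import Callable

        Message = dict   # e.g. {"type": "EVENT", "severity": 3, "text": "..."}

        @dataclass
        class Rule:
            criteria: Callable[[Message], bool]   # when does the rule fire?
            action: Callable[[Message], None]     # what to do when it fires

        @dataclass
        class CriteriaActionTable:
            rules: list = field(default_factory=list)
            def on_message(self, msg: Message) -> None:
                for rule in self.rules:
                    if rule.criteria(msg):
                        rule.action(msg)

        @dataclass
        class Bus:
            """Toy stand-in for a publish/subscribe information bus."""
            subscribers: list = field(default_factory=list)
            def subscribe(self, handler: Callable[[Message], None]) -> None:
                self.subscribers.append(handler)
            def publish(self, msg: Message) -> None:
                for handler in self.subscribers:
                    handler(msg)

        if __name__ == "__main__":
            cat = CriteriaActionTable([
                Rule(lambda m: m.get("severity", 0) >= 3,
                     lambda m: print("PAGE OPERATOR:", m["text"])),
            ])
            bus = Bus()
            bus.subscribe(cat.on_message)
            bus.publish({"type": "EVENT", "severity": 3, "text": "downlink lock lost"})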

  19. The dynamic relationship between plant architecture and competition

    PubMed Central

    Ford, E. David

    2014-01-01

    In this review, structural and functional changes are described in single-species, even-aged, stands undergoing competition for light. Theories of the competition process as interactions between whole plants have been advanced but have not been successful in explaining these changes and how they vary between species or growing conditions. This task now falls to researchers in plant architecture. Research in plant architecture has defined three important functions of individual plants that determine the process of canopy development and competition: (i) resource acquisition plasticity; (ii) morphogenetic plasticity; (iii) architectural variation in efficiency of interception and utilization of light. In this review, this research is synthesized into a theory for competition based on five groups of postulates about the functioning of plants in stands. Group 1: competition for light takes place at the level of component foliage and branches. Group 2: the outcome of competition is determined by the dynamic interaction between processes that exert dominance and processes that react to suppression. Group 3: species differences may affect both exertion of dominance and reaction to suppression. Group 4: individual plants may simultaneously exhibit, in different component parts, resource acquisition and morphogenetic plasticity. Group 5: mortality is a time-delayed response to suppression. Development of architectural models when combined with field investigations is identifying research needed to develop a theory of architectural influences on the competition process. These include analyses of the integration of foliage and branch components into whole-plant growth and precise definitions of environmental control of morphogenetic plasticity and its interaction with acquisition of carbon for plant growth. PMID:24987396

  20. The dynamic relationship between plant architecture and competition.

    PubMed

    Ford, E David

    2014-01-01

    In this review, structural and functional changes are described in single-species, even-aged, stands undergoing competition for light. Theories of the competition process as interactions between whole plants have been advanced but have not been successful in explaining these changes and how they vary between species or growing conditions. This task now falls to researchers in plant architecture. Research in plant architecture has defined three important functions of individual plants that determine the process of canopy development and competition: (i) resource acquisition plasticity; (ii) morphogenetic plasticity; (iii) architectural variation in efficiency of interception and utilization of light. In this review, this research is synthesized into a theory for competition based on five groups of postulates about the functioning of plants in stands. Group 1: competition for light takes place at the level of component foliage and branches. Group 2: the outcome of competition is determined by the dynamic interaction between processes that exert dominance and processes that react to suppression. Group 3: species differences may affect both exertion of dominance and reaction to suppression. Group 4: individual plants may simultaneously exhibit, in different component parts, resource acquisition and morphogenetic plasticity. Group 5: mortality is a time-delayed response to suppression. Development of architectural models when combined with field investigations is identifying research needed to develop a theory of architectural influences on the competition process. These include analyses of the integration of foliage and branch components into whole-plant growth and precise definitions of environmental control of morphogenetic plasticity and its interaction with acquisition of carbon for plant growth.

  1. Engineering interfacial photo-induced charge transfer based on nanobamboo array architecture for efficient solar-to-chemical energy conversion.

    PubMed

    Wang, Xiaotian; Liow, Chihao; Bisht, Ankit; Liu, Xinfeng; Sum, Tze Chien; Chen, Xiaodong; Li, Shuzhou

    2015-04-01

    Engineering interfacial photo-induced charge transfer for highly synergistic photocatalysis is successfully realized based on nanobamboo array architecture. Programmable assemblies of various components and heterogeneous interfaces, and, in turn, engineering of the energy band structure along the charge transport pathways, play a critical role in generating excellent synergistic effects of multiple components for promoting photocatalytic efficiency. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Microwave Photonic Architecture for Direction Finding of LPI Emitters: Front End Analog Circuit Design and Component Characterization

    DTIC Science & Technology

    2016-09-01

    design to control the phase shifters was complex, and the calibration process was time consuming. During the redesign process, we carried out...signals in time domain with a maximum sampling frequency of 20 Giga samples per second. In the previous tests of the design, the performance of...PHOTONIC ARCHITECTURE FOR DIRECTION FINDING OF LPI EMITTERS: FRONT-END ANALOG CIRCUIT DESIGN AND COMPONENT CHARACTERIZATION by Chew K. Tan

  3. A Facility and Architecture for Autonomy Research

    NASA Technical Reports Server (NTRS)

    Pisanich, Greg; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Autonomy is a key enabling factor in the advancement of remote robotic exploration. There is currently a large gap between autonomy software at the research level and software that is ready for insertion into near-term space missions. The Mission Simulation Facility (MSF) will bridge this gap by providing a simulation framework and suite of simulation tools to support research in autonomy for remote exploration. This system will allow developers of autonomy software to test their models in a high-fidelity simulation and evaluate their system's performance against a set of integrated, standardized simulations. The Mission Simulation ToolKit (MST) uses a distributed architecture with a communication layer that is built on top of the standardized High Level Architecture (HLA). This architecture enables the use of existing high-fidelity models, allows mixing simulation components from various computing platforms, and enforces the use of a standardized high-level interface among components. The components needed to achieve a realistic simulation can be grouped into four categories: environment generation (terrain, environmental features), robotic platform behavior (robot dynamics), instrument models (camera/spectrometer/etc.), and data analysis. The MST will provide basic components in these areas but allows users to easily plug in any refined model by means of a communication protocol. Finally, a description file defines the robot and environment parameters for easy configuration and ensures that all the simulation models share the same information.

  4. Component-Level Electronic-Assembly Repair (CLEAR) Operational Concept

    NASA Technical Reports Server (NTRS)

    Oeftering, Richard C.; Bradish, Martin A.; Juergens, Jeffrey R.; Lewis, Michael J.; Vrnak, Daniel R.

    2011-01-01

    This Component-Level Electronic-Assembly Repair (CLEAR) Operational Concept document was developed as a first step in developing the Component-Level Electronic-Assembly Repair (CLEAR) System Architecture (NASA/TM-2011-216956). The CLEAR operational concept defines how the system will be used by the Constellation Program and what needs it meets. The document creates scenarios for major elements of the CLEAR architecture. These scenarios are generic enough to apply to near-Earth, Moon, and Mars missions. The CLEAR operational concept involves basic assumptions about the overall program architecture and interactions with the CLEAR system architecture. The assumptions include spacecraft and operational constraints for near-Earth orbit, Moon, and Mars missions. This document addresses an incremental development strategy where capabilities evolve over time, but it is structured to prevent obsolescence. The approach minimizes flight hardware by exploiting Internet-like telecommunications that enables CLEAR capabilities to remain on Earth and to be uplinked as needed. To minimize crew time and operational cost, CLEAR exploits offline development and validation to support online teleoperations. Operational concept scenarios are developed for diagnostics, repair, and functional test operations. Many of the supporting functions defined in these operational scenarios are further defined as technologies in NASA/TM-2011-216956.

  5. Using a virtual world for robot planning

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Monaco, John V.; Lin, Yixia; Funk, Christopher; Lyons, Damian

    2012-06-01

    We are building a robot cognitive architecture that constructs a real-time virtual copy of itself and its environment, including people, and uses the model to process perceptual information and to plan its movements. This paper describes the structure of this architecture. The software components of this architecture include PhysX for the virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture that controls the perceptual processing and task planning. The RS (Robot Schemas) language is implemented in Soar, providing the ability to reason about concurrency and time. This Soar/RS component controls visual processing, deciding which objects and dynamics to render into PhysX, and the degree of detail required for the task. As the robot runs, its virtual model diverges from physical reality, and errors grow. The Match-Mediated Difference component monitors these errors by comparing the visual data with corresponding data from virtual cameras, and notifies Soar/RS of significant differences, e.g. a new object that appears, or an object that changes direction unexpectedly. Soar/RS can then run PhysX much faster than real-time and search among possible future world paths to plan the robot's actions. We report experimental results in indoor environments.
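
    A small illustration of the divergence-monitoring step described above (not the authors' Match-Mediated Difference component; the comparison metric, threshold, and notification format are assumed for illustration): an observed camera frame is compared against the frame rendered from the corresponding virtual camera, and the planner is notified when the difference grows too large.

        import numpy as np

        def divergence_score(observed: np.ndarray, rendered: np.ndarray) -> float:
            """Mean absolute per-pixel difference between a real and a virtual camera frame."""
            diff = observed.astype(np.float32) - rendered.astype(np.float32)
            return float(np.mean(np.abs(diff)))

        def monitor(observed, rendered, threshold: float = 12.0):
            """Return a notification for the planner when the model has drifted too far."""
            score = divergence_score(observed, rendered)
            if score > threshold:
                return {"event": "model_divergence", "score": score}
            return None

        if __name__ == "__main__":
            real = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
            virtual = real.copy()
            print(monitor(real, virtual))         # None: virtual model still matches reality
            print(monitor(real, 255 - virtual))   # large difference -> notification emitted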

  6. In-Space Cryogenic Propellant Depot (ISCPD) Architecture Definitions and Systems Studies

    NASA Technical Reports Server (NTRS)

    Fikes, John C.; Howell, Joe T.; Henley, Mark

    2006-01-01

    The objectives of the ISCPD Architecture Definitions and Systems Studies were to determine high leverage propellant depot architecture concepts, system configuration trades, and related technologies to enable more ambitious and affordable human and robotic exploration of the Earth Neighborhood and beyond. This activity identified architectures and concepts that preposition and store propellants in space for exploration and commercial space activities, consistent with Exploration Systems Research and Technology (ESR&T) objectives. Commonalities across mission scenarios for these architecture definitions, depot concepts, technologies, and operations were identified that also best satisfy the Vision of Space Exploration. Trade studies were conducted, technology development needs identified and assessments performed to drive out the roadmap for obtaining an in-space cryogenic propellant depot capability. The Boeing Company supported the NASA Marshall Space Flight Center (MSFC) by conducting this Depot System Architecture Development Study. The primary objectives of this depot architecture study were: (1) determine high leverage propellant depot concepts and related technologies; (2) identify commonalities across mission scenarios of depot concepts, technologies, and operations; (3) determine the best depot concepts and key technology requirements and (4) identify technology development needs including definition of ground and space test article requirements.

  7. Bio-inspired adaptive feedback error learning architecture for motor control.

    PubMed

    Tolu, Silvia; Vanegas, Mauricio; Luque, Niceto R; Garrido, Jesús A; Ros, Eduardo

    2012-10-01

    This study proposes an adaptive control architecture based on an accurate regression method called Locally Weighted Projection Regression (LWPR) and on a bio-inspired module, namely a cerebellar-like engine. This hybrid architecture takes full advantage of the machine learning module (LWPR kernel) to abstract an optimized representation of the sensorimotor space, while the cerebellar component integrates this to generate corrective terms in the framework of a control task. Furthermore, we illustrate how the use of a simple adaptive error feedback term makes it possible to use the proposed architecture even in the absence of an accurate analytic reference model. The presented approach achieves accurate control with low-gain corrective terms (suitable for compliant control schemes). We evaluate the contribution of the different components of the proposed scheme by comparing the obtained performance with alternative approaches. Then, we show that the presented architecture can be used for accurate manipulation of different objects when their physical properties are not directly known by the controller. We evaluate how the scheme scales for simulated plants with a high number of degrees of freedom (7 DOFs).
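
    A minimal sketch of generic feedback error learning on a 1-D point mass (this is not the LWPR/cerebellar architecture of the paper; the plant, gains, and learning rule are illustrative assumptions): the feedback command is used as the error signal that trains the feedforward inverse model, so feedback effort shrinks as the learned model improves.

        import numpy as np

        true_mass = 2.0      # unknown to the learner
        m_hat = 0.5          # learned parameter of the inverse model u_ff = m_hat * a_des
        kp, kd, lr, dt = 40.0, 10.0, 0.05, 0.01

        x, v = 0.0, 0.0
        for step in range(2000):
            t = step * dt
            x_des, v_des, a_des = np.sin(t), np.cos(t), -np.sin(t)   # desired trajectory

            u_ff = m_hat * a_des                         # feedforward from learned inverse model
            u_fb = kp * (x_des - x) + kd * (v_des - v)   # stabilizing feedback
            u = u_ff + u_fb

            # The feedback command serves as the training signal for the inverse model.
            m_hat += lr * u_fb * a_des

            a = u / true_mass                            # plant dynamics (simple point mass)
            v += a * dt
            x += v * dt

        print(f"learned mass estimate ~ {m_hat:.2f} (true value {true_mass})")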

  8. Common modeling system for digital simulation

    NASA Technical Reports Server (NTRS)

    Painter, Rick

    1994-01-01

    The Joint Modeling and Simulation System (J-MASS) is a tri-service investigation into a common modeling framework for the development of digital models. The basis for the success of this framework is an X-Window-based, open-systems-architecture, object-based/oriented, standard-interface approach to digital model construction, configuration, execution, and post-processing. For years Department of Defense (DOD) agencies have produced various weapon systems/technologies and, typically, digital representations of those systems/technologies. These digital representations (models) have also been developed for other reasons such as studies and analysis, Cost Effectiveness Analysis (COEA) tradeoffs, etc. Unfortunately, there have been no Modeling and Simulation (M&S) standards, guidelines, or efforts towards commonality in DOD M&S. The typical scenario is that an organization hires a contractor to build hardware, and in doing so a digital model may be constructed. Until recently, this model was not even obtained by the organization. Even if it was procured, it was on a unique platform, in a unique language, with unique interfaces, with the result being unique maintenance requirements. Additionally, the constructors of the model expended more effort in writing the 'infrastructure' of the model/simulation (e.g., user interface, database/database management system, data journaling/archiving, graphical presentations, environment characteristics, other components in the simulation, etc.) than in producing the model of the desired system. Other side effects include duplication of effort, varying assumptions, lack of credibility/validation, and decentralization in policy and execution. J-MASS provides the infrastructure, standards, toolset, and architecture to permit M&S developers and analysts to concentrate on their area of interest.

  9. Microchannel cross load array with dense parallel input

    DOEpatents

    Swierkowski, Stefan P.

    2004-04-06

    An architecture or layout for microchannel arrays using T or Cross (+) loading for electrophoresis or other injection and separation chemistry that are performed in microfluidic configurations. This architecture enables a very dense layout of arrays of functionally identical shaped channels and it also solves the problem of simultaneously enabling efficient parallel shapes and biasing of the input wells, waste wells, and bias wells at the input end of the separation columns. One T load architecture uses circular holes with common rows, but not columns, which allows the flow paths for each channel to be identical in shape, using multiple mirror image pieces. Another T load architecture enables the access hole array to be formed on a biaxial, collinear grid suitable for EDM micromachining (square holes), with common rows and columns.

  10. 47 CFR 25.254 - Special requirements for ancillary terrestrial components operating in the 1610-1626.5 MHz/2483.5...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... architecture. To the extent that a 1.6/2.4 GHz Mobile-Satellite Service licensee is able to demonstrate that the use of different system architectures would produce no greater potential interference than would... authorization based on another system architecture. [68 FR 33653, June 5, 2003, as amended at 69 FR 18803, Apr...

  11. 47 CFR 25.254 - Special requirements for ancillary terrestrial components operating in the 1610-1626.5 MHz/2483.5...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... architecture. To the extent that a 1.6/2.4 GHz Mobile-Satellite Service licensee is able to demonstrate that the use of different system architectures would produce no greater potential interference than would... authorization based on another system architecture. [68 FR 33653, June 5, 2003, as amended at 69 FR 18803, Apr...

  12. 47 CFR 25.254 - Special requirements for ancillary terrestrial components operating in the 1610-1626.5 MHz/2483.5...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: The preceding rules of § 25.254 are based on cdma2000 and IS-95 system architecture. To the extent that a Big LEO MSS licensee is able to demonstrate that the use of different system architectures would... section, an MSS licensee is permitted to apply for ATC authorization based on another system architecture...

  13. 47 CFR 25.254 - Special requirements for ancillary terrestrial components operating in the 1610-1626.5 MHz/2483.5...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...: The preceding rules of § 25.254 are based on cdma2000 and IS-95 system architecture. To the extent that a Big LEO MSS licensee is able to demonstrate that the use of different system architectures would... section, an MSS licensee is permitted to apply for ATC authorization based on another system architecture...

  14. Electro-Optic Computing Architectures: Volume II. Components and System Design and Analysis

    DTIC Science & Technology

    1998-02-01

    The objective of the Electro-Optic Computing Architecture (EOCA) program was to develop multi-function electro-optic interfaces and optical...interconnect units to enhance the performance of parallel processor systems and form the building blocks for future electro-optic computing architectures...Specifically, three multi-function interface modules were targeted for development - an Electro-Optic Interface (EOI), an Optical Interconnection Unit

  15. Simulation system architecture design for generic communications link

    NASA Technical Reports Server (NTRS)

    Tsang, Chit-Sang; Ratliff, Jim

    1986-01-01

    This paper addresses a computer simulation system architecture design for generic digital communications systems. It addresses the issues of an overall system architecture in order to achieve a user-friendly, efficient, and yet easily implementable simulation system. The system block diagram and its individual functional components are described in detail. Software implementation is discussed with the VAX/VMS operating system used as a target environment.

  16. ITS component specification. Appendix A, Requirements per component

    DOT National Transportation Integrated Search

    1997-01-01

    The objective of the Polaris Project is to define an Intelligent Transportation Systems (ITS) architecture for the state of Minnesota. This appendix lists the requirements that have been allocated to each component. The requirements for each componen...

  17. A Multi-mission Event-Driven Component-Based System for Support of Flight Software Development, ATLO, and Operations first used by the Mars Science Laboratory (MSL) Project

    NASA Technical Reports Server (NTRS)

    Dehghani, Navid; Tankenson, Michael

    2006-01-01

    This viewgraph presentation reviews the architectural description of the Mission Data Processing and Control System (MPCS). MPCS is an event-driven, multi-mission set of ground data processing components providing uplink, downlink, and data management capabilities, which will support the Mars Science Laboratory (MSL) project as its first target mission. MPCS is designed around these factors: (1) it enables a plug-and-play architecture; (2) it has strong inheritance from GDS components that have been developed for other flight projects (MER, MRO, DAWN, MSAP) and are currently being used in operations and ATLO; and (3) MPCS components are Java-based, platform independent, and designed to consume and produce XML-formatted data.

  18. Open architectures for formal reasoning and deductive technologies for software development

    NASA Technical Reports Server (NTRS)

    Mccarthy, John; Manna, Zohar; Mason, Ian; Pnueli, Amir; Talcott, Carolyn; Waldinger, Richard

    1994-01-01

    The objective of this project is to develop an open architecture for formal reasoning systems. One goal is to provide a framework with a clear semantic basis for specification and instantiation of generic components; construction of complex systems by interconnecting components; and for making incremental improvements and tailoring to specific applications. Another goal is to develop methods for specifying component interfaces and interactions to facilitate use of existing and newly built systems as 'off the shelf' components, thus helping bridge the gap between producers and consumers of reasoning systems. In this report we summarize results in several areas: our data base of reasoning systems; a theory of binding structures; a theory of components of open systems; a framework for specifying components of open reasoning system; and an analysis of the integration of rewriting and linear arithmetic modules in Boyer-Moore using the above framework.

  19. Common Board Design for the OBC I/O Unit and The OBC CCSDS Unit of The Stuttgart University Satellite "Flying Laptop"

    NASA Astrophysics Data System (ADS)

    Eickhoff, Jens; Cook, Barry; Walker, Paul; Habinc, Sadi; Witt, Rouven; Roser, Hans-Peter

    2011-08-01

    As already published in another paper at DASIA 2010 in Budapest [1], the University of Stuttgart, Germany, is developing an advanced 3-axis stabilized small satellite applying industry standards for command/control techniques, onboard software design and onboard computer components. The satellite has a launch mass of approx. 120 kg and is foreseen to be launched end 2013 as a piggy-back payload on an Indian PSLV launcher. During phase C the main challenge was the conceptual design of an ultra-compact and performant onboard computer (OBC), which is able to support an industry-standard operating system, a PUS-standard-based onboard software (OBSW) and CCSDS-standard-based ground/space communication. The developed architecture is based on 4 main elements (see [1] and Figure 4):
    • the OBC core board (single board computer based on the LEON3 FT architecture),
    • an I/O board for all OBC digital interfaces to S/C equipment,
    • a CCSDS TC/TM pre-processor board,
    • a CPDU embedded in the PCDU.
    The EM for the OBC core has meanwhile been shipped to the University by the supplier Aeroflex Colorado Springs, USA, and has been in use in Stuttgart since January 2011. Figure 2 and Figure 3 provide brief impressions. This paper concentrates on the common design of the I/O board and the CCSDS processor boards.

  20. GMPLS-based control plane for optical networks: early implementation experience

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Pendarakis, Dimitrios; Komaee, Nooshin; Saha, Debanjan

    2002-07-01

    Generalized Multi-Protocol Label Switching (GMPLS) extends MPLS signaling and Internet routing protocols to provide a scalable, interoperable, distributed control plane, which is applicable to multiple network technologies such as optical cross connects (OXCs), photonic switches, IP routers, ATM switches, SONET and DWDM systems. It is intended to facilitate automatic service provisioning and dynamic neighbor and topology discovery across multi-vendor intelligent transport networks, as well as their clients. Efforts to standardize such a distributed common control plane have reached various stages in several bodies such as the IETF, ITU and OIF. This paper describes the design considerations and architecture of a GMPLS-based control plane that we have prototyped for core optical networks. Functional components of GMPLS signaling and routing are integrated in this architecture with an application layer controller module. Various requirements, including bandwidth, network protection and survivability, traffic engineering, and optimal utilization of network resources, are taken into consideration during path computation and provisioning. Initial experiments with our prototype demonstrate the feasibility and main benefits of GMPLS as a distributed control plane for core optical networks. In addition to such feasibility results, actual adoption and deployment of GMPLS as a common control plane for intelligent transport networks will depend on the successful completion of relevant standardization activities, extensive interoperability testing, as well as the strengthening of appropriate business drivers.
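
    Path computation details differ across GMPLS implementations; the sketch below only illustrates one ingredient mentioned above, bandwidth-constrained path selection (the topology, costs, and bandwidth figures are made up): links without enough unreserved bandwidth are pruned, then a shortest path is computed over what remains.

        import heapq

        # links: (node_a, node_b) -> attributes; treated as bidirectional here.
        LINKS = {
            ("A", "B"): {"cost": 1, "unreserved_bw": 10},
            ("B", "C"): {"cost": 1, "unreserved_bw": 2},
            ("A", "D"): {"cost": 2, "unreserved_bw": 10},
            ("D", "C"): {"cost": 2, "unreserved_bw": 10},
        }

        def constrained_shortest_path(src, dst, demand_bw):
            """Dijkstra over the subgraph of links with enough unreserved bandwidth."""
            adj = {}
            for (a, b), attrs in LINKS.items():
                if attrs["unreserved_bw"] >= demand_bw:
                    adj.setdefault(a, []).append((b, attrs["cost"]))
                    adj.setdefault(b, []).append((a, attrs["cost"]))
            best = {src: 0}
            heap = [(0, src, [src])]
            while heap:
                cost, node, path = heapq.heappop(heap)
                if node == dst:
                    return cost, path
                for nxt, c in adj.get(node, []):
                    if nxt not in best or cost + c < best[nxt]:
                        best[nxt] = cost + c
                        heapq.heappush(heap, (cost + c, nxt, path + [nxt]))
            return None

        print(constrained_shortest_path("A", "C", demand_bw=5))   # routes around the thin B-C link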

  1. Autonomic Intelligent Cyber Sensor to Support Industrial Control Network Awareness

    DOE PAGES

    Vollmer, Todd; Manic, Milos; Linda, Ondrej

    2013-06-01

    The proliferation of digital devices in a networked industrial ecosystem, along with an exponential growth in complexity and scope, has resulted in elevated security concerns and management complexity issues. This paper describes a novel architecture utilizing concepts of Autonomic computing and a SOAP-based IF-MAP external communication layer to create a network security sensor. This approach simplifies integration of legacy software and supports a secure, scalable, self-managed framework. The contribution of this paper is two-fold: 1) a flexible two-level communication layer based on Autonomic computing and Service Oriented Architecture is detailed, and 2) three complementary modules that dynamically reconfigure in response to a changing environment are presented. One module utilizes clustering and fuzzy logic to monitor traffic for abnormal behavior. Another module passively monitors network traffic and deploys deceptive virtual network hosts. These components of the sensor system were implemented in C++ and PERL and utilize a common internal D-Bus communication mechanism. A proof-of-concept prototype was deployed on a mixed-use test network showing the possible real-world applicability. In testing, 45 of the 46 network-attached devices were recognized and 10 of the 12 emulated devices were created with specific operating system and port configurations. Additionally, the anomaly detection algorithm achieved a 99.9% recognition rate. All output from the modules was correctly distributed using the common communication structure.
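
    The sensor's actual clustering/fuzzy-logic algorithm is not reproduced here; the sketch below only illustrates the general pattern of learning a model of normal traffic and scoring new observation windows against it (the feature choices, synthetic data, and threshold rule are assumptions made for illustration).

        import numpy as np

        # Feature vectors per traffic window, e.g. [packets/s, mean packet size, distinct ports].
        rng = np.random.default_rng(0)
        normal = rng.normal(loc=[200, 500, 12], scale=[20, 40, 2], size=(500, 3))

        centroid = normal.mean(axis=0)           # "normal traffic" cluster center
        scale = normal.std(axis=0) + 1e-9

        def anomaly_score(window: np.ndarray) -> float:
            """Normalized distance from the learned normal-traffic centroid."""
            return float(np.linalg.norm((window - centroid) / scale))

        threshold = np.quantile([anomaly_score(w) for w in normal], 0.999)

        probe_scan = np.array([950, 80, 800])    # bursty small packets on many ports
        print(anomaly_score(probe_scan) > threshold)   # True -> raise an alert on the bus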

  2. THE ACTIVITY/SPACE, A LEAST COMMON DENOMINATOR FOR ARCHITECTURAL PROGRAMMING.

    ERIC Educational Resources Information Center

    HAVILAND, DAVID S.

    Two interrelated problem areas of architectural programming are discussed: (1) "needs definition," and (2) "needs documentation and communication." Fundamental issues and work of the Center for Architectural Research are presented. Issues are the failure to recognize how, when, and in what form the need will be used. Criteria formulation must be…

  3. Preservation of micro-architecture and angiogenic potential in a pulmonary acellular matrix obtained using intermittent intra-tracheal flow of detergent enzymatic treatment.

    PubMed

    Maghsoudlou, Panagiotis; Georgiades, Fanourios; Tyraskis, Athanasios; Totonelli, Giorgia; Loukogeorgakis, Stavros P; Orlando, Giuseppe; Shangaris, Panicos; Lange, Peggy; Delalande, Jean-Marie; Burns, Alan J; Cenedese, Angelo; Sebire, Neil J; Turmaine, Mark; Guest, Brogan N; Alcorn, John F; Atala, Anthony; Birchall, Martin A; Elliott, Martin J; Eaton, Simon; Pierro, Agostino; Gilbert, Thomas W; De Coppi, Paolo

    2013-09-01

    Tissue engineering of autologous lung tissue aims to become a therapeutic alternative to transplantation. Efforts published so far in creating scaffolds have used harsh decellularization techniques that damage the extracellular matrix (ECM), deplete its components and take up to 5 weeks to perform. The aim of this study was to create a lung natural acellular scaffold using a method that will reduce the time of production and better preserve scaffold architecture and ECM components. Decellularization of rat lungs via the intratracheal route removed most of the nuclear material when compared to the other entry points. An intermittent inflation approach that mimics lung respiration yielded an acellular scaffold in a shorter time with an improved preservation of pulmonary micro-architecture. Electron microscopy demonstrated the maintenance of an intact alveolar network, with no evidence of collapse or tearing. Pulsatile dye injection via the vasculature indicated an intact capillary network in the scaffold. Morphometry analysis demonstrated a significant increase in alveolar fractional volume, with alveolar size analysis confirming that alveolar dimensions were maintained. Biomechanical testing of the scaffolds indicated an increase in resistance and elastance when compared to fresh lungs. Staining and quantification for ECM components showed a presence of collagen, elastin, GAG and laminin. The intratracheal intermittent decellularization methodology could be translated to sheep lungs, demonstrating a preservation of ECM components, alveolar and vascular architecture. Decellularization treatment and methodology preserves lung architecture and ECM whilst reducing the production time to 3 h. Cell seeding and in vivo experiments are necessary to proceed towards clinical translation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Summary of International Border Crossings Roundtable Meeting Held in Norfolk, Virginia, June 11, 1993

    DOT National Transportation Integrated Search

    2002-04-01

    This document is an executive summary that describes the National Intelligent Transportation System (ITS) Architecture. This document covers the following major topics: (1) ITS Opportunity - need for the architecture; (2) main components of the Na...

  5. South Florida Freight Advanced Traveler Information System : architecture and implementation options summary report.

    DOT National Transportation Integrated Search

    2013-07-01

    This Final Architecture and Design report has been prepared to describe the structure and design of all the system components for the South Florida FRATIS Demonstration Project. More specifically, this document provides: Detailed descriptions of ...

  6. ITS system specification. Appendix B, requirements by service/function/subfunction

    DOT National Transportation Integrated Search

    1997-01-01

    The objective of the Polaris Project is to define an Intelligent Transportation Systems (ITS) architecture for the state of Minnesota. An architecture is a framework that defines how multiple ITS Components interrelate and contribute to the overall I...

  7. Molecular basis of angiosperm tree architecture

    USDA-ARS?s Scientific Manuscript database

    The shoot architecture of trees greatly impacts orchard and forest management methods. Amassing greater knowledge of the molecular genetics behind tree form can benefit these industries as well as contribute to basic knowledge of plant developmental biology. This review covers basic components of ...

  8. ESPC Common Model Architecture Earth System Modeling Framework (ESMF) Software and Application Development

    DTIC Science & Technology

    2015-09-30

    originate from NASA, NOAA, and community modeling efforts, and support for creation of the suite was shared by sponsors from other agencies. ESPS...Framework (ESMF) Software and Application Development Cecelia Deluca NESII/CIRES/NOAA Earth System Research Laboratory 325 Broadway Boulder, CO...Capability (NUOPC) was established between NOAA and Navy to develop a common software architecture for easy and efficient interoperability. The

  9. caGrid 1.0: a Grid enterprise architecture for cancer research.

    PubMed

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2007-10-11

    caGrid is the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. The current release, caGrid version 1.0, is developed as the production Grid software infrastructure of caBIG. Based on feedback from adopters of the previous version (caGrid 0.5), it has been significantly enhanced with new features and improvements to existing components. This paper presents an overview of caGrid 1.0, its main components, and enhancements over caGrid 0.5.

  10. Digital visual communications using a Perceptual Components Architecture

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1991-01-01

    The next era of space exploration will generate extraordinary volumes of image data, and management of this image data is beyond current technical capabilities. We propose a strategy for coding visual information that exploits the known properties of early human vision. This Perceptual Components Architecture codes images and image sequences in terms of discrete samples from limited bands of color, spatial frequency, orientation, and temporal frequency. This spatiotemporal pyramid offers efficiency (low bit rate), variable resolution, device independence, error-tolerance, and extensibility.
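    The band-limited decomposition described above can be made concrete with a toy pyramid that splits an image into coarse-to-fine spatial-frequency bands. The sketch below is a generic Laplacian-style decomposition, not the coder proposed in the paper, and it omits the color, orientation, and temporal bands.

```python
# Illustrative spatial-frequency band decomposition; not the Perceptual Components coder.
import numpy as np

def blur(img):
    """Cheap low-pass filter: average each pixel with its four neighbours."""
    padded = np.pad(img, 1, mode="edge")
    return (padded[1:-1, 1:-1] + padded[:-2, 1:-1] + padded[2:, 1:-1]
            + padded[1:-1, :-2] + padded[1:-1, 2:]) / 5.0

def frequency_bands(img, levels=3):
    """Return [band_0, ..., band_{levels-1}, residual]; summing them restores the image."""
    bands, current = [], img.astype(float)
    for _ in range(levels):
        low = blur(current)
        bands.append(current - low)   # band-pass detail at this scale
        current = low
    bands.append(current)             # low-frequency residual
    return bands

image = np.random.rand(64, 64)
bands = frequency_bands(image)
print(np.allclose(sum(bands), image))  # True: the split is lossless before any quantization
```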

  11. The J-2X Upper Stage Engine: From Heritage to Hardware

    NASA Technical Reports Server (NTRS)

    Byrd, Thomas

    2008-01-01

    NASA's Global Exploration Strategy requires safe, reliable, robust, efficient transportation to support sustainable operations from Earth to orbit and into the far reaches of the solar system. NASA selected the Ares I crew launch vehicle and the Ares V cargo launch vehicle to provide that transportation. Guiding principles in creating the architecture represented by the Ares vehicles were the maximum use of heritage hardware and legacy knowledge, particularly Space Shuttle assets, and commonality between the Ares vehicles where possible to streamline the hardware development approach and reduce programmatic, technical, and budget risks. The J-2X exemplifies those goals. It was selected by the Exploration Systems Architecture Study (ESAS) as the upper stage propulsion for the Ares I Upper Stage and the Ares V Earth Departure Stage (EDS). The J-2X is an evolved version of the historic J-2 engine that successfully powered the second stage of the Saturn IB launch vehicle and the second and third stages of the Saturn V launch vehicle. The Constellation architecture, however, requires performance greater than that of its predecessor. The new architecture calls for larger payloads delivered to the Moon, demands more stringent loss-of-mission reliability, and imposes numerous other requirements associated with human rating that were not applied to the original J-2. As a result, the J-2X must operate at much higher temperatures, pressures, and flow rates than the heritage J-2, making it one of the highest-performing gas generator cycle engines ever built, approaching the efficiency of more complex staged combustion engines. Development is focused on early risk mitigation, component and subassembly test, and engine system test. The development plans include testing engine components, including the subscale injector, main igniter, powerpack assembly (turbopumps, gas generator, and associated ducting and structural mounts), full-scale gas generator, valves, and control software with hardware-in-the-loop. Testing expanded in 2007, accompanied by the refinement of the design through several key milestones. This paper discusses those 2007 tests and milestones, and provides an update on key developments in 2008.

  12. Comparative Analysis of Wolbachia Genomes Reveals Streamlining and Divergence of Minimalist Two-Component Systems

    PubMed Central

    Christensen, Steen; Serbus, Laura Renee

    2015-01-01

    Two-component regulatory systems are commonly used by bacteria to coordinate intracellular responses with environmental cues. These systems are composed of functional protein pairs consisting of a sensor histidine kinase and cognate response regulator. In contrast to the well-studied Caulobacter crescentus system, which carries dozens of these pairs, the streamlined bacterial endosymbiont Wolbachia pipientis encodes only two pairs: CckA/CtrA and PleC/PleD. Here, we used bioinformatic tools to compare characterized two-component system relays from C. crescentus, the related Anaplasmataceae species Anaplasma phagocytophilum and Ehrlichia chaffeensis, and 12 sequenced Wolbachia strains. We found the core protein pairs and a subset of interacting partners to be highly conserved within Wolbachia and these other Anaplasmataceae. Genes involved in two-component signaling were positioned differently within the various Wolbachia genomes, whereas the local context of each gene was conserved. Unlike Anaplasma and Ehrlichia, Wolbachia two-component genes were more consistently found clustered with metabolic genes. The domain architecture and key functional residues standard for two-component system proteins were well-conserved in Wolbachia, although residues that specify cognate pairing diverged substantially from other Anaplasmataceae. These findings indicate that Wolbachia two-component signaling pairs share considerable functional overlap with other α-proteobacterial systems, whereas their divergence suggests the potential for regulatory differences and cross-talk. PMID:25809075

  13. Study of Selected Components of Architectural Environment of Primary Schools - Preferences of Adults and Analysis of the Specialist Literature

    NASA Astrophysics Data System (ADS)

    Halarewicz, Aleksandra

    2017-10-01

    The school is one of the oldest social institutions designed to prepare a young person for adult life. It performs a teaching and educational function in a child's life. It is the place where, apart from home, the child spends most of the day, and it is therefore one of the most important institutions in the life of a young person. The school environment has a direct impact on the student's personality and ambition, and it shapes the attitudes of the young person. The design process preceding the establishment of school facilities therefore carries great responsibility and should be conducted in a conscious and thoughtful way. This article summarizes and attempts to synthesize the data obtained from a survey carried out by the author, set against the design guidelines contained in the specialist literature. The questionnaire survey was designed to determine adults' preferences, opinions and perceptions of selected components of the primary school environment, including the factors that determine the choice of a school for children and the priorities for architectural components intended for early childhood use; to specify the type and scale of existing drawbacks and problems in school construction; and to capture expectations about the contemporary architecture of primary schools and its future changes. Moreover, based on an analysis of the available specialist literature, the article broadly discusses the general division and characterization of school spaces and issues related to the influence of selected components of the architectural environment on the physical, mental and psychological safety of children. Furthermore, the author addresses the influence of architectural interiors and furniture on the mood, emotions and comfort of children of early school age, drawing on the anthropometric characteristics of children and on issues related to the perception of space, with particular attention to the perception of colours and the influence of architectural space components on the well-being and comfort of children in the studied age group. Although the body of literature on the subject is limited, and the results of the study show that the aspects of elementary school architecture relevant to adults, including parents, differ from those described in the literature, the analysis was necessary to expose these differences and to highlight the differing values and priorities of users and designers. The paper is also an introduction, identifying qualities and social preferences regarding educational architecture, to deeper research aimed at developing criteria for designing and shaping the architectural environment for primary school children that take into account the regularities and developmental needs of children at the studied age.

  14. Functional Performance of an Enabling Atmosphere Revitalization Subsystem Architecture for Deep Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Perry, Jay L.; Abney, Morgan B.; Frederick, Kenneth R.; Greenwood, Zachary W.; Kayatin, Matthew J.; Newton, Robert L.; Parrish, Keith J.; Roman, Monsi C.; Takada, Kevin C.; Miller, Lee A.

    2013-01-01

    A subsystem architecture derived from the International Space Station's (ISS) Atmosphere Revitalization Subsystem (ARS) has been functionally demonstrated. This ISS-derived architecture features re-arranged unit operations for trace contaminant control and carbon dioxide removal functions, a methane purification component as a precursor to enhance resource recovery over ISS capability, operational modifications to a water electrolysis-based oxygen generation assembly, and an alternative major atmospheric constituent monitoring concept. Results from this functional demonstration are summarized and compared to the performance observed during ground-based testing conducted on an ISS-like subsystem architecture. Considerations for further subsystem architecture and process technology development are discussed.

  15. An Architecture for Continuous Data Quality Monitoring in Medical Centers.

    PubMed

    Endler, Gregor; Schwab, Peter K; Wahl, Andreas M; Tenschert, Johannes; Lenz, Richard

    2015-01-01

    In the medical domain, data quality is very important. Since requirements and data change frequently, continuous and sustainable monitoring and improvement of data quality is necessary. Working together with managers of medical centers, we developed an architecture for a data quality monitoring system. The architecture enables domain experts to adapt the system during runtime to match their specifications using a built-in rule system. It also allows arbitrarily complex analyses to be integrated into the monitoring cycle. We evaluate our architecture by matching its components to the well-known data quality methodology TDQM.
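    A minimal sketch of the runtime-extensible rule idea, with hypothetical rule names; this is not the authors' system, only an illustration of how domain experts could register or replace checks while the monitor keeps running.

```python
# Hypothetical rule-based data quality monitor; rule names and records are invented.
from typing import Callable, Dict, List

Rule = Callable[[dict], bool]   # a rule returns True if the record passes

class QualityMonitor:
    def __init__(self):
        self.rules: Dict[str, Rule] = {}

    def add_rule(self, name: str, rule: Rule) -> None:
        """Rules can be added or replaced while the monitor is running."""
        self.rules[name] = rule

    def check(self, record: dict) -> List[str]:
        """Return the names of all rules this record violates."""
        return [name for name, rule in self.rules.items() if not rule(record)]

monitor = QualityMonitor()
monitor.add_rule("has_patient_id", lambda r: bool(r.get("patient_id")))
monitor.add_rule("plausible_age", lambda r: 0 <= r.get("age", -1) <= 120)

print(monitor.check({"patient_id": "P-17", "age": 42}))   # []
print(monitor.check({"age": 300}))                        # ['has_patient_id', 'plausible_age']
```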

  16. Microcomponent sheet architecture

    DOEpatents

    Wegeng, Robert S.; Drost, M. Kevin; McDonald, Carolyn E.

    1997-01-01

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation.

  17. S-layer and cytoplasmic membrane - exceptions from the typical archaeal cell wall with a focus on double membranes.

    PubMed

    Klingl, Andreas

    2014-01-01

    The common idea of typical cell wall architecture in archaea consists of a pseudo-crystalline proteinaceous surface layer (S-layer) situated upon the cytoplasmic membrane. This holds for the majority of archaea described hitherto. Within the crenarchaea, the S-layer often represents the only cell wall component, but there are various exceptions to this wall architecture. Besides (glycosylated) S-layers in (hyper)thermophilic cren- and euryarchaea as well as halophilic archaea, one can find a great variety of other cell wall structures, such as proteoglycan-like S-layers (Halobacteria), glutaminylglycan (Natronococci), methanochondroitin (Methanosarcina), or double-layered cell walls with pseudomurein (Methanothermus and Methanopyrus). The presence of an outermost cellular membrane in the crenarchaeal species Ignicoccus hospitalis already gave indications of an outer membrane similar to that of Gram-negative bacteria. Although only limited data are available concerning their biochemistry and ultrastructure, recent studies on the euryarchaeal methanogen Methanomassiliicoccus luminyensis, cells of the ARMAN group, and the SM1 euryarchaeon provided further examples of this exceptional cell envelope type consisting of two membranes.

  18. Advanced texture filtering: a versatile framework for reconstructing multi-dimensional image data on heterogeneous architectures

    NASA Astrophysics Data System (ADS)

    Zellmann, Stefan; Percan, Yvonne; Lang, Ulrich

    2015-01-01

    Reconstruction of 2-d image primitives or of 3-d volumetric primitives is one of the most common operations performed by the rendering components of modern visualization systems. Because this operation is often aided by GPUs, reconstruction is typically restricted to first-order interpolation. With the advent of in situ visualization, the assumption that rendering algorithms are in general executed on GPUs is however no longer adequate. We thus propose a framework that provides versatile texture filtering capabilities: up to third-order reconstruction using various types of cubic filtering and interpolation primitives; cache-optimized algorithms that integrate seamlessly with GPGPU rendering or with software rendering that was optimized for cache-friendly "Structure of Array" (SoA) access patterns; a memory management layer (MML) that gracefully hides the complexities of extra data copies necessary for memory access optimizations such as swizzling, for rendering on GPGPUs, or for reconstruction schemes that rely on pre-filtered data arrays. We prove the effectiveness of our software architecture by integrating it into and validating it using the open source direct volume rendering (DVR) software DeskVOX.
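    To make the reconstruction orders concrete, the sketch below contrasts first-order (linear) with third-order (Catmull-Rom) reconstruction in one dimension; it illustrates the general technique only and is not code from the framework, which generalizes this to 2-D/3-D textures, pre-filtered variants, and cache-optimized layouts.

```python
# 1-D comparison of first-order and third-order reconstruction (illustrative only).
def linear(samples, x):
    i = int(x)
    t = x - i
    return (1 - t) * samples[i] + t * samples[i + 1]

def catmull_rom(samples, x):
    """Cubic reconstruction using the four samples surrounding x."""
    i = int(x)
    t = x - i
    p0, p1, p2, p3 = samples[i - 1], samples[i], samples[i + 1], samples[i + 2]
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t)

samples = [1.0, 0.0, 1.0, 4.0, 9.0, 16.0]   # (x - 1)**2 sampled at integer x = 0..5
print(linear(samples, 2.5))        # 2.5  : piecewise-linear estimate
print(catmull_rom(samples, 2.5))   # 2.25 : cubic estimate, matches the smooth curve exactly
```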

  19. ITS system specification. Appendix C, data flows by function for ITS services

    DOT National Transportation Integrated Search

    1997-01-01

    The objective of the Polaris Project is to define an Intelligent Transportation Systems (ITS) architecture for the state of Minnesota. An architecture is a framework that defines how multiple ITS Components interrelate and contribute to the overall I...

  20. Freight Advanced Traveler Information System (FRATIS) Dallas-Fort Worth : software architecture design and implementation options.

    DOT National Transportation Integrated Search

    2013-05-01

    This document describes the Software Architecture Design and Implementation Options for FRATIS system. The demonstration component of this task will serve to test the technical feasibility of the FRATIS prototype while also facilitating the collectio...

  1. Modeling and Simulation Roadmap to Enhance Electrical Energy Security of U.S. Naval Bases

    DTIC Science & Technology

    2012-03-01

    evaluating power system architectures and technologies and, therefore, can become a valuable tool for the implementation of the described plan for Navy...a well validated and consistent process for evaluating power system architectures and technologies and, therefore, can be a valuable tool for the...process for evaluating power system architectures and component technologies is needed to support the development and implementation of these new

  2. System architecture and operational analysis of medium displacement unmanned surface vehicle sea hunter as a surface warfare component of distributed lethality

    DTIC Science & Technology

    2017-06-01

    students in a war-gaming class, and working in tandem with a NPS distance...surface mode ability provides a threat suppression method against small craft attacks and boarding attempts. b. Vulnerability As a sea-going surface...Design Architecture With a proposed CONOPS established, the physical architecture can proceed to a more detailed design. For the purpose of

  3. Innovative on board payload optical architecture for high throughput satellites

    NASA Astrophysics Data System (ADS)

    Baudet, D.; Braux, B.; Prieur, O.; Hughes, R.; Wilkinson, M.; Latunde-Dada, K.; Jahns, J.; Lohmann, U.; Fey, D.; Karafolas, N.

    2017-11-01

    For the next generation of High Throughput (HTP) telecommunications satellites, space end users' needs will result in higher link speeds and an increase in the number of channels, up to 512 channels running at 10 Gbit/s. By keeping electrical interconnections based on copper, the constraints in terms of power dissipation, number of electrical wires and signal integrity will become too demanding. Replacing the electrical links by optical links is the best-adapted solution, as it provides high-speed links with low power consumption and no EMC/EMI. But replacing all electrical links of an On Board Payload (OBP) by optical links is challenging. It is not simply a matter of replacing electrical components with optical ones; rather, the whole concept and architecture have to be rethought to achieve a highly reliable and high-performance optical solution. In this context, this paper presents the concept of an innovative OBP optical architecture. The optical architecture was defined to meet the critical requirements of the application: signal speed, number of channels, space reliability, power dissipation, optical signal crossings and component availability. The resulting architecture is challenging and the need for new developments is highlighted. But this innovative optically interconnected architecture will substantially outperform standard electrical ones.

  4. The Ozone Widget Framework: towards modularity of C2 human interfaces

    NASA Astrophysics Data System (ADS)

    Hellar, David Benjamin; Vega, Laurian C.

    2012-05-01

    The Ozone Widget Framework (OWF) is a common webtop environment for distribution across the enterprise. A key mission driver for OWF is to enable rapid capability delivery by lowering time-to-market with lightweight components. OWF has been released as Government Open Source Software and has been deployed in a variety of C2 net-centric contexts ranging from real-time analytics and cyber-situational awareness to strategic and operational planning. This paper discusses the current and future evolution of OWF, including the availability of the OZONE Marketplace (OMP), user-activity-driven metrics, and architecture enhancements for accessibility. Together, these move OWF towards the rapid delivery of modular human interfaces supporting modern and future command and control contexts.

  5. Performance comparison of optical interference cancellation system architectures.

    PubMed

    Lu, Maddie; Chang, Matt; Deng, Yanhua; Prucnal, Paul R

    2013-04-10

    Three optics-based interference cancellation systems are compared and contrasted with one another, and with traditional electronic techniques for interference cancellation. The comparison is based on a set of common performance metrics that we have developed for this purpose. It is shown that a thorough evaluation of our optical approaches takes into account the traditional notions of depth of cancellation and dynamic range, along with notions of link loss and uniformity of cancellation. Our evaluation shows that our use of optical components affords performance that surpasses traditional electronic approaches, and that the optimal choice for an optical interference canceller requires taking into account the performance metrics discussed in this paper.

  6. Hybrid massively parallel fast sweeping method for static Hamilton-Jacobi equations

    NASA Astrophysics Data System (ADS)

    Detrixhe, Miles; Gibou, Frédéric

    2016-10-01

    The fast sweeping method is a popular algorithm for solving a variety of static Hamilton-Jacobi equations. Fast sweeping algorithms for parallel computing have been developed, but are severely limited. In this work, we present a multilevel, hybrid parallel algorithm that combines the desirable traits of two distinct parallel methods. The fine- and coarse-grained components of the algorithm take advantage of the heterogeneous computer architectures common in high-performance computing facilities. We present the algorithm and demonstrate its effectiveness on a set of example problems including optimal control, dynamic games, and seismic wave propagation. We give results for convergence and parallel scaling, and show state-of-the-art speedup values for the fast sweeping method.
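    For readers unfamiliar with the underlying algorithm, the sketch below is a minimal serial fast sweeping solver for the 2-D eikonal equation |∇u| = 1 with a point source, a representative static Hamilton-Jacobi problem; the paper's actual contribution, the hybrid fine/coarse-grained parallel decomposition, is not shown.

```python
# Minimal serial fast sweeping method for |grad u| = 1 with u = 0 at a source point.
import numpy as np

def fast_sweep_eikonal(n, source, h=1.0, sweeps=4):
    u = np.full((n, n), np.inf)
    u[source] = 0.0
    orderings = [(range(n), range(n)),
                 (range(n - 1, -1, -1), range(n)),
                 (range(n), range(n - 1, -1, -1)),
                 (range(n - 1, -1, -1), range(n - 1, -1, -1))]
    for _ in range(sweeps):
        for rows, cols in orderings:               # Gauss-Seidel sweeps in four orderings
            for i in rows:
                for j in cols:
                    a = min(u[i - 1, j] if i > 0 else np.inf,
                            u[i + 1, j] if i < n - 1 else np.inf)
                    b = min(u[i, j - 1] if j > 0 else np.inf,
                            u[i, j + 1] if j < n - 1 else np.inf)
                    if np.isinf(a) and np.isinf(b):
                        continue                   # no upwind information yet
                    if abs(a - b) >= h:            # update from one direction only
                        new = min(a, b) + h
                    else:                          # two-directional quadratic update
                        new = (a + b + np.sqrt(2 * h * h - (a - b) ** 2)) / 2
                    u[i, j] = min(u[i, j], new)
    return u

u = fast_sweep_eikonal(50, source=(25, 25))
print(u[25, 45])   # 20.0: grid distance from the source along an axis
```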

  7. Evaluation of Human and Automation/Robotics Integration Needs for Future Human Exploration Missions

    NASA Technical Reports Server (NTRS)

    Marquez, Jessica J.; Adelstein, Bernard D.; Ellis, Stephen; Chang, Mai Lee; Howard, Robert

    2016-01-01

    NASA employs Design Reference Missions (DRMs) to define potential architectures for future human exploration missions to deep space, the Moon, and Mars. While DRMs to these destinations share some components, each mission has different needs. This paper focuses on the human and automation/robotic integration needs for these future missions, evaluating them with respect to NASA research gaps in the area of space human factors engineering. The outcomes of our assessment are a human and automation/robotic (HAR) task list for each of the four DRMs that we reviewed (i.e., Deep Space Sortie, Lunar Visit/Habitation, Deep Space Habitation, and Planetary) and a list of common critical HAR factors that drive HAR design.

  8. Fast data transmission in dynamic data acquisition system for plasma diagnostics

    NASA Astrophysics Data System (ADS)

    Byszuk, Adrian; Poźniak, Krzysztof; Zabołotny, Wojciech M.; Kasprowicz, Grzegorz; Wojeński, Andrzej; Cieszewski, Radosław; Juszczyk, Bartłomiej; Kolasiński, Piotr; Zienkiewicz, Paweł; Chernyshova, Maryna; Czarski, Tomasz

    2014-11-01

    This paper describes the architecture of a new data acquisition system (DAQ) targeted mainly at plasma diagnostic experiments. Modular architecture, in combination with selected hardware components, allows for straightforward reconfiguration of the whole system, both offline and online. The main emphasis is placed on the implementation of the data transmission subsystem in this system. One of the biggest advantages of the described system is its modular architecture with well-defined boundaries between the main components: analog frontend (AFE), digital backplane, and acquisition/control software. The use of FPGA chips allows for high flexibility in the design of analog frontends, including the ADC <--> FPGA interface. Data transmission between backplane boards and user software was accomplished with the use of industry-standard PCI Express (PCIe) technology. The PCIe implementation includes both FPGA firmware and a Linux device driver. High flexibility of PCIe connections was achieved through the use of a configurable PCIe switch. Wherever possible, the described DAQ system makes use of standard off-the-shelf (OTS) components, including a typical x86 CPU and motherboard (acting as the PCIe controller) and cabling.

  9. An adaptable product for material processing and life science missions

    NASA Technical Reports Server (NTRS)

    Wassick, Gregory; Dobbs, Michael

    1995-01-01

    The Experiment Control System II (ECS-II) is designed to make available to the microgravity research community the same tools and mode of automated experimentation that their ground-based counterparts have enjoyed for the last two decades. The design goal was accomplished by combining commercial automation tools familiar to the experimenter community with system control components that interface with the on-orbit platform in a distributed architecture. The architecture insulates the tools necessary for managing a payload. By using commercial software and hardware components whenever possible, development costs were greatly reduced when compared to traditional space development projects. Using commercial-off-the-shelf (COTS) components also improved the usability of the system by providing familiar user interfaces, provided a wealth of readily available documentation, and reduced the need for training on system-specific details. The modularity of the distributed architecture makes it very amenable to modification for different on-orbit experiments requiring robotics-based automation.

  10. Developing Historic Building Information Modelling Guidelines and Procedures for Architectural Heritage in Ireland

    NASA Astrophysics Data System (ADS)

    Murphy, M.; Corns, A.; Cahill, J.; Eliashvili, K.; Chenau, A.; Pybus, C.; Shaw, R.; Devlin, G.; Deevy, A.; Truong-Hong, L.

    2017-08-01

    Cultural heritage researchers have recently begun applying Building Information Modelling (BIM) to historic buildings. The model comprises intelligent objects with semantic attributes which represent the elements of a building structure and are organised within a 3D virtual environment. Case studies in Ireland are used to test and develop suitable systems for (a) data capture/digital surveying/processing, (b) developing a library of architectural components, and (c) mapping these architectural components onto the laser scan or digital survey to relate the intelligent virtual representation of a historic structure (HBIM). While BIM platforms have the potential to create a virtual and intelligent representation of a building, their full exploitation and use is restricted to a narrow set of expert users with access to costly hardware, software and skills. The testing of open BIM approaches, in particular IFCs, and the use of game engine platforms are fundamental components for developing much wider dissemination. The semantically enriched model can be transferred into a web-based game engine platform.

  11. NDARC-NASA Design and Analysis of Rotorcraft Theoretical Basis and Architecture

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2010-01-01

    The theoretical basis and architecture of the conceptual design tool NDARC (NASA Design and Analysis of Rotorcraft) are described. The principal tasks of NDARC are to design (or size) a rotorcraft to satisfy specified design conditions and missions, and then analyze the performance of the aircraft for a set of off-design missions and point operating conditions. The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated. The aircraft attributes are obtained from the sum of the component attributes. NDARC provides a capability to model general rotorcraft configurations, and estimate the performance and attributes of advanced rotor concepts. The software has been implemented with low-fidelity models, typical of the conceptual design environment. Incorporation of higher-fidelity models will be possible, as the architecture of the code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis and optimization.
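    The component-summation idea (aircraft attributes obtained as the sum of component attributes) can be illustrated with a toy sketch; the component names and numbers below are hypothetical and do not reflect NDARC's actual models.

```python
# Toy illustration of summing component attributes; values are made up, not NDARC data.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    weight_kg: float
    flat_plate_drag_m2: float   # equivalent flat-plate drag area

class Rotorcraft:
    def __init__(self, components):
        self.components = components

    def weight(self):
        return sum(c.weight_kg for c in self.components)

    def drag_area(self):
        return sum(c.flat_plate_drag_m2 for c in self.components)

aircraft = Rotorcraft([
    Component("fuselage", 1200.0, 0.85),
    Component("main rotor", 950.0, 0.40),
    Component("empennage", 180.0, 0.12),
])
print(aircraft.weight(), aircraft.drag_area())   # 2330.0 1.37
```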

  12. Onboard data-processing architecture of the soft X-ray imager (SXI) on NeXT satellite

    NASA Astrophysics Data System (ADS)

    Ozaki, Masanobu; Dotani, Tadayasu; Tsunemi, Hiroshi; Hayashida, Kiyoshi; Tsuru, Takeshi G.

    2004-09-01

    NeXT is the X-ray satellite proposed for the next Japanese space science mission. While the satellite's total mass and the launch vehicle are similar to those of the prior satellite Astro-E2, the sensitivity is much improved; this requires all the components to be lighter and faster than in the previous architecture. This paper describes the data processing architecture of the X-ray CCD camera system SXI (Soft X-ray Imager), which forms the top half of the WXI (Wide-band X-ray Imager), whose sensitivity spans 0.2-80 keV. The system is basically a variation of the Astro-E2 XIS, but its event extraction is much faster, to fulfill the requirements arising from the large effective area and fast exposure period. At the same time, the data transfer lines between components are redesigned in order to reduce the number and mass of the wire harnesses that limit the flexibility of the component distribution.

  13. GITEWS, an extensible and open integration platform for manifold sensor systems and processing components based on Sensor Web Enablement and the principles of Service Oriented Architectures

    NASA Astrophysics Data System (ADS)

    Haener, Rainer; Waechter, Joachim; Fleischer, Jens; Herrnkind, Stefan; Schwarting, Herrmann

    2010-05-01

    The German Indonesian Tsunami Early Warning System (GITEWS) is a multifaceted system consisting of various sensor types like seismometers, sea level sensors or GPS stations, and processing components, all with their own system behavior and proprietary data structure. To operate a warning chain, beginning from measurements scaling up to warning products, all components have to interact in a correct way, both syntactically and semantically. Designing the system great emphasis was laid on conformity to the Sensor Web Enablement (SWE) specification by the Open Geospatial Consortium (OGC). The technical infrastructure, the so called Tsunami Service Bus (TSB) follows the blueprint of Service Oriented Architectures (SOA). The TSB is an integration concept (SWE) where functionality (observe, task, notify, alert, and process) is grouped around business processes (Monitoring, Decision Support, Sensor Management) and packaged as interoperable services (SAS, SOS, SPS, WNS). The benefits of using a flexible architecture together with SWE lead to an open integration platform: • accessing and controlling heterogeneous sensors in a uniform way (Functional Integration) • assigns functionality to distinct services (Separation of Concerns) • allows resilient relationship between systems (Loose Coupling) • integrates services so that they can be accessed from everywhere (Location Transparency) • enables infrastructures which integrate heterogeneous applications (Encapsulation) • allows combination of services (Orchestration) and data exchange within business processes Warning systems will evolve over time: New sensor types might be added, old sensors will be replaced and processing components will be improved. From a collection of few basic services it shall be possible to compose more complex functionality essential for specific warning systems. Given these requirements a flexible infrastructure is a prerequisite for sustainable systems and their architecture must be tailored for evolution. The use of well-known techniques and widely used open source software implementing industrial standards reduces the impact of service modifications allowing the evolution of a system as a whole. GITEWS implemented a solution to feed sensor raw data from any (remote) system into the infrastructure. Specific dispatchers enable plugging in sensor-type specific processing without changing the architecture. Client components don't need to be adjusted if new sensor-types or individuals are added to the system, because they access them via standardized services. One of the outstanding features of service-oriented architectures is the possibility to compose new services from existing ones. The so called orchestration, allows the definition of new warning processes which can be adapted easily to new requirements. This approach has following advantages: • With implementing SWE it is possible to establish the "detection" and integration of sensors via the internet. Thus a system of systems combining early warning functionality at different levels of detail is feasible. • Any institution could add both its own components as well as components from third parties if they are developed in conformance to SOA principles. In a federation an institution keeps the ownership of its data and decides which data are provided by a service and when. • A system can be deployed at minor costs as a core for own development at any institution and thus enabling autonomous early warning- or monitoring systems. 
The presentation covers both design and various instantiations (live demonstration) of the GITEWS architecture. Experiences concerning the design and complexity of SWE will be addressed in detail. A substantial amount of attention is laid on the techniques and methods of extending the architecture, adapting proprietary components to SWE services and encoding, and their orchestration in high level workflows and processes. Furthermore the potential of the architecture concerning adaptive behavior, collaboration across boundaries and semantic interoperability will be addressed.
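    As an illustration of the SWE interfaces named above, the sketch below assembles a key-value-pair GetObservation request for an OGC Sensor Observation Service (SOS); the endpoint, offering, and observed property are placeholders, not GITEWS identifiers.

```python
# Sketch of an SOS 1.0.0 key-value-pair GetObservation request; values are placeholders.
from urllib.parse import urlencode

endpoint = "https://example.org/sos"          # placeholder service endpoint
params = {
    "service": "SOS",
    "version": "1.0.0",
    "request": "GetObservation",
    "offering": "SEA_LEVEL_STATIONS",         # hypothetical offering name
    "observedProperty": "urn:ogc:def:property:sea_level",
    "responseFormat": 'text/xml;subtype="om/1.0.0"',
}
print(endpoint + "?" + urlencode(params))
# A client written against the standard interface works the same way regardless of
# which sensor type sits behind the service, which is the interoperability point above.
```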

  14. EPA's Information Architecture and Web Taxonomy

    EPA Pesticide Factsheets

    EPA's Information Architecture creates a topical organization of our website, instead of an ownership-based organization. The EPA Web Taxonomy allows audiences easy access to relevant information from EPA programs, by using a common vocabulary.

  15. Issues in Defining Software Architectures in a GIS Environment

    NASA Technical Reports Server (NTRS)

    Acosta, Jesus; Alvorado, Lori

    1997-01-01

    The primary mission of the Pan-American Center for Earth and Environmental Studies (PACES) is to advance the research areas that are relevant to NASA's Mission to Planet Earth program. One of the activities at PACES is the establishment of a repository for geographical, geological and environmental information that covers various regions of Mexico and the southwest region of the U.S. and that is acquired from NASA and other sources through remote sensing, ground studies or paper-based maps. The center will be providing access of this information to other government entities in the U.S. and Mexico, and research groups from universities, national laboratories and industry. Geographical Information Systems(GIS) provide the means to manage, manipulate, analyze and display geographically referenced information that will be managed by PACES. Excellent off-the-shelf software exists for a complete GIS as well as software for storing and managing spatial databases, processing images, networking and viewing maps with layered information. This allows the user flexibility in combining systems to create a GIS or to mix these software packages with custom-built application programs. Software architectural languages provide the ability to specify the computational components and interactions among these components, an important topic in the domain of GIS because of the need to integrate numerous software packages. This paper discusses the characteristics that architectural languages address with respect to the issues relating to the data that must be communicated between software systems and components when systems interact. The paper presents a background on GIS in section 2. Section 3 gives an overview of software architecture and architectural languages. Section 4 suggests issues that may be of concern when defining the software architecture of a GIS. The last section discusses the future research effort and finishes with a summary.

  16. Flexible distributed architecture for semiconductor process control and experimentation

    NASA Astrophysics Data System (ADS)

    Gower, Aaron E.; Boning, Duane S.; McIlrath, Michael B.

    1997-01-01

    Semiconductor fabrication requires an increasingly expensive and integrated set of tightly controlled processes, driving the need for a fabrication facility with fully computerized, networked processing equipment. We describe an integrated, open system architecture enabling distributed experimentation and process control for plasma etching. The system was developed at MIT's Microsystems Technology Laboratories and employs in-situ CCD interferometry based analysis in the sensor-feedback control of an Applied Materials Precision 5000 Plasma Etcher (AME5000). Our system supports accelerated, advanced research involving feedback control algorithms, and includes a distributed interface that utilizes the internet to make these fabrication capabilities available to remote users. The system architecture is both distributed and modular: the specific implementation of any one task does not restrict the implementation of another. The low-level architectural components include a host controller that communicates with the AME5000 equipment via SECS-II, and a host controller for the acquisition and analysis of the CCD sensor images. A cell controller (CC) manages communications between these equipment and sensor controllers. The CC is also responsible for process control decisions; algorithmic controllers may be integrated locally or via remote communications. Finally, a system server handles connections from internet/intranet (web) based clients and uses a direct link with the CC to access the system. Each component communicates via a predefined set of TCP/IP socket based messages. This flexible architecture makes integration easier and more robust, and enables separate software components to run on the same or different computers independent of hardware or software platform.
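    A minimal sketch of the predefined-message idea: two components exchanging a small line-delimited request and reply over a TCP socket. The message vocabulary here is invented and far simpler than SECS-II or the system's actual message set.

```python
# Toy cell-controller/equipment-controller exchange over TCP; message names are invented.
import socket
import threading

HOST, PORT = "127.0.0.1", 5050
server = socket.create_server((HOST, PORT))       # bind and listen before the client connects

def equipment_controller(srv):
    """Stands in for an equipment host controller: answers one status query."""
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024).decode().strip()
        if request == "GET_STATUS":
            conn.sendall(b"STATUS idle chamber_pressure=12.5mTorr\n")

threading.Thread(target=equipment_controller, args=(server,), daemon=True).start()

# Cell-controller side: open a connection and issue one predefined message.
with socket.create_connection((HOST, PORT)) as cc:
    cc.sendall(b"GET_STATUS\n")
    print(cc.recv(1024).decode().strip())
server.close()
```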

  17. The genetic architecture of resistance to virus infection in Drosophila.

    PubMed

    Cogni, Rodrigo; Cao, Chuan; Day, Jonathan P; Bridson, Calum; Jiggins, Francis M

    2016-10-01

    Variation in susceptibility to infection has a substantial genetic component in natural populations, and it has been argued that selection by pathogens may result in it having a simpler genetic architecture than many other quantitative traits. This is important as models of host-pathogen co-evolution typically assume resistance is controlled by a small number of genes. Using the Drosophila melanogaster multiparent advanced intercross, we investigated the genetic architecture of resistance to two naturally occurring viruses, the sigma virus and DCV (Drosophila C virus). We found extensive genetic variation in resistance to both viruses. For DCV resistance, this variation is largely caused by two major-effect loci. Sigma virus resistance involves more genes - we mapped five loci, and together these explained less than half the genetic variance. Nonetheless, several of these had a large effect on resistance. Models of co-evolution typically assume strong epistatic interactions between polymorphisms controlling resistance, but we were only able to detect one locus that altered the effect of the main effect loci we had mapped. Most of the loci we mapped were probably at an intermediate frequency in natural populations. Overall, our results are consistent with major-effect genes commonly affecting susceptibility to infectious diseases, with DCV resistance being a near-Mendelian trait. © 2016 The Authors. Molecular Ecology Published by John Wiley & Sons Ltd.

  18. A study of the selection of microcomputer architectures to automate planetary spacecraft power systems

    NASA Technical Reports Server (NTRS)

    Nauda, A.

    1982-01-01

    Performance and reliability models of alternate microcomputer architectures as a methodology for optimizing system design were examined. A methodology for selecting an optimum microcomputer architecture for autonomous operation of planetary spacecraft power systems was developed. Various microcomputer system architectures are analyzed to determine their application to spacecraft power systems. It is suggested that no standardization formula or common set of guidelines exists which provides an optimum configuration for a given set of specifications.

  19. User-driven generation of standard data services

    NASA Astrophysics Data System (ADS)

    Díaz, Laura; Granell, Carlos; Gould, Michael; Huerta, Joaquín.

    2010-05-01

    Geospatial information systems are experiencing the shift from monolithic to distributed environments (Bernard, 2003). Current research trends for the discovery and access of geospatial resources in these distributed environments are being addressed by the deployment of interconnected Spatial Data Infrastructure (SDI) nodes at different scales to build a global spatial information infrastructure (Masser et al., 2008; Rajabifard et al., 2002). One of the challenges in implementing these global and multiscale SDIs is to agree on common standards in view of the heterogeneity of the various stakeholders (Masser, 2005). In Europe, the European Commission took the INSPIRE initiative to monitor the development of European SDIs. The INSPIRE Directive addresses the need for web services to discover, view, transform, invoke, and download geospatial resources, which enable various stakeholders to share resources in an interoperable manner (INSPIRE, 2007). Such web services require technical specifications for the interoperability and harmonization of their SDIs (INSPIRE, 2007). Moreover, interoperability is ensured by a number of specification efforts, in the geo domain most prominently by ISO/TC 211 and the OpenGIS Consortium (OGC) (Bernard, 2003). Other research challenges regarding SDIs are, on the one hand, how the users in charge of maintaining SDIs can handle complexity as the SDIs grow, and, on the other hand, the fact that SDI maintenance and evolution should be guided (Béjar et al., 2009). There is therefore a motivation to improve the complex deployment mechanisms in SDIs, since deploying resources and integrating them by means of standard services requires expertise and time. In this context we present an architecture following the INSPIRE technical guidelines and therefore based on SDI principles. The architecture supports distributed applications and provides components to assist users in deploying and updating SDI resources. Mechanisms and components for the automatic generation and publication of standard geospatial services are therefore proposed. These mechanisms hide the underlying technology and let stakeholders wrap resources as standard services so that the resources can be shared in a transparent manner. These components are integrated in our architecture within the Service Framework node (module). Figure 1 (architecture components diagram) shows the components of the architecture. The Application Node provides the entry point for users to run distributed applications; this software component contains the user interface and the application logic. The Service Connector component provides the ability to connect to the services available in the middleware layer of the SDI. This node acts as a socket to OGC Web Services; for instance, the WMS component implements the OGC WMS specification, the standard recommended by the INSPIRE implementing rules as the View Service type. The Service Framework node contains several components. The Service Framework's main functionality is to assist users in wrapping and sharing geospatial resources. It implements the proposed mechanisms to improve the availability and visibility of geospatial resources. The main components of this framework are the Data Wrapper, the Process Wrapper and the Service Publisher. The Data Wrapper and Process Wrapper components guide users in wrapping data and tools as standard services in accordance with the INSPIRE implementing rules (availability).
The Service Publisher component aims at creating service metadata and publishing them in catalogues (visibility). Roughly speaking, all of these components are concerned with the idea of acting as a service generator and publisher, i.e., they get a resource (data or process) and return an INSPIRE service that will be published in catalogue services. References Béjar, R., Latre, M. Á., Nogueras-Iso, J., Muro-Medrano, P. R., Zarazaga-Soria, F. J. 2009. International Journal of Geographical Information Science, 23(3), 271-294. Bernard, L, U Einspanier, M Lutz & C Portele. Interoperability in GI Service Chains The Way Forward. In: M. Gould, R. Laurini & S. Coulondre (Eds.). 6th AGILE Conference on Geographic Information Science 2003, Lyon: 179-188. INSPIRE. Directive 2007/2/EC of the European Parliament and of the Council of 14 March 2007 establishing an Infrastructure for Spatial Information in the European Community. (2007) Masser, I. GIS Worlds: Creating Spatial Data Infrastructures. Redlands, California. ESRI Press. (2005) Masser, I., Rajabifard, A., Williamson, I. 2008. Spatially enabling governments through SDI implementation. International Journal of Geographical Information Science. Vol. 22, No. 1, (2008) 5-20 Rajabifard, A., Feeney, M-E. F., Williamson, I. P. 2002. Future directions for SDI development. International Journal of Applied Earth Observation and Geoinformation 4 (2002) 11-22

  20. Freight Advanced Traveler Information System (FRATIS) – Dallas-Fort Worth : as-built system architecture and design.

    DOT National Transportation Integrated Search

    2015-03-01

    This document describes the As-Built System Architecture and Design for the FRATIS Dallas-Fort Worth DFW prototype system. The FRATIS prototype in DFW consisted of the following components: optimization algorithm, terminal wait time, route specific n...

  1. Los Angeles-Gateway Freight Advanced Traveler Information System : final system design and architecture for FRATIS prototype.

    DOT National Transportation Integrated Search

    2013-05-01

    This Final Architecture and Design report has been prepared to describe the structure and design of all the system components for the LA-Gateway FRATIS Demonstration Project. More specifically, this document provides: Detailed descriptions of the...

  2. A component-based, distributed object services architecture for a clinical workstation.

    PubMed

    Chueh, H C; Raila, W F; Pappas, J J; Ford, M; Zatsman, P; Tu, J; Barnett, G O

    1996-01-01

    Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems as well as newly designed software. We describe one approach to an architecture for a clinical workstation application which is based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components which can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces.
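    A toy sketch of the middle-tier idea: user-interface components program against a service interface so that CORBA, OLE, or web-based implementations can be substituted behind it. The service and method names below are invented for illustration and are not the paper's interfaces.

```python
# Illustrative service interface with a swappable implementation; names are invented.
from abc import ABC, abstractmethod

class VocabularyService(ABC):
    @abstractmethod
    def lookup(self, term: str) -> list[str]: ...

class InMemoryVocabularyService(VocabularyService):
    """Stand-in implementation; a CORBA or web-service proxy would fit the same interface."""
    def __init__(self, concepts):
        self.concepts = concepts

    def lookup(self, term):
        return [c for c in self.concepts if term.lower() in c.lower()]

def problem_picker(vocab: VocabularyService, query: str):
    """A UI component sees only the interface, never the transport behind it."""
    return vocab.lookup(query)

service = InMemoryVocabularyService(["Diabetes mellitus", "Diabetic retinopathy", "Hypertension"])
print(problem_picker(service, "diabet"))   # ['Diabetes mellitus', 'Diabetic retinopathy']
```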

  3. A component-based, distributed object services architecture for a clinical workstation.

    PubMed Central

    Chueh, H. C.; Raila, W. F.; Pappas, J. J.; Ford, M.; Zatsman, P.; Tu, J.; Barnett, G. O.

    1996-01-01

    Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems as well as newly designed software. We describe one approach to an architecture for a clinical workstation application which is based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components which can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces. PMID:8947744

  4. Control System Architectures, Technologies and Concepts for Near Term and Future Human Exploration of Space

    NASA Technical Reports Server (NTRS)

    Boulanger, Richard; Overland, David

    2004-01-01

    Technologies that facilitate the design and control of complex, hybrid, and resource-constrained systems are examined. This paper focuses on design methodologies and system architectures, not on specific control methods that may be applied to life support subsystems. Honeywell and Boeing have estimated that 60-80% of the effort in developing complex control systems is software development, and only 20-40% is control system development. It has also been shown that large software projects have failure rates as high as 50-65%. Concepts discussed include the Unified Modeling Language (UML) and design patterns, with the goal of creating a self-improving, self-documenting system design process. Successful architectures for control must not only facilitate hardware-to-software integration, but must also reconcile continuously changing software with much less frequently changing hardware. These architectures rely on software modules or components to facilitate change. Architecting such systems for change leverages the interfaces between these modules or components.

  5. Assessment of modularity architecture for recovery process of electric vehicle in supporting sustainable design

    NASA Astrophysics Data System (ADS)

    Baroroh, D. K.; Alfiah, D.

    2018-05-01

    The electric vehicle is one innovation for reducing vehicle pollution. Nevertheless, it still presents a problem, especially at the disposal stage. In support of a product design and development strategy built on the idea of sustainable design, and to address the disposal-stage problem, an assessment of the modularity architecture of an electric vehicle for the recovery process needs to be carried out. This research used the Design Structure Matrix (DSM) approach to determine the interactions among components and assessed the modularity architecture by calculating three variables: Module Independence (MI), Module Similarity (MS), and Modularity for End of Life Stage (MEOL). The results show that the existing electric vehicle design has an architecture with a high modularity value for the recovery process at the disposal stage. Accordingly, the product can be reused and recycled at the component or module level without a full disassembly process, supporting environmentally friendly (sustainable) design and reducing disassembly cost.
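    As a toy illustration of working with a DSM, the sketch below computes a simple module-independence ratio (the share of interactions kept inside modules); the matrix, module assignment, and metric are illustrative and are not the paper's MI, MS, or MEOL definitions.

```python
# Toy DSM modularity check; components, modules, and the ratio are invented for illustration.
import numpy as np

# Four components; dsm[i, j] = 1 means component i interacts with component j.
dsm = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
])
modules = [[0, 1, 2], [3]]   # hypothetical module assignment

internal = sum(dsm[np.ix_(m, m)].sum() for m in modules)
total = dsm.sum()
print(internal / total)      # 0.75: three of every four interactions stay inside a module
```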

  6. PICNIC Architecture.

    PubMed

    Saranummi, Niilo

    2005-01-01

    The PICNIC architecture aims at supporting inter-enterprise integration and the facilitation of collaboration between healthcare organisations. The concept of a Regional Health Economy (RHE) is introduced to illustrate the varying nature of inter-enterprise collaboration between healthcare organisations collaborating in providing health services to citizens and patients in a regional setting. The PICNIC architecture comprises a number of PICNIC IT Services and the interfaces between them, and presents a way to assemble these into a functioning Regional Health Care Network (RHCN) meeting the needs and concerns of its stakeholders. The PICNIC architecture is presented through a number of views relevant to different stakeholder groups. The stakeholders of the first view are national and regional health authorities and policy makers. The view describes how the architecture enables the implementation of national and regional health policies, strategies and organisational structures. The stakeholders of the second view, the service viewpoint, are the care providers, health professionals, patients and citizens. The view describes how the architecture supports and enables regional care delivery and process management, including continuity of care (shared care) and citizen-centred health services. The stakeholders of the third view, the engineering view, are those who design, build and implement the RHCN. The view comprises four sub-views: software engineering, IT services engineering, security and data. The proposed architecture is grounded in the mainstream of how distributed computing environments are evolving. The architecture is realised using the web services approach. A number of well-established technology platforms and generic standards exist that can be used to implement the software components. The software components specified in PICNIC are implemented as open source.

  7. Implications of Responsive Space on the Flight Software Architecture

    NASA Technical Reports Server (NTRS)

    Wilmot, Jonathan

    2006-01-01

    The Responsive Space initiative has several implications for flight software that need to be addressed not only within the run-time element, but within the development infrastructure and software life-cycle process elements as well. The runtime element must at a minimum support Plug & Play, while the development and process elements need to incorporate methods to quickly generate the needed documentation, code, tests, and all of the artifacts required of flight-quality software. Very rapid response times go even further and imply little or no new software development, requiring instead the use of only predeveloped and certified software modules that can be integrated and tested through automated methods. These elements have typically been addressed individually with significant benefits, but it is when they are combined that they can have the greatest impact on Responsive Space. The Flight Software Branch at NASA's Goddard Space Flight Center has been developing the runtime, infrastructure and process elements needed for rapid integration with the Core Flight software System (CFS) architecture. The CFS architecture consists of three main components: the core Flight Executive (cFE), the component catalog, and the Integrated Development Environment (IDE). This paper discusses the design of the components, how they facilitate rapid integration, and lessons learned as the architecture is utilized for an upcoming spacecraft.

  8. MEDIC: medical embedded device for individualized care.

    PubMed

    Wu, Winston H; Bui, Alex A T; Batalin, Maxim A; Au, Lawrence K; Binney, Jonathan D; Kaiser, William J

    2008-02-01

    The presented work highlights the development and initial validation of a medical embedded device for individualized care (MEDIC), which is based on a novel software architecture enabling sensor management and disease prediction capabilities, and on commercially available microelectronic components, sensors and a conventional personal digital assistant (PDA) or cell phone. In this paper, we present a general architecture for a wearable sensor system that can be customized to an individual patient's needs. This architecture is based on embedded artificial intelligence that permits autonomous operation, sensor management and inference, and may be applied to general-purpose wearable medical diagnostics. A prototype of the system has been developed based on a standard PDA and wireless sensor nodes equipped with commercially available Bluetooth radio components, permitting real-time streaming of high-bandwidth data from various physiological and contextual sensors. We also present results of abnormal gait diagnosis using the complete system from our evaluation, and illustrate how the wearable system and its operation can be remotely configured and managed by either enterprise systems or medical personnel at centralized locations. By using the commercially available hardware components and the software architecture presented in this paper, the MEDIC system can be rapidly configured, providing medical researchers with broadband sensor data from remote patients and with platform access to adapt operation to diagnostic objectives.

  9. A Multi-mission Event-Driven Component-Based System for Support of Flight Software Development, ATLO, and Operations first used by the Mars Science Laboratory (MSL) Project

    NASA Technical Reports Server (NTRS)

    Dehghani, Navid; Tankenson, Michael

    2006-01-01

    This paper details an architectural description of the Mission Data Processing and Control System (MPCS), an event-driven, multi-mission set of ground data processing components providing uplink, downlink, and data management capabilities, which will support the Mars Science Laboratory (MSL) project as its first target mission. MPCS is developed as a set of small reusable components, implemented in Java, each designed with a specific function and well-defined interfaces. An industry-standard messaging bus is used to transfer information among system components. Components generate standard messages which are used to capture system information, as well as triggers to support the event-driven architecture of the system. Event-driven systems are highly desirable for processing high-rate telemetry (science and engineering) data, and for supporting automation of many mission operations processes.
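    As a rough illustration of the event-driven pattern described above (MPCS itself uses an industry-standard messaging bus and Java components), the sketch below shows components publishing standard messages on an in-process bus while subscribers react as events arrive; all names are hypothetical.

```python
# Simplified in-process sketch of an event-driven message bus: components
# publish standard messages and other components react to them as triggers.
# Hypothetical names; not the MPCS implementation.
from collections import defaultdict
from typing import Callable, Dict, List


class MessageBus:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)        # event-driven: publishing triggers handlers


def downlink_component(bus: MessageBus, frame: bytes) -> None:
    # Generate a standard message capturing system information.
    bus.publish("telemetry.frame", {"length": len(frame), "payload": frame})


if __name__ == "__main__":
    bus = MessageBus()
    # A data-management component archives every telemetry frame it sees.
    bus.subscribe("telemetry.frame", lambda m: print("archived", m["length"], "bytes"))
    # An automation component reacts to the same event.
    bus.subscribe("telemetry.frame", lambda m: print("triggered processing"))
    downlink_component(bus, b"\x01\x02\x03")
```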

  10. Adaptive method with intercessory feedback control for an intelligent agent

    DOEpatents

    Goldsmith, Steven Y.

    2004-06-22

    An adaptive architecture method with feedback control for an intelligent agent provides for adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. An adaptive architecture method with feedback control for multiple intelligent agents provides for coordinating and adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. Re-programming of the adaptive architecture is through a nexus which coordinates reflexive and deliberator components.
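    The following toy sketch is only illustrative and is not the patented method: it shows one way a nexus might blend a fast reflexive response with a slower, goal-directed deliberative response and adjust the blend through feedback; all names and weights are hypothetical.

```python
# Illustrative sketch only (not the patented method): a nexus blends a fast
# reflexive response with a slower deliberative response according to a goal,
# and feedback adjusts how much weight each contributes next time.
def reflexive_response(stimulus: float) -> float:
    # Immediate, rule-like reaction.
    return -stimulus


def deliberative_response(stimulus: float, goal: float) -> float:
    # Slower, goal-directed planning (here just a proportional move toward the goal).
    return 0.5 * (goal - stimulus)


class Nexus:
    def __init__(self, weight: float = 0.5) -> None:
        self.weight = weight  # fraction of the action taken from the deliberator

    def act(self, stimulus: float, goal: float) -> float:
        return ((1.0 - self.weight) * reflexive_response(stimulus)
                + self.weight * deliberative_response(stimulus, goal))

    def feedback(self, error: float, rate: float = 0.1) -> None:
        # Larger error shifts weight toward deliberation; smaller toward reflexes.
        self.weight = min(1.0, max(0.0, self.weight + rate * error))


if __name__ == "__main__":
    nexus = Nexus()
    action = nexus.act(stimulus=2.0, goal=0.0)
    nexus.feedback(error=abs(action))
    print(action, nexus.weight)
```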

  11. Modular VO oriented Java EE service deployer

    NASA Astrophysics Data System (ADS)

    Molinaro, Marco; Cepparo, Francesco; De Marco, Marco; Knapic, Cristina; Apollo, Pietro; Smareglia, Riccardo

    2014-07-01

    The International Virtual Observatory Alliance (IVOA) has produced many standards and recommendations whose aim is to generate an architecture that starts from astrophysical resources, in a general sense, and ends up in deployed consumable services (which are themselves astrophysical resources). Focusing on the Data Access Layer (DAL) system architecture that these standards define, in recent years a web-based application has been developed and maintained at INAF-OATs IA2 (Italian National Institute for Astrophysics - Astronomical Observatory of Trieste, Italian centre of Astronomical Archives) to deploy and manage multiple VO (Virtual Observatory) services in a uniform way: VO-Dance. However, a number of critical issues have arisen since the VO-Dance idea was conceived, and major changes have taken place, and are still taking place, in the IVOA DAL layer and its related standards; this urged IA2 to identify a new solution for its own service layer. Keeping the basic ideas from VO-Dance (simple service configuration, service instantiation at call time, and modularity) while switching to different software technologies (e.g. dismissing Java Reflection in favour of an Enterprise Java Bean, EJB, based solution), the new solution has been sketched out and tested for feasibility. Here we present the results of this feasibility study. The main constraints for the new project come from several directions: a better homogenised solution arising from the IVOA DAL standards, for example the new DALI (Data Access Layer Interface) specification that acts as a common interface for previous and upcoming access protocols; the need for a modular system in which each component is based on a single VO specification, allowing services to rely on common capabilities instead of homogenising them inside the service components directly; and the search for a scalable system that takes advantage of distributed environments. These constraints are answered by the adopted solutions sketched hereafter. Developing the new system with Java Enterprise technologies can better exploit existing libraries to build up the individual tokens implementing the IVOA standards. Each component can be built from a single standard, and each deployed service (i.e. an instantiation of service components) can consume the other components' exposed methods and services without the need to homogenise them in dedicated libraries. Scalability can be achieved more easily by deploying components or sets of services in a distributed environment using JNDI (Java Naming and Directory Interface) and RMI (Remote Method Invocation) technologies. Single-service configuration will not differ significantly from the VO-Dance solution, given that the Java class instantiation that previously relied on Java Reflection will simply be moved to Java EJB pooling (and not, e.g., embedded in bundles for subsequent deployment).

  12. Hybrid architecture for building secure sensor networks

    NASA Astrophysics Data System (ADS)

    Owens, Ken R., Jr.; Watkins, Steve E.

    2012-04-01

    Sensor networks have various communication and security architectural concerns. Three approaches are defined to address these concerns. The first is the utilization of new computing architectures that leverage embedded virtualization software on the sensor. Deploying a small, embedded virtualization operating system on the sensor nodes, designed to communicate with low-cost cloud computing infrastructure in the network, is the foundation for delivering low-cost, secure sensor networks. The second area focuses on securing the sensor. Sensor security components include developing an identification scheme and leveraging authentication algorithms and protocols that address security assurance within the physical, communication network, and application layers. This will primarily be accomplished by encrypting the communication channel and integrating sensor network firewall and intrusion detection/prevention components into the sensor network architecture. Hence, sensor networks will be able to maintain high levels of security. The third area addresses the real-time and high-priority nature of the data that sensor networks collect. This function requires that a quality-of-service (QoS) definition and algorithm be developed for delivering the right data at the right time. A hybrid architecture is proposed that combines software and hardware features to handle network traffic with diverse QoS requirements.
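    As a hedged sketch of the QoS idea above (not part of the proposed hybrid architecture itself), the queue below orders sensor readings by priority and deadline so that urgent, time-critical data is delivered first; the names and values are hypothetical.

```python
# Hypothetical QoS sketch: readings are queued by (priority, deadline) so that
# alarms and near-deadline data are transmitted before bulk readings.
import heapq
import itertools
from typing import Dict, List, Tuple


class QoSQueue:
    def __init__(self) -> None:
        self._heap: List[Tuple[int, float, int, Dict]] = []
        self._counter = itertools.count()

    def enqueue(self, priority: int, deadline_s: float, reading: Dict) -> None:
        # Lower priority value = more urgent; ties broken by earliest deadline,
        # then by arrival order (the counter keeps the heap comparisons valid).
        heapq.heappush(self._heap, (priority, deadline_s, next(self._counter), reading))

    def next_to_send(self) -> Dict:
        return heapq.heappop(self._heap)[3]


if __name__ == "__main__":
    q = QoSQueue()
    q.enqueue(priority=2, deadline_s=30.0, reading={"sensor": "temp", "value": 21.4})
    q.enqueue(priority=0, deadline_s=1.0, reading={"sensor": "intrusion", "value": 1})
    print(q.next_to_send())   # the intrusion alarm goes out first
```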

  13. Indigenous Architecture for Expeditionary Installations

    DTIC Science & Technology

    2006-03-01

    through a thorough study of available texts and articles related to indigenous construction techniques of southwest Native Americans and desert cultures...common elements between the indigenous architecture of Native Americans and the Arabs of the Middle East highlighted their effectiveness. Three of these...Overview In the course of this research, noted similarities between indigenous architecture of southwestern Native Americans and Arabs of the Middle East

  14. Functional convergence in hydraulic architecture and water relations of tropical savanna trees: from leaf to whole plant.

    Treesearch

    S.J. Bucci; G. Goldstein; F.C. Meinzer; F.G. Scholz; A.C. France; M. Bustamante

    2004-01-01

    Functional convergence in hydraulic architecture and water relations, and potential trade-offs in resource allocation were investigated in six dominant neotropical savanna tree species from central Brazil during the peak of the dry season. Common relationships between wood density and several aspects of plant water relations and hydraulic architecture were observed....

  15. How architecture wins technology wars.

    PubMed

    Morris, C R; Ferguson, C H

    1993-01-01

    Signs of revolutionary transformation in the global computer industry are everywhere. A roll call of the major industry players reads like a waiting list in the emergency room. The usual explanations for the industry's turmoil are at best inadequate. Scale, friendly government policies, manufacturing capabilities, a strong position in desktop markets, excellent software, top design skills--none of these is sufficient, either by itself or in combination, to ensure competitive success in information technology. A new paradigm is required to explain patterns of success and failure. Simply stated, success flows to the company that manages to establish proprietary architectural control over a broad, fast-moving, competitive space. Architectural strategies have become crucial to information technology because of the astonishing rate of improvement in microprocessors and other semiconductor components. Since no single vendor can keep pace with the outpouring of cheap, powerful, mass-produced components, customers insist on stitching together their own local systems solutions. Architectures impose order on the system and make the interconnections possible. The architectural controller is the company that controls the standard by which the entire information package is assembled. Microsoft's Windows is an excellent example of this. Because of the popularity of Windows, companies like Lotus must conform their software to its parameters in order to compete for market share. In the 1990s, proprietary architectural control is not only possible but indispensable to competitive success. What's more, it has broader implications for organizational structure: architectural competition is giving rise to a new form of business organization.

  16. Towards the Architecture of an Instructional Multimedia Database.

    ERIC Educational Resources Information Center

    Verhagen, Plin W.; Bestebreurtje, R.

    1994-01-01

    Discussion of multimedia databases in education focuses on the development of an adaptable database in The Netherlands that uses optical storage media to hold the audiovisual components. Highlights include types of applications; types of users; accessibility; adaptation; an object-oriented approach; levels of the database architecture; and…

  17. Real-Time Cognitive Computing Architecture for Data Fusion in a Dynamic Environment

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Duong, Vu A.

    2012-01-01

    A novel cognitive computing architecture is conceptualized for processing multiple channels of multi-modal sensory data streams simultaneously, and fusing the information in real time to generate intelligent reaction sequences. This unique architecture is capable of assimilating parallel data streams that could be analog, digital, synchronous/asynchronous, and could be programmed to act as a knowledge synthesizer and/or an "intelligent perception" processor. In this architecture, the bio-inspired models of visual pathway and olfactory receptor processing are combined as processing components, to achieve the composite function of "searching for a source of food while avoiding the predator." The architecture is particularly suited for scene analysis from visual and odorant data.

  18. Ultrastructure of the extracellular matrix of bovine dura mater, optic nerve sheath and sclera.

    PubMed

    Raspanti, M; Marchini, M; Della Pasqua, V; Strocchi, R; Ruggeri, A

    1992-10-01

    The sclera, the outermost sheath of the optic nerve and the dura mater have been investigated histologically and ultrastructurally. Although these tissues appear very similar under the light microscope, being dense connective tissues mainly composed of collagen bundles and a limited amount of cells and elastic fibres, they exhibit subtle differences on electron microscopy. In the dura and sclera collagen appears in the form of large, nonuniform fibrils, similar to those commonly found in tendons, while in the optic nerve sheath the fibrils appear smaller and uniform, similar to those commonly observed in reticular tissues, vessel walls and skin. Freeze-fracture also reveals these fibrils to have different subfibrillar architectures, straight or helical, which correspond to 2 distinct forms of collagen fibril previously described (Raspanti et al. 1989). The other extracellular matrix components also vary with the particular collagen fibril structure. Despite their common embryological derivation, the dura mater, optic nerve sheath and sclera exhibit diversification of their extracellular matrix consistent with the mechanical loads to which these tissues are subjected. Our observations indicate that the outermost sheath of the optic nerve resembles the epineurium of peripheral nerves rather than the dura to which it is commonly likened.

  19. caGrid 1.0: A Grid Enterprise Architecture for Cancer Research

    PubMed Central

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2007-01-01

    caGrid is the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. The current release, caGrid version 1.0, is developed as the production Grid software infrastructure of caBIG™. Based on feedback from adopters of the previous version (caGrid 0.5), it has been significantly enhanced with new features and improvements to existing components. This paper presents an overview of caGrid 1.0, its main components, and enhancements over caGrid 0.5. PMID:18693901

  20. Evolutionary multidimensional access architecture featuring cost-reduced components

    NASA Astrophysics Data System (ADS)

    Farjady, Farsheed; Parker, Michael C.; Walker, Stuart D.

    1998-12-01

    We describe a three-stage wavelength-routed optical access network, utilizing coarse passband-flattened arrayed-waveguide grating routers. An N-dimensional addressing strategy enables 6912 customers to be bi-directionally addressed with multi-Gb/s data using only 24 wavelengths spaced by 1.6 nm. Coarse wavelength separation allows use of increased-tolerance WDM components at the exchange and customer premises. The architecture is designed to map onto standard access network topologies, allowing elegant upgradability from legacy PON infrastructures at low cost. Passband-flattening of the routers is achieved through phase apodization.

  1. Alternative electrical distribution system architectures for automobiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afridi, K.K.; Tabors, R.D.; Kassakian, J.G.

    At present most automobiles use a 12 V electrical system with point-to-point wiring. The capability of this architecture in meeting the needs of future electrical loads is questionable. Furthermore, with the development of electric vehicles (EVs) there is a greater need for a better architecture. In this paper the authors outline the limitations of the conventional architecture and identify alternatives. They also present a multi-attribute trade-off methodology which compares these alternatives, and identifies a set of Pareto optimal architectures. The system attributes traded off are cost, weight, losses and probability of failure. These are calculated by a computer program that has built-in component attribute models. System attributes of a few dozen architectures are also reported and the results analyzed. 17 refs.
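    The Pareto screening step mentioned above can be sketched as follows: an architecture is kept only if no other candidate is at least as good on every attribute and strictly better on at least one. The attribute values in the sketch are illustrative and are not data from the paper.

```python
# Pareto-front sketch over the four attributes named in the abstract
# (all attributes are minimised). Values below are made up for illustration.
from typing import Dict, List


def dominates(a: Dict, b: Dict, attrs: List[str]) -> bool:
    return (all(a[k] <= b[k] for k in attrs)
            and any(a[k] < b[k] for k in attrs))


def pareto_front(archs: List[Dict], attrs: List[str]) -> List[Dict]:
    return [a for a in archs
            if not any(dominates(b, a, attrs) for b in archs if b is not a)]


if __name__ == "__main__":
    attrs = ["cost", "weight", "losses", "p_failure"]
    candidates = [
        {"name": "12V point-to-point", "cost": 1.0, "weight": 1.0, "losses": 1.0, "p_failure": 0.02},
        {"name": "dual-voltage bus",   "cost": 1.3, "weight": 0.8, "losses": 0.7, "p_failure": 0.02},
        {"name": "heavy multiplexed",  "cost": 1.4, "weight": 1.1, "losses": 0.9, "p_failure": 0.03},
    ]
    for a in pareto_front(candidates, attrs):
        print(a["name"])   # the dominated "heavy multiplexed" option is dropped
```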

  2. CMC Property Variability and Life Prediction Methods for Turbine Engine Component Application

    NASA Technical Reports Server (NTRS)

    Cheplak, Matthew L.

    2004-01-01

    The ever-increasing need for lower-density and higher temperature-capable materials for aircraft engines has led to the development of Ceramic Matrix Composites (CMCs). Today's aircraft engines operate with >3000 °F gas temperatures at the entrance to the turbine section, but unless heavily cooled, metallic components cannot operate above approximately 2000 °F. CMCs attempt to push component capability to nearly 2700 °F with much less cooling, which can help improve engine efficiency and performance in terms of better fuel efficiency, higher thrust, and reduced emissions. The NASA Glenn Research Center has been researching the benefits of the SiC/SiC CMC for engine applications. A CMC is made up of a matrix material, fibers, and an interphase, which is a protective coating over the fibers. There are several methods or architectures in which the orientation of the fibers can be manipulated to achieve a particular material property objective as well as a particular component geometric shape and size. The required shape manipulation can be a limiting factor in the design and performance of the component if there is a lack of bending capability of the fiber, as making the fiber more flexible typically sacrifices strength and other fiber properties. Various analysis codes are available (pcGINA, CEMCAN) that can predict the effective Young's Moduli, thermal conductivities, coefficients of thermal expansion (CTE), and various other properties of a CMC. There are also various analysis codes (NASAlife) that can be used to predict the life of CMCs under expected engine service conditions. The objective of this summer study is to utilize and optimize these codes for examining the tradeoffs between CMC properties and the complex fiber architectures that will be needed for several different component designs. For example, for the pcGINA code, there are six variations of architecture available. Depending on which architecture is analyzed, the user is able to specify the fiber tow size, tow spacing, weave parameter, and angle of orientation of fibers. By holding the volume fraction of the fibers constant, variations in tow spacing can be explored for different architectures. The CMC material properties are usually calculated assuming the component is manufactured perfectly. However, this is typically not the case, so a quantification of the material property variability is needed to account for processing and/or manufacturing imperfections. The overall inputs and outputs are presented using regression software to rapidly investigate the tradeoffs associated with fiber architecture, material properties, and ultimately cost. This information is then propagated through lifing models and Larson-Miller data to assess time/temperature-dependent CMC strength. In addition, a first-order cost estimation will be quantified from a current qualitative perspective. This cost estimation includes the manufacturing challenges, such as tooling, as well as the component cost for a particular application. Ultimately, a cost-to-performance ratio should be established that compares the effectiveness of CMCs to their current rival, nickel superalloys.

  3. A multi-agent architecture for geosimulation of moving agents

    NASA Astrophysics Data System (ADS)

    Vahidnia, Mohammad H.; Alesheikh, Ali A.; Alavipanah, Seyed Kazem

    2015-10-01

    In this paper, a novel architecture is proposed in which an axiomatic derivation system in the form of first-order logic facilitates declarative explanation and spatial reasoning. Simulation of environmental perception and interaction between autonomous agents is designed with a geographic belief-desire-intention and a request-inform-query model. The architecture has a complementary quantitative component that supports collaborative planning based on the concept of equilibrium and game theory. This new architecture represents a departure from current best practice in geographic agent-based modelling. Implementation tasks are discussed in some detail, as well as scenarios for fleet management and disaster management.

  4. Microcomponent chemical process sheet architecture

    DOEpatents

    Wegeng, Robert S.; Drost, M. Kevin; Call, Charles J.; Birmingham, Joseph G.; McDonald, Carolyn Evans; Kurath, Dean E.; Friedrich, Michele

    1998-01-01

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one chemical process unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation.

  5. Microcomponent sheet architecture

    DOEpatents

    Wegeng, R.S.; Drost, M.K.; McDonald, C.E.

    1997-03-18

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation. 14 figs.

  6. Microcomponent chemical process sheet architecture

    DOEpatents

    Wegeng, R.S.; Drost, M.K.; Call, C.J.; Birmingham, J.G.; McDonald, C.E.; Kurath, D.E.; Friedrich, M.

    1998-09-22

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one chemical process unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation. 26 figs.

  7. The Contribution of Visualization to Learning Computer Architecture

    ERIC Educational Resources Information Center

    Yehezkel, Cecile; Ben-Ari, Mordechai; Dreyfus, Tommy

    2007-01-01

    This paper describes a visualization environment and associated learning activities designed to improve learning of computer architecture. The environment, EasyCPU, displays a model of the components of a computer and the dynamic processes involved in program execution. We present the results of a research program that analysed the contribution of…

  8. An Auto-Configuration System for the GMSEC Architecture and API

    NASA Technical Reports Server (NTRS)

    Moholt, Joseph; Mayorga, Arturo

    2007-01-01

    A viewgraph presentation on an automated configuration concept for The Goddard Mission Services Evolution Center (GMSEC) architecture and Application Program Interface (API) is shown. The topics include: 1) The Goddard Mission Services Evolution Center (GMSEC); 2) Automated Configuration Concept; 3) Implementation Approach; and 4) Key Components and Benefits.

  9. Information Architecture without Internal Theory: An Inductive Design Process.

    ERIC Educational Resources Information Center

    Haverty, Marsha

    2002-01-01

    Suggests that information architecture design is primarily an inductive process, partly because it lacks internal theory and partly because it is an activity that supports emergent phenomena (user experiences) from basic design components. Suggests a resemblance to Constructive Induction, a design process that locates the best representational…

  10. Toxic and nontoxic components of botulinum neurotoxin complex are evolved from a common ancestral zinc protein

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inui, Ken; Sagane, Yoshimasa

    2012-03-16

    Highlights: BoNT and NTNHA proteins share a similar protein architecture. NTNHA and BoNT were both identified as zinc-binding proteins. NTNHA does not have a classical HEXXH zinc-coordinating motif similar to that found in all serotypes of BoNT. Homology modeling implied probable key residues involved in zinc coordination. -- Abstract: Zinc atoms play an essential role in a number of enzymes. Botulinum neurotoxin (BoNT), the most potent toxin known in nature, is a zinc-dependent endopeptidase. Here we identify the nontoxic nonhemagglutinin (NTNHA), one of the BoNT-complex constituents, as a zinc-binding protein, along with BoNT. A protein structure classification database search indicated that BoNT and NTNHA share a similar domain architecture, comprising a zinc-dependent metalloproteinase-like, BoNT coiled-coil motif and concanavalin A-like domains. Inductively coupled plasma-mass spectrometry analysis demonstrated that every single NTNHA molecule contains a single zinc atom. This is the first demonstration of a zinc atom in this protein, as far as we know. However, the NTNHA molecule does not possess any known zinc-coordinating motif, whereas all BoNT serotypes possess the classical HEXXH motif. Homology modeling of the NTNHA structure implied that a consensus K-C-L-I-K-X35-D sequence common among all NTNHA serotype molecules appears to coordinate a single zinc atom. These findings lead us to propose that NTNHA and BoNT may have evolved distinct functional specializations following their branching out from a common ancestral zinc protein.

  11. ELISA, a demonstrator environment for information systems architecture design

    NASA Technical Reports Server (NTRS)

    Panem, Chantal

    1994-01-01

    This paper describes an approach to reusing software engineering technology in the area of ground space system design. System engineers have many needs in common with software developers: sharing a common database, capitalizing knowledge, defining a common design process, and communicating across different technical domains. Moreover, system designers need to simulate their system dynamically as early as possible. Software development environments, methods and tools have now become operational and widely used. Their architecture is based on a unique object base and a set of common management services, and they host a family of tools for each life-cycle activity. In late 1992, CNES decided to develop a demonstrative software environment supporting some system activities. The design of ground space data processing systems was chosen as the application domain. ELISA (Integrated Software Environment for Architectures Specification) was specified as a 'demonstrator', i.e. a sufficient basis for demonstrations, evaluation and future operational enhancements. A process with three phases was implemented: system requirements definition, design of system architecture models, and selection of physical architectures. Each phase is composed of several activities that can be performed in parallel, with the support of Commercial Off-The-Shelf tools. ELISA was delivered to CNES in January 1994 and is currently used for demonstrations and evaluations on real projects (e.g. the SPOT4 Satellite Control Center). New evolutions are under way.

  12. Definition of architectural ideotypes for good yield capacity in Coffea canephora.

    PubMed

    Cilas, Christian; Bar-Hen, Avner; Montagnon, Christophe; Godin, Christophe

    2006-03-01

    Yield capacity is a target trait for selection of agronomically desirable lines; it is preferred to simple yields recorded over different harvests. Yield capacity is derived from architectural parameters that measure its components. Observation protocols for describing architecture and yield capacity were applied to six clones of coffee trees (Coffea canephora) in a comparative trial. The observations were used to establish architectural databases, which were explored using AMAPmod, a software package dedicated to the analysis of plant architecture data. The traits extracted from the database were used to identify architectural parameters for predicting the yield of the plant material studied. Architectural traits are highly heritable and some display strong genetic correlations with cumulated yield. In particular, the proportion of fruiting nodes at plagiotropic level 15, counting from the top of the tree, proved to be a good predictor of yield over two fruiting cycles.

  13. A Reference Stack for PHM Architectures

    DTIC Science & Technology

    2014-10-02

    components, fault modes and prognostics such as that described by MIMOSA (2009) and ISO 13374-3:2012 (2012). Section 2.6 described a semantic...architecture, and the use of a SOA is further discussed in Section 3.3.2. MIMOSA is a stack-oriented data architecture. Figure 11 shows its stack of...format (US Army PEWG, 2011). The tagging in ABCD format respects the data layers that are found in the MIMOSA standard (MIMOSA, 2009) and in ISO

  14. A Role for Semantic Web Technologies in Patient Record Data Collection

    NASA Astrophysics Data System (ADS)

    Ogbuji, Chimezie

    Business Process Management Systems (BPMS) are a component of the stack of Web standards that comprise Service Oriented Architecture (SOA). Such systems are representative of the architectural framework of modern information systems built in an enterprise intranet and are in contrast to systems built for deployment on the larger World Wide Web. The REST architectural style is an emerging style for building loosely coupled systems based purely on the native HTTP protocol. It is a coordinated set of architectural constraints with a goal to minimize latency, maximize the independence and scalability of distributed components, and facilitate the use of intermediary processors. Within the development community for distributed, Web-based systems, there has been a debate regarding the merits of both approaches. In some cases, there are legitimate concerns about the differences in both architectural styles. In other cases, the contention seems to be based on concerns that are marginal at best. In this chapter, we will attempt to contribute to this debate by focusing on a specific, deployed use case that emphasizes the role of the Semantic Web, a simple Web application architecture that leverages the use of declarative XML processing, and the needs of a workflow system. The use case involves orchestrating a work process associated with the data entry of structured patient record content into a research registry at the Cleveland Clinic's Clinical Investigation department in the Heart and Vascular Institute.

  15. Method and system for training dynamic nonlinear adaptive filters which have embedded memory

    NASA Technical Reports Server (NTRS)

    Rabinowitz, Matthew (Inventor)

    2002-01-01

    Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
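    The following minimal sketch is not the patented algorithm; it shows a damped Gauss-Newton update, with a fixed damping term rather than an adaptive learning rate, for a filter with a separate linear dynamic component (an FIR filter) and a static nonlinearity (tanh), the kind of structure described above. All parameter values are illustrative.

```python
# Minimal sketch of a damped Gauss-Newton update for an FIR filter followed by
# a static tanh nonlinearity (a simple Wiener structure). Not the patented
# adaptive modified Gauss-Newton technique; damping is fixed here.
import numpy as np


def predict(w: np.ndarray, x: np.ndarray) -> np.ndarray:
    """FIR filtering followed by a static nonlinearity."""
    lin = np.convolve(x, w, mode="full")[: len(x)]
    return np.tanh(lin)


def gauss_newton_step(w: np.ndarray, x: np.ndarray, d: np.ndarray,
                      damping: float = 1e-3) -> np.ndarray:
    lin = np.convolve(x, w, mode="full")[: len(x)]
    y = np.tanh(lin)
    e = d - y                                   # output error
    n_taps = len(w)
    # Delayed copies of the input, one column per FIR tap.
    X = np.column_stack([np.concatenate([np.zeros(k), x[: len(x) - k]])
                         for k in range(n_taps)])
    # Jacobian of y w.r.t. the taps: dy/dw_k = (1 - y^2) * x[n - k].
    J = (1.0 - y ** 2)[:, None] * X
    # Damped Gauss-Newton (Levenberg-Marquardt-style) update.
    H = J.T @ J + damping * np.eye(n_taps)
    return w + np.linalg.solve(H, J.T @ e)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(500)
    w_true = np.array([0.6, -0.3, 0.1])
    d = predict(w_true, x)                      # synthetic target signal
    w = np.zeros(3)
    for _ in range(20):
        w = gauss_newton_step(w, x, d)
    print(np.round(w, 3))                       # converges near w_true
```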

  16. Benefits of Using a Mars Forward Strategy for Lunar Surface Systems

    NASA Technical Reports Server (NTRS)

    Mulqueen, Jack; Griffin, Brand; Smitherman, David; Maples, Dauphne

    2009-01-01

    This paper identifies potential risk reduction, cost savings and programmatic procurement benefits of a Mars Forward Lunar Surface System architecture that provides commonality or evolutionary development paths for lunar surface system elements applicable to Mars surface systems. The objective of this paper is to identify the potential benefits for incorporating a Mars Forward development strategy into the planned Project Constellation Lunar Surface System Architecture. The benefits include cost savings, technology readiness, and design validation of systems that would be applicable to lunar and Mars surface systems. The paper presents a survey of previous lunar and Mars surface systems design concepts and provides an assessment of previous conclusions concerning those systems in light of the current Project Constellation Exploration Architectures. The operational requirements for current Project Constellation lunar and Mars surface system elements are compared and evaluated to identify the potential risk reduction strategies that build on lunar surface systems to reduce the technical and programmatic risks for Mars exploration. Risk reduction for rapidly evolving technologies is achieved through systematic evolution of technologies and components based on Moore's Law superimposed on the typical NASA systems engineering project development "V-cycle" described in NASA NPR 7120.5. Risk reduction for established or slowly evolving technologies is achieved through a process called the Mars-Ready Platform strategy in which incremental improvements lead from the initial lunar surface system components to Mars-Ready technologies. The potential programmatic benefits of the Mars Forward strategy are provided in terms of the transition from the lunar exploration campaign to the Mars exploration campaign. By utilizing a sequential combined procurement strategy for lunar and Mars exploration surface systems, the overall budget wedges for exploration systems are reduced and the costly technological development gap between the lunar and Mars programs can be eliminated. This provides a sustained level of technological competitiveness as well as maintaining a stable engineering and manufacturing capability throughout the entire duration of Project Constellation.

  17. Considerations on the Use of Custom Accelerators for Big Data Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellana, Vito G.; Tumeo, Antonino; Minutoli, Marco

    Accelerators, including Graphic Processing Units (GPUs) for general-purpose computation and many-core designs with wide vector units (e.g., Intel Phi), have become a common component of many high performance clusters. The appearance of more stable and reliable tools that can automatically convert code written in high-level specifications with annotations (such as C or C++) to hardware description languages (High-Level Synthesis - HLS) is also setting the stage for a broader use of reconfigurable devices (e.g., Field Programmable Gate Arrays - FPGAs) in high performance systems for the implementation of custom accelerators, helped by the fact that new processors include advanced cache-coherent interconnects for these components. In this chapter, we briefly survey the status of the use of accelerators in high performance systems targeted at big data analytics applications. We argue that, although the progress in the use of accelerators for this class of applications has been significant, differently from scientific simulations there still are gaps to close. This is particularly true for the "irregular" behaviors exhibited by no-SQL, graph databases. We focus our attention on the limits of HLS tools for data analytics and graph methods, and discuss a new architectural template that better fits the requirements of this class of applications. We validate the new architectural template by modifying the Graph Engine for Multithreaded System (GEMS) framework to support accelerators generated with such a methodology, and testing with queries coming from the Lehigh University Benchmark (LUBM). The architectural template enables better support for the task- and memory-level parallelism present in graph methods through a new control model and an enhanced memory interface. We show that our solution allows generating parallel accelerators, providing speedups with respect to conventional HLS flows. We finally draw conclusions and present a perspective on the use of reconfigurable devices and Design Automation tools for data analytics.

  18. Streamlining ITS planning : identifying common ITS needs : national ITS architecture

    DOT National Transportation Integrated Search

    1999-01-01

    This brochure gives an overview of the National Intelligent Transportation Systems (ITS) Architecture. The objects of the program include: to aid in the purchase of compatible equipment and services; to guide multilevel efforts in implementing compat...

  19. Feedback loops and temporal misalignment in component-based hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Elag, Mostafa M.; Goodall, Jonathan L.; Castronova, Anthony M.

    2011-12-01

    In component-based modeling, a complex system is represented as a series of loosely integrated components with defined interfaces and data exchanges that allow the components to be coupled together through shared boundary conditions. Although the component-based paradigm is commonly used in software engineering, it has only recently been applied for modeling hydrologic and earth systems. As a result, research is needed to test and verify the applicability of the approach for modeling hydrologic systems. The objective of this work was therefore to investigate two aspects of using component-based software architecture for hydrologic modeling: (1) simulation of feedback loops between components that share a boundary condition and (2) data transfers between temporally misaligned model components. We investigated these topics using a simple case study where diffusion of mass is modeled across a water-sediment interface. We simulated the multimedia system using two model components, one for the water and one for the sediment, coupled using the Open Modeling Interface (OpenMI) standard. The results were compared with a more conventional numerical approach for solving the system where the domain is represented by a single multidimensional array. Results showed that the component-based approach was able to produce the same results obtained with the more conventional numerical approach. When the two components were temporally misaligned, we explored the use of different interpolation schemes to minimize mass balance error within the coupled system. The outcome of this work provides evidence that component-based modeling can be used to simulate complicated feedback loops between systems and guidance as to how different interpolation schemes minimize mass balance error introduced when components are temporally misaligned.
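    A highly simplified sketch of the coupling pattern studied above is given below (it does not use the actual OpenMI interfaces): two well-mixed compartments exchange a boundary concentration in a feedback loop, and when their time steps are misaligned the requesting component interpolates linearly between the other component's two most recent values. The rate coefficients and step sizes are illustrative.

```python
# Simplified sketch of two coupled components sharing a boundary condition
# (not the OpenMI API): the water component runs on a coarse step, the
# sediment component on a fine step, and each interpolates the other's
# most recent values when the time stamps do not line up.
from typing import List, Tuple


class DiffusingBox:
    """One well-mixed compartment exchanging mass across a shared interface."""

    def __init__(self, conc: float, k: float) -> None:
        self.conc = conc                  # concentration in this compartment
        self.k = k                        # exchange rate coefficient [1/time]
        self.history: List[Tuple[float, float]] = [(0.0, conc)]  # (time, conc)

    def value_at(self, t: float) -> float:
        # Linear interpolation (or extrapolation) from the two most recent values.
        if len(self.history) < 2:
            return self.history[-1][1]
        (t0, c0), (t1, c1) = self.history[-2:]
        return c1 if t1 == t0 else c0 + (c1 - c0) * (t - t0) / (t1 - t0)

    def step(self, t_new: float, boundary_conc: float) -> None:
        dt = t_new - self.history[-1][0]
        self.conc += self.k * (boundary_conc - self.conc) * dt
        self.history.append((t_new, self.conc))


if __name__ == "__main__":
    water = DiffusingBox(conc=10.0, k=0.2)      # coarse time step (1.0)
    sediment = DiffusingBox(conc=0.0, k=0.2)    # fine time step (0.5)
    for n in range(1, 11):
        water.step(n * 1.0, sediment.value_at(n * 1.0))
        for m in (2 * n - 1, 2 * n):
            sediment.step(m * 0.5, water.value_at(m * 0.5))
    print(round(water.conc, 3), round(sediment.conc, 3))  # both approach ~5
```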

  20. Standardizing the information architecture for spacecraft operations

    NASA Technical Reports Server (NTRS)

    Easton, C. R.

    1994-01-01

    This paper presents an information architecture developed for the Space Station Freedom as a model from which to derive an information architecture standard for advanced spacecraft. The information architecture provides a way of making information available across a program, and among programs, assuming that the information will be in a variety of local formats, structures and representations. It provides a format that can be expanded to define all of the physical and logical elements that make up a program, add definitions as required, and import definitions from prior programs to a new program. It allows a spacecraft and its control center to work in different representations and formats, with the potential for supporting existing spacecraft from new control centers. It supports a common view of data and control of all spacecraft, regardless of their own internal view of their data and control characteristics, and of their communications standards, protocols and formats. This information architecture is central to standardizing spacecraft operations, in that it provides a basis for information transfer and translation, such that diverse spacecraft can be monitored and controlled in a common way.
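    As an illustrative sketch of the translation idea above (the names, units and mappings are hypothetical, not drawn from the paper), a program-level dictionary can map each spacecraft's local parameter names and units onto a common view so that different spacecraft are monitored in the same way:

```python
# Hypothetical sketch: each spacecraft keeps its local names and units, and a
# program-level translation table maps them onto a common view.
from typing import Dict

# Common-view definition: canonical name -> canonical unit.
COMMON_VIEW = {"bus_voltage": "V", "battery_temp": "degC"}

# Per-spacecraft translation tables: canonical name -> (local name, conversion).
TRANSLATIONS: Dict[str, Dict[str, tuple]] = {
    "SC_A": {"bus_voltage": ("PWR_VBUS", lambda raw: raw / 1000.0),        # mV -> V
             "battery_temp": ("THM_BATT", lambda raw: raw)},               # already degC
    "SC_B": {"bus_voltage": ("EPS.VoltageBus", lambda raw: raw),           # already V
             "battery_temp": ("TempBatt_F", lambda f: (f - 32) * 5 / 9)},  # degF -> degC
}


def to_common_view(spacecraft: str, local_telemetry: Dict[str, float]) -> Dict[str, float]:
    table = TRANSLATIONS[spacecraft]
    return {canon: convert(local_telemetry[local])
            for canon, (local, convert) in table.items()}


if __name__ == "__main__":
    # Two spacecraft with different local formats end up in the same common view.
    print(to_common_view("SC_A", {"PWR_VBUS": 28150, "THM_BATT": 21.0}))
    print(to_common_view("SC_B", {"EPS.VoltageBus": 28.1, "TempBatt_F": 70.0}))
```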

  1. A Systems Approach to Developing an Affordable Space Ground Transportation Architecture using a Commonality Approach

    NASA Technical Reports Server (NTRS)

    Garcia, Jerry L.; McCleskey, Carey M.; Bollo, Timothy R.; Rhodes, Russel E.; Robinson, John W.

    2012-01-01

    This paper presents a structured approach for achieving a compatible Ground System (GS) and Flight System (FS) architecture that is affordable, productive and sustainable. This paper is an extension of the paper titled "Approach to an Affordable and Productive Space Transportation System" by McCleskey et al. This paper integrates systems engineering concepts and operationally efficient propulsion system concepts into a structured framework for achieving GS and FS compatibility in the mid-term and long-term time frames. It also presents a functional and quantitative relationship for assessing system compatibility called the Architecture Complexity Index (ACI). This paper: (1) focuses on systems engineering fundamentals as it applies to improving GS and FS compatibility; (2) establishes mid-term and long-term spaceport goals; (3) presents an overview of transitioning a spaceport to an airport model; (4) establishes a framework for defining a ground system architecture; (5) presents the ACI concept; (6) demonstrates the approach by presenting a comparison of different GS architectures; and (7) presents a discussion on the benefits of using this approach with a focus on commonality.

  2. Advanced Design and Implementation of a Control Architecture for Long Range Autonomous Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Martin-Alvarez, A.; Hayati, S.; Volpe, R.; Petras, R.

    1999-01-01

    An advanced design and implementation of a Control Architecture for Long Range Autonomous Planetary Rovers is presented, using a hierarchical top-down task decomposition; the common structure of each design is based on feedback control theory. Graphical programming is presented as a common, intuitive language for the design when a large design team is composed of managers, architecture designers, engineers, programmers, and maintenance personnel. The whole design of the control architecture relies on the classic control concepts of cyclic data processing and event-driven reaction to achieve all the reasoning and behaviors needed. For this purpose, a commercial graphical tool that includes the mentioned control capabilities is presented. Message queues are used for inter-communication among control functions, allowing Artificial Intelligence (AI) reasoning techniques based on queue manipulation. Experimental results show a highly autonomous control system running in real time on top of the JPL micro-rover Rocky 7, controlling several robotic devices simultaneously. This paper validates the synergy between Artificial Intelligence and classic control concepts in an advanced Control Architecture for Long Range Autonomous Planetary Rovers.

  3. Baseline Architecture of ITER Control System

    NASA Astrophysics Data System (ADS)

    Wallander, A.; Di Maio, F.; Journeaux, J.-Y.; Klotz, W.-D.; Makijarvi, P.; Yonekawa, I.

    2011-08-01

    The control system of ITER consists of thousands of computers processing hundreds of thousands of signals. The control system, being the primary tool for operating the machine, shall integrate, control and coordinate all these computers and signals and allow a limited number of staff to operate the machine from a central location with minimum human intervention. The primary functions of the ITER control system are plant control, supervision and coordination, both during experimental pulses and 24/7 continuous operation. The former can be split into three phases: preparation of the experiment by defining all parameters; execution of the experiment, including distributed feedback control; and finally collection, archiving, analysis and presentation of all data produced by the experiment. We define the control system as a set of hardware and software components with well-defined characteristics. The architecture addresses the organization of these components and their relationship to each other. We distinguish between physical and functional architecture, where the former defines the physical connections and the latter the data flow between components. In this paper, we identify the ITER control system based on the plant breakdown structure. Then, the control system is partitioned into a workable set of bounded subsystems. This partition considers at the same time the completeness and the integration of the subsystems. The components making up subsystems are identified and defined, a naming convention is introduced and the physical networks defined. Special attention is given to timing and real-time communication for distributed control. Finally we discuss baseline technologies for implementing the proposed architecture based on analysis, market surveys, prototyping and benchmarking carried out during the last year.

  4. Laying the Groundwork for Enterprise-Wide Medical Language Processing Services: Architecture and Process

    PubMed Central

    Chen, Elizabeth S.; Maloney, Francine L.; Shilmayster, Eugene; Goldberg, Howard S.

    2009-01-01

    A systematic and standard process for capturing information within free-text clinical documents could facilitate opportunities for improving quality and safety of patient care, enhancing decision support, and advancing data warehousing across an enterprise setting. At Partners HealthCare System, the Medical Language Processing (MLP) services project was initiated to establish a component-based architectural model and processes to facilitate putting MLP functionality into production for enterprise consumption, promote sharing of components, and encourage reuse. Key objectives included exploring the use of an open-source framework called the Unstructured Information Management Architecture (UIMA) and leveraging existing MLP-related efforts, terminology, and document standards. This paper describes early experiences in defining the infrastructure and standards for extracting, encoding, and structuring clinical observations from a variety of clinical documents to serve enterprise-wide needs. PMID:20351830

  5. Laying the groundwork for enterprise-wide medical language processing services: architecture and process.

    PubMed

    Chen, Elizabeth S; Maloney, Francine L; Shilmayster, Eugene; Goldberg, Howard S

    2009-11-14

    A systematic and standard process for capturing information within free-text clinical documents could facilitate opportunities for improving quality and safety of patient care, enhancing decision support, and advancing data warehousing across an enterprise setting. At Partners HealthCare System, the Medical Language Processing (MLP) services project was initiated to establish a component-based architectural model and processes to facilitate putting MLP functionality into production for enterprise consumption, promote sharing of components, and encourage reuse. Key objectives included exploring the use of an open-source framework called the Unstructured Information Management Architecture (UIMA) and leveraging existing MLP-related efforts, terminology, and document standards. This paper describes early experiences in defining the infrastructure and standards for extracting, encoding, and structuring clinical observations from a variety of clinical documents to serve enterprise-wide needs.

  6. Reducing Development and Operations Costs using NASA's "GMSEC" Systems Architecture

    NASA Technical Reports Server (NTRS)

    Smith, Dan; Bristow, John; Crouse, Patrick

    2007-01-01

    This viewgraph presentation reviews the role of the Goddard Mission Services Evolution Center (GMSEC) in reducing development and operations costs in handling the massive data from NASA missions. The goals of GMSEC systems architecture development are to (1) simplify integration and development, (2) facilitate technology infusion over time, (3) support evolving operational concepts, and (4) allow for a mix of heritage, COTS and new components. The first three missions (i.e., the Tropical Rainfall Measuring Mission (TRMM), the Small Explorer (SMEX) missions - SWAS, TRACE, SAMPEX - and the ST5 3-satellite constellation system) each selected a different telemetry and command system. The results show that GMSEC's message-bus, component-based framework architecture is well proven and provides significant benefits over traditional flight and ground data system designs. Missions benefit through an increased set of product options, enhanced automation, lower cost and new mission-enabling operations concept options.

  7. Additive genetic contribution to symptom dimensions in major depressive disorder.

    PubMed

    Pearson, Rahel; Palmer, Rohan H C; Brick, Leslie A; McGeary, John E; Knopik, Valerie S; Beevers, Christopher G

    2016-05-01

    Major depressive disorder (MDD) is a phenotypically heterogeneous disorder with a complex genetic architecture. In this study, genomic-relatedness-matrix restricted maximum-likelihood analysis (GREML) was used to investigate the extent to which variance in depression symptoms/symptom dimensions can be explained by variation in common single nucleotide polymorphisms (SNPs) in a sample of individuals with MDD (N = 1,558) who participated in the National Institute of Mental Health Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study. A principal components analysis of items from the Hamilton Rating Scale for Depression (HRSD) obtained prior to treatment revealed 4 depression symptom components: (a) appetite, (b) core depression symptoms (e.g., depressed mood, anhedonia), (c) insomnia, and (d) anxiety. These symptom dimensions were associated with SNP-based heritability (h²SNP) estimates of 30%, 14%, 30%, and 5%, respectively. Results indicated that the genetic contribution of common SNPs to depression symptom dimensions were not uniform. Appetite and insomnia symptoms in MDD had a relatively strong genetic contribution whereas the genetic contribution was relatively small for core depression and anxiety symptoms. While in need of replication, these results suggest that future gene discovery efforts may strongly benefit from parsing depression into its constituent parts. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. Framework for the Parametric System Modeling of Space Exploration Architectures

    NASA Technical Reports Server (NTRS)

    Komar, David R.; Hoffman, Jim; Olds, Aaron D.; Seal, Mike D., II

    2008-01-01

    This paper presents a methodology for performing architecture definition and assessment prior to, or during, program formulation that utilizes a centralized, integrated architecture modeling framework operated by a small, core team of general space architects. This framework, known as the Exploration Architecture Model for IN-space and Earth-to-orbit (EXAMINE), enables: 1) a significantly larger fraction of an architecture trade space to be assessed in a given study timeframe; and 2) the complex element-to-element and element-to-system relationships to be quantitatively explored earlier in the design process. Discussion of the methodology's advantages and disadvantages with respect to the distributed study team approach typically used within NASA to perform architecture studies is presented, along with an overview of EXAMINE's functional components and tools. An example Mars transportation system architecture model is used to demonstrate EXAMINE's capabilities in this paper. However, the framework is generally applicable for exploration architecture modeling with destinations to any celestial body in the solar system.

  9. Photonics for aerospace sensors

    NASA Astrophysics Data System (ADS)

    Pellegrino, John; Adler, Eric D.; Filipov, Andree N.; Harrison, Lorna J.; van der Gracht, Joseph; Smith, Dale J.; Tayag, Tristan J.; Viveiros, Edward A.

    1992-11-01

    The maturation in the state-of-the-art of optical components is enabling increased applications for the technology. Most notable is the ever-expanding market for fiber optic data and communications links, familiar in both commercial and military markets. The inherent properties of optics and photonics, however, have suggested that components and processors may be designed that offer advantages over more commonly considered digital approaches for a variety of airborne sensor and signal processing applications. Various academic, industrial, and governmental research groups have been actively investigating and exploiting these properties of high bandwidth, large degree of parallelism in computation (e.g., processing in parallel over a two-dimensional field), and interconnectivity, and have succeeded in advancing the technology to the stage of systems demonstration. Such advantages as computational throughput and low operating power consumption are highly attractive for many computationally intensive problems. This review covers the key devices necessary for optical signal and image processors, some of the system application demonstration programs currently in progress, and active research directions for the implementation of next-generation architectures.

  10. A MoTe2 based light emitting diode and photodetector for silicon photonic integrated circuits

    NASA Astrophysics Data System (ADS)

    Bie, Ya-Qing; Heuck, M.; Grosso, G.; Furchi, M.; Cao, Y.; Zheng, J.; Navarro-Moratalla, E.; Zhou, L.; Taniguchi, T.; Watanabe, K.; Kong, J.; Englund, D.; Jarillo-Herrero, P.

    A key challenge in photonics today is to address the interconnect bottleneck in high-speed computing systems. Silicon photonics has emerged as a leading architecture, partly because many components, such as waveguides, interferometers and modulators, can be integrated on silicon-based processors. However, light sources and photodetectors present continued challenges. Common approaches for light sources include off-chip or wafer-bonded lasers based on III-V materials, but studies show advantages for directly modulated light sources. The most advanced photodetectors in silicon photonics are based on germanium growth, which increases system cost. The emerging two-dimensional transition metal dichalcogenides (TMDs) offer a path for optical interconnect components that can be integrated with CMOS processing by back-end-of-the-line processing steps. Here we demonstrate a silicon waveguide-integrated light source and photodetector based on a p-n junction of bilayer MoTe2, a TMD semiconductor with an infrared band gap. The state-of-the-art fabrication technology provides new opportunities for integrated optoelectronic systems.

  11. Finite Element Analysis of Film Stack Architecture for Complementary Metal-Oxide-Semiconductor Image Sensors.

    PubMed

    Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang

    2017-05-02

    Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components.

  12. Finite Element Analysis of Film Stack Architecture for Complementary Metal-Oxide–Semiconductor Image Sensors

    PubMed Central

    Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang

    2017-01-01

    Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components. PMID:28468324

  13. A high performance parallel computing architecture for robust image features

    NASA Astrophysics Data System (ADS)

    Zhou, Renyan; Liu, Leibo; Wei, Shaojun

    2014-03-01

    A parallel architecture for image feature detection and description is proposed in this article. The major component of this architecture is a 2D cellular network composed of simple reprogrammable processors, enabling the Hessian Blob Detector and Haar Response Calculation, which are the most computing-intensive stages of the Speeded Up Robust Features (SURF) algorithm. Combining this 2D cellular network with dedicated hardware for SURF descriptors, the architecture achieves real-time image feature detection with minimal software in the host processor. A prototype FPGA implementation of the proposed architecture achieves 1318.9 GOPS of general pixel processing at a 100 MHz clock and up to 118 fps in VGA (640 × 480) image feature detection. The proposed architecture is stand-alone and scalable, so it can easily be migrated to a VLSI implementation.
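
    A minimal software sketch of the integral-image box sums that make Hessian blob detection cheap (each box-filter response costs four array lookups regardless of filter size), which is the property a cellular array of simple processors can exploit. This is illustrative only and is not the FPGA implementation described above.

    ```python
    import numpy as np

    def integral_image(img):
        # ii[r, c] = sum of img[0..r, 0..c]
        return img.cumsum(axis=0).cumsum(axis=1)

    def box_sum(ii, r0, c0, r1, c1):
        """Sum of img[r0:r1, c0:c1] in O(1) using the integral image (exclusive upper bounds)."""
        total = ii[r1 - 1, c1 - 1]
        if r0 > 0: total -= ii[r0 - 1, c1 - 1]
        if c0 > 0: total -= ii[r1 - 1, c0 - 1]
        if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
        return total

    img = np.arange(36, dtype=float).reshape(6, 6)
    ii = integral_image(img)
    print(box_sum(ii, 1, 1, 4, 4), img[1:4, 1:4].sum())   # both print 126.0
    ```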

  14. Automated Synthesis of Architecture of Avionic Systems

    NASA Technical Reports Server (NTRS)

    Chau, Savio; Xu, Joseph; Dang, Van; Lu, James F.

    2006-01-01

    The Architecture Synthesis Tool (AST) is software that automatically synthesizes software and hardware architectures of avionic systems. The AST is expected to be most helpful during initial formulation of an avionic-system design, when system requirements change frequently and manual modification of architecture is time-consuming and susceptible to error. The AST comprises two parts: (1) an architecture generator, which utilizes a genetic algorithm to create a multitude of architectures; and (2) a functionality evaluator, which analyzes the architectures for viability, rejecting most of the non-viable ones. The functionality evaluator generates and uses a viability tree: a hierarchy representing functions and the components that perform them, such that the system as a whole performs the system-level functions representing the requirements specified by the user. Architectures that survive the functionality evaluator are further evaluated by the selection process of the genetic algorithm. Architectures found to be most promising to satisfy the user's requirements and to perform optimally are selected as parents of the next generation of architectures. The foregoing process is iterated as many times as the user desires. The final output is one or a few viable architectures that satisfy the user's requirements.
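
    A minimal illustrative sketch of the generate-evaluate-select loop described above. The component pool, required functions, viability check, and fitness measure are invented placeholders, not the AST's actual representation, and mutation is omitted for brevity.

    ```python
    import random

    COMPONENT_POOL = ["star_tracker", "imu", "gps", "flight_cpu", "backup_cpu", "bus_1553"]
    REQUIRED_FUNCTIONS = {           # any one listed component satisfies the function
        "attitude": {"star_tracker", "imu"},
        "navigation": {"gps", "imu"},
        "processing": {"flight_cpu", "backup_cpu"},
    }

    def random_architecture():
        return frozenset(c for c in COMPONENT_POOL if random.random() < 0.5)

    def is_viable(arch):
        """Functionality evaluator: every system-level function must be covered."""
        return all(arch & options for options in REQUIRED_FUNCTIONS.values())

    def fitness(arch):
        """Prefer viable architectures with fewer components (a crude mass/cost proxy)."""
        return -len(arch) if is_viable(arch) else -1e9

    def crossover(a, b):
        # each component inherited at random from one of the two parents
        return frozenset(c for c in COMPONENT_POOL if c in (a if random.random() < 0.5 else b))

    population = [random_architecture() for _ in range(50)]
    for generation in range(20):
        viable = [a for a in population if is_viable(a)]     # reject non-viable architectures
        population = viable or population
        parents = sorted(population, key=fitness, reverse=True)[:10]
        population = parents + [crossover(random.choice(parents), random.choice(parents))
                                for _ in range(40)]

    print(sorted(population, key=fitness, reverse=True)[0])  # best surviving architecture
    ```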

  15. The Common Evolution of Geometry and Architecture from a Geodetic Point of View

    NASA Astrophysics Data System (ADS)

    Bellone, T.; Fiermonte, F.; Mussio, L.

    2017-05-01

    Throughout history, the link between geometry and architecture has been strong, and while architects have used mathematics to construct their buildings, geometry has always been the essential tool allowing them to choose spatial shapes which are aesthetically appropriate. Sometimes it is geometry which drives architectural choices, but at other times it is architectural innovation which facilitates the emergence of new ideas in geometry. Among the best known types of geometry (Euclidean, projective, analytical, topological, descriptive, fractal, …), those most frequently employed in architectural design are Euclidean geometry, projective geometry, and the non-Euclidean geometries. Entire architectural periods are linked to specific types of geometry. Euclidean geometry, for example, was the basis for architectural styles from Antiquity through to the Romanesque period. Perspective and projective geometry, for their part, were important from the Gothic period through the Renaissance and into the Baroque and Neo-classical eras, while non-Euclidean geometries characterize modern architecture.

  16. Hybrid massively parallel fast sweeping method for static Hamilton–Jacobi equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detrixhe, Miles, E-mail: mdetrixhe@engineering.ucsb.edu; University of California Santa Barbara, Santa Barbara, CA, 93106; Gibou, Frédéric, E-mail: fgibou@engineering.ucsb.edu

    The fast sweeping method is a popular algorithm for solving a variety of static Hamilton–Jacobi equations. Fast sweeping algorithms for parallel computing have been developed, but are severely limited. In this work, we present a multilevel, hybrid parallel algorithm that combines the desirable traits of two distinct parallel methods. The fine- and coarse-grained components of the algorithm take advantage of the heterogeneous computer architectures common in high-performance computing facilities. We present the algorithm and demonstrate its effectiveness on a set of example problems including optimal control, dynamic games, and seismic wave propagation. We give results for convergence and parallel scaling, and show state-of-the-art speedup values for the fast sweeping method.
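
    For readers unfamiliar with the underlying serial algorithm, the sketch below solves the simplest static Hamilton–Jacobi equation, the eikonal equation |∇u| = 1, with four Gauss-Seidel sweep orderings and a Godunov upwind update. It is a plain serial illustration, not the hybrid parallel algorithm of the paper, and grid size and iteration count are arbitrary.

    ```python
    import numpy as np

    def fast_sweep_eikonal(seed_mask, h=1.0, n_iters=4):
        n, m = seed_mask.shape
        u = np.where(seed_mask, 0.0, 1e10)          # boundary condition: u = 0 at the seeds
        sweeps = [(range(n), range(m)), (range(n), range(m - 1, -1, -1)),
                  (range(n - 1, -1, -1), range(m)), (range(n - 1, -1, -1), range(m - 1, -1, -1))]
        for _ in range(n_iters):
            for rows, cols in sweeps:                # the four sweep orderings
                for i in rows:
                    for j in cols:
                        if seed_mask[i, j]:
                            continue
                        a = min(u[i - 1, j] if i > 0 else 1e10, u[i + 1, j] if i < n - 1 else 1e10)
                        b = min(u[i, j - 1] if j > 0 else 1e10, u[i, j + 1] if j < m - 1 else 1e10)
                        if abs(a - b) >= h:          # Godunov upwind update
                            cand = min(a, b) + h
                        else:
                            cand = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b) ** 2))
                        u[i, j] = min(u[i, j], cand)
        return u

    seeds = np.zeros((50, 50), dtype=bool); seeds[25, 25] = True
    print(fast_sweep_eikonal(seeds)[25, 45])         # roughly 20 grid units from the seed
    ```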

  17. Capability and Technology Performance Goals for the Next Step in Affordable Human Exploration of Space

    NASA Technical Reports Server (NTRS)

    Linne, Diane L.; Sanders, Gerald B.; Taminger, Karen M.

    2015-01-01

    The capability for living off the land, commonly called in-situ resource utilization, is finally gaining traction in space exploration architectures. Production of oxygen from the Martian atmosphere is called an enabling technology for human return from Mars, and a flight demonstration to be flown on the Mars 2020 robotic lander is in development. However, many of the individual components still require technical improvements, and system-level trades will be required to identify the best combination of technology options. Based largely on work performed for two recent roadmap activities, this paper defines the capability and technology requirements that will need to be achieved before this game-changing capability can reach its full potential.

  18. Autonomous Closed-Loop Tasking, Acquisition, Processing, and Evaluation for Situational Awareness Feedback

    NASA Technical Reports Server (NTRS)

    Frye, Stuart; Mandl, Dan; Cappelaere, Pat

    2016-01-01

    This presentation describes the closed-loop satellite autonomy methods used to connect users with the assets on Earth Orbiter-1 (EO-1) and similar satellites. The base layer is a distributed architecture based on the Goddard Mission Services Evolution Concept (GMSEC), so each asset remains under independent control. Situational awareness is provided by a middleware layer through a common Application Programmer Interface (API) to GMSEC components developed at GSFC. Users set up their own tasking requests and receive views into immediate past acquisitions in their area of interest and into future acquisition feasibilities across all assets. Automated notifications via pub/sub feeds are returned to users, containing published links to image footprints, algorithm results, and full data sets. Theme-based algorithms are available on demand for processing.

  19. Design, Development, Test, and Evaluation of Atmosphere Revitalization and Environmental Monitoring Systems for Long Duration Missions

    NASA Technical Reports Server (NTRS)

    Roman, Monsi C.; Perry, Jay L.; Jan, Darrell L.

    2012-01-01

    The Advanced Exploration Systems Program's Atmosphere Resource Recovery and Environmental Monitoring (ARREM) project is working to mature optimum atmosphere revitalization and environmental monitoring system architectures. It is the project's objective to enable exploration beyond low Earth orbit (LEO) and improve affordability by focusing on three primary goals: 1) achieving high reliability, 2) reducing dependence on a ground-based logistics resupply model, and 3) maximizing commonality between atmosphere revitalization subsystem components and those needed to support other exploration elements. The ARREM project's strengths include using existing developmental hardware and testing facilities, when possible, and a well-coordinated effort among the NASA field centers that contributed to past ARS and EMS technology development projects.

  20. Worldwide telemedicine services based on distributed multimedia electronic patient records by using the second generation Web server Hyperwave.

    PubMed

    Quade, G; Novotny, J; Burde, B; May, F; Beck, L E; Goldschmidt, A

    1999-01-01

    A distributed multimedia electronic patient record (EPR) is a central component of a medicine-telematics application that supports physicians working in rural areas of South America, and offers medical services to scientists in Antarctica. A Hyperwave server is used to maintain the patient record. As opposed to common web servers, and as a second-generation web server, Hyperwave provides the capability of holding documents in a distributed web space without the problem of broken links. This enables physicians to browse through a patient's record using a standard browser, even if the record is distributed over several servers. The patient record is implemented on the "Good European Health Record" (GEHR) architecture.

  1. Biosequence Similarity Search on the Mercury System

    PubMed Central

    Krishnamurthy, Praveen; Buhler, Jeremy; Chamberlain, Roger; Franklin, Mark; Gyang, Kwame; Jacob, Arpith; Lancaster, Joseph

    2007-01-01

    Biosequence similarity search is an important application in modern molecular biology. Search algorithms aim to identify sets of sequences whose extensional similarity suggests a common evolutionary origin or function. The most widely used similarity search tool for biosequences is BLAST, a program designed to compare query sequences to a database. Here, we present the design of BLASTN, the version of BLAST that searches DNA sequences, on the Mercury system, an architecture that supports high-volume, high-throughput data movement off a data store and into reconfigurable hardware. An important component of application deployment on the Mercury system is the functional decomposition of the application onto both the reconfigurable hardware and the traditional processor. Both the Mercury BLASTN application design and its performance analysis are described. PMID:18846267
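
    An illustrative software sketch of BLASTN's first stage, exact word (seed) matching of short DNA k-mers, which is the stage Mercury offloads to reconfigurable hardware. The sequences, word length, and function names here are simplified placeholders and do not represent the Mercury hardware design.

    ```python
    from collections import defaultdict

    def build_word_index(database_seq, w=11):
        # index every length-w word of the database by its positions
        index = defaultdict(list)
        for pos in range(len(database_seq) - w + 1):
            index[database_seq[pos:pos + w]].append(pos)
        return index

    def find_seed_hits(query_seq, index, w=11):
        # each (query position, database position) pair is a seed for later ungapped extension
        hits = []
        for qpos in range(len(query_seq) - w + 1):
            for dpos in index.get(query_seq[qpos:qpos + w], []):
                hits.append((qpos, dpos))
        return hits

    db = "ACGTACGTTAGCACGTACGTTAGCCGTA" * 10
    print(find_seed_hits("TTAGCACGTACG", build_word_index(db))[:3])
    ```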

  2. Argo: an integrative, interactive, text mining-based workbench supporting curation

    PubMed Central

    Rak, Rafal; Rowley, Andrew; Black, William; Ananiadou, Sophia

    2012-01-01

    Curation of biomedical literature is often supported by the automatic analysis of textual content that generally involves a sequence of individual processing components. Text mining (TM) has been used to enhance the process of manual biocuration, but has been focused on specific databases and tasks rather than an environment integrating TM tools into the curation pipeline, catering for a variety of tasks, types of information and applications. Processing components usually come from different sources and often lack interoperability. The well established Unstructured Information Management Architecture is a framework that addresses interoperability by defining common data structures and interfaces. However, most of the efforts are targeted towards software developers and are not suitable for curators, or are otherwise inconvenient to use on a higher level of abstraction. To overcome these issues we introduce Argo, an interoperable, integrative, interactive and collaborative system for text analysis with a convenient graphic user interface to ease the development of processing workflows and boost productivity in labour-intensive manual curation. Robust, scalable text analytics follow a modular approach, adopting component modules for distinct levels of text analysis. The user interface is available entirely through a web browser that saves the user from going through often complicated and platform-dependent installation procedures. Argo comes with a predefined set of processing components commonly used in text analysis, while giving the users the ability to deposit their own components. The system accommodates various areas and levels of user expertise, from TM and computational linguistics to ontology-based curation. One of the key functionalities of Argo is its ability to seamlessly incorporate user-interactive components, such as manual annotation editors, into otherwise completely automatic pipelines. As a use case, we demonstrate the functionality of an in-built manual annotation editor that is well suited for in-text corpus annotation tasks. Database URL: http://www.nactem.ac.uk/Argo PMID:22434844

  3. Project Integration Architecture: Distributed Lock Management, Deadlock Detection, and Set Iteration

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The migration of the Project Integration Architecture (PIA) to the distributed object environment of the Common Object Request Broker Architecture (CORBA) brings with it the nearly unavoidable requirements of multiaccessor, asynchronous operations. In order to maintain the integrity of data structures in such an environment, it is necessary to provide a locking mechanism capable of protecting the complex operations typical of the PIA architecture. This paper reports on the implementation of a locking mechanism to treat that need. Additionally, the ancillary features necessary to make the distributed lock mechanism work are discussed.
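
    A hedged sketch of deadlock detection via a wait-for graph, the classic technique for the problem this paper addresses; it is not PIA's actual CORBA implementation, and the accessor names are invented.

    ```python
    def find_deadlock(waits_for):
        """waits_for maps each accessor to the accessor currently holding the lock it wants.
        Returns a cycle (list of accessors) if one exists, else None."""
        for start in waits_for:
            seen, node = [], start
            while node in waits_for:
                if node in seen:                 # revisiting a node means a cycle: deadlock
                    return seen[seen.index(node):]
                seen.append(node)
                node = waits_for[node]
        return None

    # Accessor A waits on B, B waits on C, C waits on A: a three-way deadlock.
    print(find_deadlock({"A": "B", "B": "C", "C": "A"}))   # ['A', 'B', 'C']
    print(find_deadlock({"A": "B", "B": "C"}))             # None
    ```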

  4. Radiation-Tolerant Dual Data Bus

    NASA Technical Reports Server (NTRS)

    Kinstler, Gary A.

    2007-01-01

    An architecture, and a method of utilizing the architecture, have been proposed to enable error-free operation of a data bus that includes, and is connected to, commercial off-the-shelf (COTS) circuits and components that are inherently susceptible to single-event upsets [SEUs (bit flips caused by impinging high-energy particles and photons)]. The architecture and method are applicable, more specifically, to data-bus circuitry based on the Institute of Electrical and Electronics Engineers (IEEE) 1394b standard for a high-speed serial bus.

  5. Design and Implementation of a Unified Command and Control Architecture for Multiple Cooperative Unmanned Vehicles Utilizing Commercial Off the Shelf Components

    DTIC Science & Technology

    2015-12-24

    network, allowing each to communicate with all nodes on the network. Additionally, the transmission power will be turned down to the lowest value. This...reserved for these unmanned agents are generally too dull, dirty, dangerous, or difficult for onboard human pilots to complete. Additionally, the use...architectures do have a much higher level of complexity than single-vehicle architectures. Additionally, the weight, size, and power limitations of the

  6. The benefit of combining a deep neural network architecture with ideal ratio mask estimation in computational speech segregation to improve speech intelligibility.

    PubMed

    Bentsen, Thomas; May, Tobias; Kressner, Abigail A; Dau, Torsten

    2018-01-01

    Computational speech segregation attempts to automatically separate speech from noise. This is challenging in conditions with interfering talkers and low signal-to-noise ratios. Recent approaches have adopted deep neural networks and successfully demonstrated speech intelligibility improvements. A selection of components may be responsible for the success with these state-of-the-art approaches: the system architecture, a time frame concatenation technique and the learning objective. The aim of this study was to explore the roles and the relative contributions of these components by measuring speech intelligibility in normal-hearing listeners. A substantial improvement of 25.4 percentage points in speech intelligibility scores was found going from a subband-based architecture, in which a Gaussian Mixture Model-based classifier predicts the distributions of speech and noise for each frequency channel, to a state-of-the-art deep neural network-based architecture. Another improvement of 13.9 percentage points was obtained by changing the learning objective from the ideal binary mask, in which individual time-frequency units are labeled as either speech- or noise-dominated, to the ideal ratio mask, where the units are assigned a continuous value between zero and one. Therefore, both components play significant roles and by combining them, speech intelligibility improvements were obtained in a six-talker condition at a low signal-to-noise ratio.
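
    A minimal sketch of the two learning objectives compared above: the ideal binary mask (a hard speech/noise decision per time-frequency unit) and the ideal ratio mask (a continuous value between zero and one). The spectrograms below are random placeholders standing in for short-time power spectra of speech and noise.

    ```python
    import numpy as np

    def ideal_binary_mask(speech_power, noise_power, lc_db=0.0):
        """1 where the local SNR exceeds the criterion, else 0."""
        snr_db = 10 * np.log10(speech_power / np.maximum(noise_power, 1e-12))
        return (snr_db > lc_db).astype(float)

    def ideal_ratio_mask(speech_power, noise_power):
        """Soft mask: fraction of energy attributed to speech in each time-frequency unit."""
        return np.sqrt(speech_power / (speech_power + noise_power + 1e-12))

    S = np.random.rand(64, 100)      # |speech STFT|^2 (freq x time), placeholder data
    N = np.random.rand(64, 100)      # |noise  STFT|^2
    print(ideal_binary_mask(S, N).mean(), ideal_ratio_mask(S, N).mean())
    ```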

  7. A Proposed Information Architecture for Telehealth System Interoperability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, S.; Craft, R.L.; Parks, R.C.

    1999-04-07

    Telemedicine technology is rapidly evolving. Whereas early telemedicine consultations relied primarily on video conferencing, consultations today may utilize video conferencing, medical peripherals, store-and-forward capabilities, electronic patient record management software, and/or a host of other emerging technologies. These remote care systems rely increasingly on distributed, collaborative information technology during the care delivery process, in its many forms. While these leading-edge systems are bellwethers for highly advanced telemedicine, the remote care market today is still immature. Most telemedicine systems are custom-designed and do not interoperate with other commercial offerings. Users are limited to a set of functionality that a single vendor provides and must often pay high prices to obtain this functionality, since vendors in this marketplace must deliver entire systems in order to compete. Besides increasing corporate research and development costs, this inhibits the ability of the user to make intelligent purchasing decisions regarding best-of-breed technologies. We propose a secure, object-oriented information architecture for telemedicine systems that promotes plug-and-play interaction between system components through standardized interfaces, communication protocols, messaging formats, and data definitions. In this architecture, each component functions as a black box, and components plug together in a lego-like fashion to achieve the desired device or system functionality. The architecture will support various ongoing standards work in the medical device arena.

  8. DRO1 influences root system architecture in Arabidopsis and Prunus species

    USDA-ARS?s Scientific Manuscript database

    Roots provide essential uptake of water and nutrients from the soil, as well as anchorage and stability for the whole plant. Root orientation or angle is an important component of the overall architecture and depth of the root system; however, little is known about the genetic control of this trai...

  9. Software system architecture for corporate user support

    NASA Astrophysics Data System (ADS)

    Sukhopluyeva, V. S.; Kuznetsov, D. Y.

    2017-01-01

    In this article, several existing ready-to-use HelpDesk solutions are reviewed, and the advantages and disadvantages of these systems are identified. The architecture of a software solution for a corporate user support system is presented in the form of use case, state, and component diagrams described using the Unified Modeling Language (UML).

  10. Reliability Engineering for Service Oriented Architectures

    DTIC Science & Technology

    2013-02-01

    Common Object Request Broker Architecture. Ecosystem: In software, an ecosystem is a set of applications and/or services that gradually build up over time...Enterprise Service Bus. Foreign: In an SOA context, any SOA, service, or software which the owners of the calling software do not have control of, either...SOA: Service Oriented Architecture. SRE: Software Reliability Engineering. System Mode: Many systems exhibit different modes of operation, e.g. the cockpit

  11. Electro-optic architecture for servicing sensors and actuators in advanced aircraft propulsion systems

    NASA Technical Reports Server (NTRS)

    Poppel, G. L.; Glasheen, W. M.

    1989-01-01

    A detailed design of a fiber optic propulsion control system, integrating favored sensors and electro-optics architecture is presented. Layouts, schematics, and sensor lists describe an advanced fighter engine system model. Components and attributes of candidate fiber optic sensors are identified, and evaluation criteria are used in a trade study resulting in favored sensors for each measurand. System architectural ground rules were applied to accomplish an electro-optics architecture for the favored sensors. A key result was a considerable reduction in signal conductors. Drawings, schematics, specifications, and printed circuit board layouts describe the detailed system design, including application of a planar optical waveguide interface.

  12. A Case for Data Commons

    PubMed Central

    Grossman, Robert L.; Heath, Allison; Murphy, Mark; Patterson, Maria; Wells, Walt

    2017-01-01

    Data commons collocate data, storage, and computing infrastructure with core services and commonly used tools and applications for managing, analyzing, and sharing data to create an interoperable resource for the research community. An architecture for data commons is described, as well as some lessons learned from operating several large-scale data commons. PMID:29033693

  13. Workflow-enabled distributed component-based information architecture for digital medical imaging enterprises.

    PubMed

    Wong, Stephen T C; Tjandra, Donny; Wang, Huili; Shen, Weimin

    2003-09-01

    Few information systems today offer a flexible means to define and manage the automated part of radiology processes, which provide clinical imaging services for the entire healthcare organization. Even fewer of them provide a coherent architecture that can easily cope with heterogeneity and inevitable local adaptation of applications and can integrate clinical and administrative information to aid better clinical, operational, and business decisions. We describe an innovative enterprise architecture of image information management systems to fill these needs. Such a system is based on the interplay of production workflow management, distributed object computing, Java and Web techniques, and in-depth domain knowledge in radiology operations. Our design adopts the "4+1" architectural view approach. In this new architecture, PACS and RIS become one, while user interaction can be automated by customized workflow processes. Clinical service applications are implemented as active components. They can be substituted by locally adapted applications and can be replicated for fault tolerance and load balancing. Furthermore, the workflow-enabled digital radiology system would provide powerful query and statistical functions for managing resources and improving productivity. This paper will potentially lead to a new direction of image information management. We illustrate the innovative design with examples taken from an implemented system.

  14. HiMoP: A three-component architecture to create more human-acceptable social-assistive robots: Motivational architecture for assistive robots.

    PubMed

    Rodríguez-Lera, Francisco J; Matellán-Olivera, Vicente; Conde-González, Miguel Á; Martín-Rico, Francisco

    2018-05-01

    Generation of autonomous behavior for robots is a general unsolved problem. Users perceive robots as repetitive tools that do not respond to dynamic situations. This research deals with the generation of natural behaviors in assistive service robots for dynamic domestic environments, particularly through a motivation-oriented cognitive architecture that generates more natural behaviors in autonomous robots. The proposed architecture, called HiMoP, is based on three elements: a Hierarchy of needs to define robot drives; a set of Motivational variables connected to robot needs; and a Pool of finite-state machines to run robot behaviors. The first element is inspired by Alderfer's hierarchy of needs, which specifies the variables defined in the motivational component. The pool of finite-state machines implements the available robot actions, and those actions are dynamically selected taking into account the motivational variables and the external stimuli. Thus, the robot is able to exhibit different behaviors even under similar conditions. A customized version of the "Speech Recognition and Audio Detection Test," proposed by the RoboCup Federation, has been used to illustrate how the architecture works and how it dynamically adapts and activates robot behaviors taking into account internal variables and external stimuli.
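
    A hedged sketch of how the three ingredients described above could interact: a hierarchy of needs, motivational variables, and a pool of behaviors selected from both internal state and external stimuli. All names, priorities, and numeric values are invented for illustration and are not HiMoP's actual parameters.

    ```python
    NEED_PRIORITY = {"existence": 0, "relatedness": 1, "growth": 2}   # Alderfer-style hierarchy

    motivational = {"battery": 0.9, "social": 0.2, "curiosity": 0.6}  # 0 = unsatisfied, 1 = satisfied

    behavior_pool = [   # (behavior name, need it serves, motivational variable it restores)
        ("dock_and_charge", "existence", "battery"),
        ("greet_person", "relatedness", "social"),
        ("explore_room", "growth", "curiosity"),
    ]

    def select_behavior(motivational, stimuli):
        """Pick the behavior serving the most urgent unsatisfied need, biased by stimuli."""
        def urgency(entry):
            name, need, variable = entry
            boost = 0.3 if name in stimuli else 0.0          # external stimulus bias
            return (1.0 - motivational[variable]) + boost - 0.1 * NEED_PRIORITY[need]
        return max(behavior_pool, key=urgency)[0]

    print(select_behavior(motivational, stimuli={"greet_person"}))  # the social need dominates here
    ```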

  15. Flexible All-Digital Receiver for Bandwidth Efficient Modulations

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Srinivasan, Meera; Simon, Marvin; Yan, Tsun-Yee

    2000-01-01

    An all-digital high data rate parallel receiver architecture developed jointly by Goddard Space Flight Center and the Jet Propulsion Laboratory is presented. This receiver utilizes only a small number of high speed components along with a majority of lower speed components operating in a parallel frequency domain structure implementable in CMOS, and can currently process up to 600 Mbps with standard QPSK modulation. Performance results for this receiver for bandwidth efficient QPSK modulation schemes such as square-root raised cosine pulse shaped QPSK and Feher's patented QPSK are presented, demonstrating the flexibility of the receiver architecture.

  16. Proposed Functional Architecture and Associated Benefits Analysis of a Common Ground Control Station for Unmanned Aircraft Systems

    DTIC Science & Technology

    2010-03-01

    143 Table 12. High Level Analysis of O&S Costs of Different Training Options...Station On station 24/7 (ETOS 80%) On station 24/7 for 30 consecutive days (ETOS 95%) Mission Radius ≥ 2,000 nm ≥ 3,000 nm Net Ready-KPP COMMON...Training with As-Is Unique GCS Architectures 143 The results of the analysis for BUQs I-IV are shown in Table 11. The data shows that

  17. Business Case Analysis for the Versatile Depot Automated Test Station Used in the USAF Warner Robins Air Logistics Center Maintenance Depot

    DTIC Science & Technology

    2008-06-01

    executes the avionics test) can run on the new ATS, thus creating the common ATS framework. The system will also enable numerous new functional...Enterprise-level architecture that reflects corporate DoD priorities and requirements for business systems, and provides a common framework to ensure that...entire Business Mission Area (BMA) of the DoD. The BEA also contains a set of integrated Department of Defense Architecture Framework (DoDAF

  18. OneGeology-Europe: architecture, portal and web services to provide a European geological map

    NASA Astrophysics Data System (ADS)

    Tellez-Arenas, Agnès.; Serrano, Jean-Jacques; Tertre, François; Laxton, John

    2010-05-01

    OneGeology-Europe is a large, ambitious project to make geological spatial data further known and accessible. The OneGeology-Europe project develops an integrated system of data to create and make accessible for the first time through the internet the geological map of the whole of Europe. The architecture implemented by the project is web services oriented, based on the OGC standards: the geological map is not a centralized database but is composed of several web services, each of them hosted by a European country involved in the project. Since geological data are elaborated differently from country to country, they are difficult to share. OneGeology-Europe, while providing more detailed and complete information, will foster even beyond the geological community an easier exchange of data within Europe and globally. This implies important work on the harmonization of the data, both the model and the content. OneGeology-Europe is characterised by the high technological capacity of the EU Member States, and has the final goal to achieve the harmonisation of European geological survey data according to common standards. As a direct consequence Europe will make a further step in terms of innovation and information dissemination, continuing to play a world leading role in the development of geosciences information. The scope of the common harmonized data model was defined primarily by the requirements of the geological map of Europe, but in addition users were consulted and the requirements of both INSPIRE and 'high-resolution' geological maps were considered. The data model is based on GeoSciML, developed since 2006 by a group of Geological Surveys. The data providers involved in the project implemented a new component that allows the web services to deliver the geological map expressed in GeoSciML. In order to capture the information describing the geological units of the map of Europe, the scope of the data model needs to include lithology, age, genesis, and metamorphic character. For high-resolution maps, physical properties, bedding characteristics, and weathering also need to be added. Furthermore, geological data held by national geological surveys are generally described in the national language of the country. The project has to deal with the multilingual issue, an important requirement of the INSPIRE directive. The project provides a list of harmonized vocabularies, a set of web services to deal with them, and a web site to help geoscientists map the terms used in the national datasets onto these vocabularies. The web services provided by each data provider, with the particular component that allows them to deliver the harmonised data model and to handle the multilingualism, are the first part of the architecture. The project also implements a web portal that provides several functionalities. Thanks to the common data model implemented by each web service delivering a part of the geological map, and using OGC SLD standards, the client offers the following option. A user can request a sub-selection of the map, for instance by searching on a particular attribute such as "age is Quaternary", and display only the parts of the map matching the filter. Using the web services on the common vocabularies, the data displayed are translated. The project started in September 2008 and runs for two years, with 29 partners from 20 countries (20 partners are Geological Surveys). The budget is 3.25 M€, with a European Commission contribution of 2.6 M€.
The paper will describe the technical solutions to implement OneGeology-Europe components: the profile of the common data model to exchange geological data, the web services to view and access geological data; and a geoportal to provide the user with a user-friendly way to discover, view and access geological data.
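
    A hedged sketch of how a portal client might request the filtered view mentioned above ("age is Quaternary") from one of the distributed OGC web services. The endpoint URL, layer name, and attribute name are hypothetical; the filter is expressed through a Styled Layer Descriptor (SLD) document passed alongside a standard WMS GetMap request.

    ```python
    import urllib.parse

    SLD_BODY = """<StyledLayerDescriptor version="1.0.0"
      xmlns="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc">
      <NamedLayer><Name>geology:units</Name><UserStyle><FeatureTypeStyle><Rule>
        <ogc:Filter><ogc:PropertyIsEqualTo>
          <ogc:PropertyName>age</ogc:PropertyName><ogc:Literal>Quaternary</ogc:Literal>
        </ogc:PropertyIsEqualTo></ogc:Filter>
        <PolygonSymbolizer/>
      </Rule></FeatureTypeStyle></UserStyle></NamedLayer>
    </StyledLayerDescriptor>"""

    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": "geology:units", "STYLES": "",
        "CRS": "EPSG:4326", "BBOX": "35,-10,70,40",     # rough European extent (lat/lon order)
        "WIDTH": "800", "HEIGHT": "600", "FORMAT": "image/png",
        "SLD_BODY": SLD_BODY,                            # only the Quaternary units are drawn
    }
    # hypothetical national-survey endpoint participating in the federation
    print("https://example.org/national-survey/wms?" + urllib.parse.urlencode(params))
    ```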

  19. The Allelic Landscape of Human Blood Cell Trait Variation and Links to Common Complex Disease.

    PubMed

    Astle, William J; Elding, Heather; Jiang, Tao; Allen, Dave; Ruklisa, Dace; Mann, Alice L; Mead, Daniel; Bouman, Heleen; Riveros-Mckay, Fernando; Kostadima, Myrto A; Lambourne, John J; Sivapalaratnam, Suthesh; Downes, Kate; Kundu, Kousik; Bomba, Lorenzo; Berentsen, Kim; Bradley, John R; Daugherty, Louise C; Delaneau, Olivier; Freson, Kathleen; Garner, Stephen F; Grassi, Luigi; Guerrero, Jose; Haimel, Matthias; Janssen-Megens, Eva M; Kaan, Anita; Kamat, Mihir; Kim, Bowon; Mandoli, Amit; Marchini, Jonathan; Martens, Joost H A; Meacham, Stuart; Megy, Karyn; O'Connell, Jared; Petersen, Romina; Sharifi, Nilofar; Sheard, Simon M; Staley, James R; Tuna, Salih; van der Ent, Martijn; Walter, Klaudia; Wang, Shuang-Yin; Wheeler, Eleanor; Wilder, Steven P; Iotchkova, Valentina; Moore, Carmel; Sambrook, Jennifer; Stunnenberg, Hendrik G; Di Angelantonio, Emanuele; Kaptoge, Stephen; Kuijpers, Taco W; Carrillo-de-Santa-Pau, Enrique; Juan, David; Rico, Daniel; Valencia, Alfonso; Chen, Lu; Ge, Bing; Vasquez, Louella; Kwan, Tony; Garrido-Martín, Diego; Watt, Stephen; Yang, Ying; Guigo, Roderic; Beck, Stephan; Paul, Dirk S; Pastinen, Tomi; Bujold, David; Bourque, Guillaume; Frontini, Mattia; Danesh, John; Roberts, David J; Ouwehand, Willem H; Butterworth, Adam S; Soranzo, Nicole

    2016-11-17

    Many common variants have been associated with hematological traits, but identification of causal genes and pathways has proven challenging. We performed a genome-wide association analysis in the UK Biobank and INTERVAL studies, testing 29.5 million genetic variants for association with 36 red cell, white cell, and platelet properties in 173,480 European-ancestry participants. This effort yielded hundreds of low frequency (<5%) and rare (<1%) variants with a strong impact on blood cell phenotypes. Our data highlight general properties of the allelic architecture of complex traits, including the proportion of the heritable component of each blood trait explained by the polygenic signal across different genome regulatory domains. Finally, through Mendelian randomization, we provide evidence of shared genetic pathways linking blood cell indices with complex pathologies, including autoimmune diseases, schizophrenia, and coronary heart disease and evidence suggesting previously reported population associations between blood cell indices and cardiovascular disease may be non-causal.

  20. An open-source, extensible system for laboratory timing and control

    NASA Astrophysics Data System (ADS)

    Gaskell, Peter E.; Thorn, Jeremy J.; Alba, Sequoia; Steck, Daniel A.

    2009-11-01

    We describe a simple system for timing and control, which provides control of analog, digital, and radio-frequency signals. Our system differs from most common laboratory setups in that it is open source, built from off-the-shelf components, synchronized to a common and accurate clock, and connected over an Ethernet network. A simple bus architecture facilitates creating new and specialized devices with only moderate experience in circuit design. Each device operates independently, requiring only an Ethernet network connection to the controlling computer, a clock signal, and a trigger signal. This makes the system highly robust and scalable. The devices can all be connected to a single external clock, allowing synchronous operation of a large number of devices for situations requiring precise timing of many parallel control and acquisition channels. Provided an accurate enough clock, these devices are capable of triggering events separated by one day with near-microsecond precision. We have achieved precisions of ~0.1 ppb (parts per 10^9) over 16 s.

  1. Distributed visualization framework architecture

    NASA Astrophysics Data System (ADS)

    Mishchenko, Oleg; Raman, Sundaresan; Crawfis, Roger

    2010-01-01

    An architecture for distributed and collaborative visualization is presented. The design goals of the system are to create a lightweight, easy-to-use, and extensible framework for research in scientific visualization. The system provides both single-user and collaborative distributed environments. The system architecture employs a client-server model. Visualization projects can be synchronously accessed and modified from different client machines. We present a set of visualization use cases that illustrate the flexibility of our system. The framework provides a rich set of reusable components for creating new applications. These components make heavy use of leading design patterns. All components are based on the functionality of a small set of interfaces. This allows new components to be integrated seamlessly with little to no effort. All user input and higher-level control functionality interface with proxy objects supporting a concrete implementation of these interfaces. These lightweight objects can be easily streamed across the web and even integrated with smart clients running on a user's cell phone. The back-end is supported by concrete implementations wherever needed (for instance, for rendering). A middle tier manages any communication and synchronization with the proxy objects. In addition to the data components, we have developed several first-class GUI components for visualization. These include a layer compositor editor, a programmable shader editor, a material editor, and various drawable editors. These GUI components interact strictly with the interfaces. Access to the various entities in the system is provided by an AssetManager. The asset manager keeps track of all of the registered proxies and responds to queries on the overall system. This allows all user components to be populated automatically. Hence, if a new component is added that supports the IMaterial interface, any instances of it can be used in the various GUI components that work with this interface. One of the main features is an interactive shader designer that allows rapid prototyping of new shader-based visualization renderings and greatly accelerates the development and debug cycle.
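
    A minimal sketch of the interface/proxy/asset-manager pattern described above. The class names (other than IMaterial and AssetManager, which the abstract mentions) and the single-process delegation are illustrative assumptions, not the framework's actual API; a real system would marshal the proxy calls over the network through the middle tier.

    ```python
    class IMaterial:                       # small interface all material components implement
        def shade(self) -> str: ...

    class PhongMaterial(IMaterial):        # concrete back-end implementation (renderer side)
        def shade(self) -> str:
            return "phong shading"

    class MaterialProxy(IMaterial):        # lightweight object streamed to clients
        def __init__(self, remote):        # 'remote' stands in for the middle-tier connection
            self._remote = remote
        def shade(self) -> str:
            return self._remote.shade()    # delegate to the concrete implementation

    class AssetManager:                    # registry that GUI components query by interface
        def __init__(self):
            self._assets = []
        def register(self, obj):
            self._assets.append(obj)
        def find(self, interface):
            return [a for a in self._assets if isinstance(a, interface)]

    manager = AssetManager()
    manager.register(MaterialProxy(PhongMaterial()))
    print([m.shade() for m in manager.find(IMaterial)])   # GUI lists auto-populate from the registry
    ```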

  2. Flight Demonstration of X-33 Vehicle Health Management System Components on the F/A-18 Systems Research Aircraft

    NASA Technical Reports Server (NTRS)

    Schweikhard, Keith A.; Richards, W. Lance; Theisen, John; Mouyos, William; Garbos, Raymond

    2001-01-01

    The X-33 reusable launch vehicle demonstrator has identified the need to implement a vehicle health monitoring system that can acquire data that monitors system health and performance. Sanders, a Lockheed Martin Company, has designed and developed a COTS-based open architecture system that implements a number of technologies that have not been previously used in a flight environment. NASA Dryden Flight Research Center and Sanders teamed to demonstrate that the distributed remote health nodes, fiber optic distributed strain sensor, and fiber distributed data interface communications components of the X-33 vehicle health management (VHM) system could be successfully integrated and flown on a NASA F-18 aircraft. This paper briefly describes components of X-33 VHM architecture flown at Dryden and summarizes the integration and flight demonstration of these X-33 VHM components. Finally, it presents early results from the integration and flight efforts.

  3. Flight Demonstration of X-33 Vehicle Health Management System Components on the F/A-18 Systems Research Aircraft

    NASA Technical Reports Server (NTRS)

    Schweikhard, Keith A.; Richards, W. Lance; Theisen, John; Mouyos, William; Garbos, Raymond; Schkolnik, Gerald (Technical Monitor)

    1998-01-01

    The X-33 reusable launch vehicle demonstrator has identified the need to implement a vehicle health monitoring system that can acquire data that monitors system health and performance. Sanders, a Lockheed Martin Company, has designed and developed a commercial off-the-shelf (COTS)-based open architecture system that implements a number of technologies that have not been previously used in a flight environment. NASA Dryden Flight Research Center and Sanders teamed to demonstrate that the distributed remote health nodes, fiber optic distributed strain sensor, and fiber distributed data interface communications components of the X-33 vehicle health management (VHM) system could be successfully integrated and flown on a NASA F-18 aircraft. This paper briefly describes components of X-33 VHM architecture flown at Dryden and summarizes the integration and flight demonstration of these X-33 VHM components. Finally, it presents early results from the integration and flight efforts.

  4. The software architecture of climate models: a graphical comparison of CMIP5 and EMICAR5 configurations

    NASA Astrophysics Data System (ADS)

    Alexander, K.; Easterbrook, S. M.

    2015-01-01

    We analyse the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams which show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modelling groups. These diagrams offer insights into the similarities and differences between models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.

  5. The software architecture of climate models: a graphical comparison of CMIP5 and EMICAR5 configurations

    NASA Astrophysics Data System (ADS)

    Alexander, K.; Easterbrook, S. M.

    2015-04-01

    We analyze the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams that show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modeling groups. These diagrams offer insights into the similarities and differences in structure between climate models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.

  6. Thermostructural Properties of SiC/SiC Panels with 2.5D and 3D Fiber Architectures

    NASA Technical Reports Server (NTRS)

    Yun, H. M.; DeCarlo, J. A.; Bhatt, R. H.; Jaskowiak, M. H.

    2005-01-01

    CMC hot-section components in advanced engines for power and propulsion will typically require high cracking strength, high ultimate strength and strain, high creep-rupture resistance, and high thermal conductivity in all directions. In the past, NASA has demonstrated fabrication of a variety of SiC/SiC flat panels and round tubes with various 2D fiber architectures using the high-modulus, high-performance Sylramic-iBN SiC fiber and SiC-based matrices derived by CVI, MI, and/or PIP processes. The thermo-mechanical properties of these CMCs have shown state-of-the-art performance, but primarily in the in-plane directions. Currently NASA is extending the thermostructural capability of these SiC/SiC systems in the thru-thickness direction by using various 2.5D and 3D fiber architectures. NASA is also using specially designed fabrication steps to optimize the properties of the BN-based interphase and SiC-based matrices. In this study, Sylramic-iBN/SiC panels with 2D plain weave, 2.5D satin weave, 2.5D ply-to-ply interlock weave, and 3D angle interlock fiber architectures, all woven at AITI, were fabricated using matrix densification routes previously established between NASA and GEPSC for CVI-MI processes and between NASA and Starfire-Systems for PIP processes. Introduction of the 2.5D fiber architecture along with an improved matrix process was found to increase inter-laminar tensile strength from 1.5-2 to 3-4 ksi and thru-thickness thermal conductivity from 15-20 to 30-35 BTU/ft·hr·°F with minimal reduction in in-plane strength and creep-rupture properties. Such improvements should reduce thermal stresses and increase the thermostructural operating envelope for SiC/SiC engine components. These results are analyzed to offer general guidelines for selecting fiber architectures and constituent processes for high-performance SiC/SiC engine components.

  7. Design and flight test results of high speed optical bidirectional link between stratospheric platforms for aerospace applications

    NASA Astrophysics Data System (ADS)

    Briatore, S.; Akhtyamov, R.; Golkar, A.

    2017-08-01

    As small and nanosatellites become increasingly relevant in the aerospace industry [1, 2], the need for efficient, lightweight, and cost-effective networking solutions drives the development of lightweight and low-cost networking and communication terminals. In this paper we present the design and prototype results of a hybrid optical and radio communication architecture developed to fit the coarse pointing capabilities of nanosatellites, tested through a proxy flight experiment on stratospheric balloons. This system takes advantage of the higher data rate offered by optical communication channels while relying on the more mature and stable technology of conventional radio systems for link negotiation and low-speed data exchange. Such an architecture allows the user to overcome the licensing requirements and scarce availability of high data-rate radio frequency channels in the commonly used bands. Outlined are the architecture, development, and test of the mentioned terminal, with focus on the communication part and supporting technologies, including the navigation algorithm, the developed fail-safe approach, and the evolution of the pointing system continuing previous work done in [3]. The system has been built with commercial-off-the-shelf components and demonstrated on a stratospheric balloon launch campaign. The paper outlines the results of an in-flight demonstration, where the two platforms successfully established an optical link at stratospheric altitudes. The results are then analyzed and contextualized in plans for future work on nanosatellite implementations.

  8. Molecular architecture requirements for polymer-grafted lignin superplasticizers.

    PubMed

    Gupta, Chetali; Sverdlove, Madeline J; Washburn, Newell R

    2015-04-07

    Superplasticizers are a class of anionic polymer dispersants used to inhibit aggregation in hydraulic cement, lowering the yield stress of cement pastes to improve workability and reduce water requirements. The plant-derived biopolymer lignin is commonly used as a low-cost/low-performance plasticizer, but attempts to improve its effects on cement rheology through copolymerization with synthetic monomers have not led to significant improvements. Here we demonstrate that kraft lignin can form the basis for high-performance superplasticizers in hydraulic cement, but the molecular architecture must be based on a lignin core with a synthetic-polymer corona that can be produced via controlled radical polymerization. Using slump tests of ordinary Portland cement pastes, we show that polyacrylamide-grafted lignin prepared via reversible addition-fragmentation chain transfer polymerization can reduce the yield stress of cement paste to similar levels as a leading commercial polycarboxylate ether superplasticizer at concentrations ten-fold lower, although the lignin material produced via controlled radical polymerization does not appear to reduce the dynamic viscosity of cement paste as effectively as the polycarboxylate superplasticizer, despite having a similar affinity for the individual mineral components of ordinary Portland cement. In contrast, polyacrylamide copolymerized with a methacrylated kraft lignin via conventional free radical polymerization having a similar overall composition did not reduce the yield stress or the viscosity of cement pastes. While further work is required to elucidate the mechanism of this effect, these results indicate that controlling the architecture of polymer-grafted lignin can significantly enhance its performance as a superplasticizer for cement.

  9. Comparison of LIDAR system performance for alternative single-mode receiver architectures: modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Toliver, Paul; Ozdur, Ibrahim; Agarwal, Anjali; Woodward, T. K.

    2013-05-01

    In this paper, we describe a detailed performance comparison of alternative single-pixel, single-mode LIDAR architectures including (i) linear-mode APD-based direct-detection, (ii) optically-preamplified PIN receiver, (iii) PIN-based coherent-detection, and (iv) Geiger-mode single-photon-APD counting. Such a comparison is useful when considering next-generation LIDAR on a chip, which would allow one to leverage extensive waveguide-based structures and processing elements developed for telecom and apply them to small form-factor sensing applications. Models of four LIDAR transmit and receive systems are described in detail, which include not only the dominant sources of receiver noise commonly assumed in each of the four detection limits, but also additional noise terms present in realistic implementations. These receiver models are validated through the analysis of detection statistics collected from an experimental LIDAR testbed. The receiver is reconfigurable into four modes of operation, while transmit waveforms and channel characteristics are held constant. The use of a diffuse hard target highlights the importance of including speckle noise terms in the overall system analysis. All measurements are done at 1550 nm, which offers multiple system advantages including less stringent eye safety requirements and compatibility with available telecom components, optical amplification, and photonic integration. Ultimately, the experimentally-validated detection statistics can be used as part of an end-to-end system model for projecting rate, range, and resolution performance limits and tradeoffs of alternative integrated LIDAR architectures.

  10. Mercury: An Example of Effective Software Reuse for Metadata Management, Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.

    2008-12-01

    Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. Though originally developed for NASA, the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve the software reusability across the 12 projects which currently fund the continuing development of Mercury. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture has three major reusable components: a harvester engine, an indexing system, and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software was packaged in such a way that all the Mercury projects will use the same harvester scripts but each project will be driven by a set of project-specific configuration files. The harvested files are structured metadata records that are indexed against the search library API consistently, so that it can render various search capabilities such as simple, fielded, spatial and temporal. This backend component is supported by a very flexible, easy-to-use graphical user interface driven by cascading style sheets, which make it even simpler for reusable design implementation. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. The software also provides various search services including: RSS, Geo-RSS, OpenSearch, Web Services and Portlets, an integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC), and integrated visualization tools. Other features include: filtering and dynamic sorting of search results, bookmarkable search results, and the ability to save, retrieve, and modify search criteria.

  11. Mercury: An Example of Effective Software Reuse for Metadata Management, Data Discovery and Access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devarakonda, Ranjeet

    2008-01-01

    Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. Though originally developed for NASA, the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve the software reusability across the 12 projects which currently fund the continuing development of Mercury. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture has three major reusable components: a harvester engine, an indexing system, and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software was packaged in such a way that all the Mercury projects will use the same harvester scripts but each project will be driven by a set of project-specific configuration files. The harvested files are structured metadata records that are indexed against the search library API consistently, so that it can render various search capabilities such as simple, fielded, spatial and temporal. This backend component is supported by a very flexible, easy-to-use graphical user interface driven by cascading style sheets, which make it even simpler for reusable design implementation. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. The software also provides various search services including: RSS, Geo-RSS, OpenSearch, Web Services and Portlets, an integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC), and integrated visualization tools. Other features include: filtering and dynamic sorting of search results, bookmarkable search results, and the ability to save, retrieve, and modify search criteria.

  12. Evidence of common and separate eye and hand accumulators underlying flexible eye-hand coordination

    PubMed Central

    Jana, Sumitash; Gopal, Atul

    2016-01-01

    Eye and hand movements are initiated by anatomically separate regions in the brain, and yet these movements can be flexibly coupled and decoupled, depending on the need. The computational architecture that enables this flexible coupling of independent effectors is not understood. Here, we studied the computational architecture that enables flexible eye-hand coordination using a drift diffusion framework, which predicts that the variability of the reaction time (RT) distribution scales with its mean. We show that a common stochastic accumulator to threshold, followed by a noisy effector-dependent delay, explains eye-hand RT distributions and their correlation in a visual search task that required decision-making, while an interactive eye and hand accumulator model did not. In contrast, in an eye-hand dual task, an interactive model better predicted the observed correlations and RT distributions than a common accumulator model. Notably, these two models could only be distinguished on the basis of the variability and not the means of the predicted RT distributions. Additionally, signatures of separate initiation signals were also observed in a small fraction of trials in the visual search task, implying that these distinct computational architectures were not a manifestation of the task design per se. Taken together, our results suggest two unique computational architectures for eye-hand coordination, with task context biasing the brain toward instantiating one of the two architectures. NEW & NOTEWORTHY Previous studies on eye-hand coordination have considered mainly the means of eye and hand reaction time (RT) distributions. Here, we leverage the approximately linear relationship between the mean and standard deviation of RT distributions, as predicted by the drift-diffusion model, to propose the existence of two distinct computational architectures underlying coordinated eye-hand movements. These architectures, for the first time, provide a computational basis for the flexible coupling between eye and hand movements. PMID:27784809
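
    A minimal simulation sketch, not the authors' code, can illustrate the common-accumulator architecture described above: a single drift-diffusion process rises to threshold and is followed by independent, noisy effector-specific delays, producing correlated eye and hand RTs whose standard deviation grows with the mean. All parameter values below are arbitrary assumptions.

        # Minimal simulation of a common stochastic accumulator with effector-specific
        # delays (illustrative only; parameter values are arbitrary assumptions).
        import numpy as np

        rng = np.random.default_rng(0)

        def common_accumulator_rt(n_trials=5000, drift=0.25, noise=1.0, threshold=30.0,
                                  dt=1.0, eye_delay=(50, 5), hand_delay=(120, 15)):
            eye_rt, hand_rt = [], []
            for _ in range(n_trials):
                x, t = 0.0, 0.0
                while x < threshold:                 # single accumulator to threshold
                    x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                    t += dt
                eye_rt.append(t + rng.normal(*eye_delay))    # noisy eye efferent delay
                hand_rt.append(t + rng.normal(*hand_delay))  # noisy hand efferent delay
            return np.array(eye_rt), np.array(hand_rt)

        eye, hand = common_accumulator_rt()
        print("eye mean/sd:", eye.mean(), eye.std())
        print("hand mean/sd:", hand.mean(), hand.std())
        print("eye-hand RT correlation:", np.corrcoef(eye, hand)[0, 1])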

  13. Storage strategies of eddy-current FE-BI model for GPU implementation

    NASA Astrophysics Data System (ADS)

    Bardel, Charles; Lei, Naiguang; Udpa, Lalita

    2013-01-01

    In the past few years, graphical processing units (GPUs) have shown tremendous improvements in computational throughput over standard CPU architectures. However, this comes at the cost of restructuring algorithms to match the strengths and drawbacks of the GPU architecture. A major drawback is the limited memory available, which makes the storage of FE stiffness matrices on the GPU important. In contrast to CPU storage, the GPU storage format has a significant influence on overall performance. This paper presents an investigation of a storage strategy for the implementation of a two-dimensional finite element-boundary integral (FE-BI) model for eddy-current NDE applications on a GPU architecture. Specifically, the high-dimensional matrices are manipulated by examining the matrix structure and optimally splitting them into structurally independent component matrices for efficient storage and retrieval of each component. Results obtained using the proposed approach are compared to those of a conventional CPU implementation to validate the method.
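
    The abstract does not give the exact storage formats; as a purely illustrative sketch of the general idea of splitting a matrix into structurally independent components that can each be stored compactly, the following Python fragment separates a small matrix into a banded block plus a sparse remainder. The bandwidth and the example matrix are arbitrary assumptions.

        # Illustrative sketch (not the paper's implementation): split a matrix into a
        # dense banded block plus a sparse remainder, so each piece can be stored in a
        # format suited to limited GPU memory. Bandwidth and sizes are arbitrary.
        import numpy as np

        def split_banded_sparse(A, bandwidth):
            n = A.shape[0]
            banded = np.zeros((2 * bandwidth + 1, n))          # diagonal-wise storage
            remainder = {}                                     # (i, j) -> value
            for i in range(n):
                for j in range(n):
                    if A[i, j] == 0.0:
                        continue
                    if abs(i - j) <= bandwidth:
                        banded[bandwidth + i - j, j] = A[i, j]
                    else:
                        remainder[(i, j)] = A[i, j]            # COO-style entries
            return banded, remainder

        A = np.diag(np.full(6, 4.0)) + np.diag(np.full(5, -1.0), 1) + np.diag(np.full(5, -1.0), -1)
        A[0, 5] = 0.5                                          # off-band coupling term
        banded, remainder = split_banded_sparse(A, bandwidth=1)
        print(banded.shape, remainder)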

  14. A RESTful Service Oriented Architecture for Science Data Processing

    NASA Astrophysics Data System (ADS)

    Duggan, B.; Tilmes, C.; Durbin, P.; Masuoka, E.

    2012-12-01

    The Atmospheric Composition Processing System is an implementation of a RESTful Service Oriented Architecture which handles incoming data from the Ozone Monitoring Instrument and the Ozone Mapping and Profiler Suite aboard the Aura and NPP spacecraft, respectively. The system has been built entirely from open-source components, such as Postgres, Perl, and SQLite, and has leveraged the vast resources of the Comprehensive Perl Archive Network (CPAN). The modular design of the system also allows many of the components to be easily released and integrated into the CPAN ecosystem and reused independently. At minimal expense, the CPAN infrastructure and community provide peer review, feedback, and continuous testing in a wide variety of environments and architectures. A well-defined set of conventions also facilitates dependency management, packaging, and distribution of code. Test-driven development also provides a way to ensure stability despite a continuously changing base of dependencies.

  15. Modeling the Stress Strain Behavior of Woven Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Morscher, Gregory N.

    2006-01-01

    Woven SiC fiber reinforced SiC matrix composites represent one of the most mature composite systems to date. Future components fabricated out of these woven ceramic matrix composites are expected to vary in shape, curvature, architecture, and thickness. The design of future components using woven ceramic matrix composites necessitates a modeling approach that can account for these variations which are physically controlled by local constituent contents and architecture. Research over the years supported primarily by NASA Glenn Research Center has led to the development of simple mechanistic-based models that can describe the entire stress-strain curve for composite systems fabricated with chemical vapor infiltrated matrices and melt-infiltrated matrices for a wide range of constituent content and architecture. Several examples will be presented that demonstrate the approach to modeling which incorporates a thorough understanding of the stress-dependent matrix cracking properties of the composite system.

  16. OFMspert: An architecture for an operator's associate that evolves to an intelligent tutor

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1991-01-01

    With the emergence of new technology for both human-computer interaction and knowledge-based systems, a range of opportunities exist which enhance the effectiveness and efficiency of controllers of high-risk engineering systems. The design of an architecture for an operator's associate is described. This associate is a stand-alone model-based system designed to interact with operators of complex dynamic systems, such as airplanes, manned space systems, and satellite ground control systems in ways comparable to that of a human assistant. The operator function model expert system (OFMspert) architecture and the design and empirical validation of OFMspert's understanding component are described. The design and validation of OFMspert's interactive and control components are also described. A description of current work in which OFMspert provides the foundation in the development of an intelligent tutor that evolves to an assistant, as operator expertise evolves from novice to expert, is provided.

  17. End-to-end network models encompassing terrestrial, wireless, and satellite components

    NASA Astrophysics Data System (ADS)

    Boyarko, Chandler L.; Britton, John S.; Flores, Phil E.; Lambert, Charles B.; Pendzick, John M.; Ryan, Christopher M.; Shankman, Gordon L.; Williams, Ramon P.

    2004-08-01

    Development of network models that reflect true end-to-end architectures, such as the Transformational Communications Architecture, needs to encompass terrestrial, wireless, and satellite components to truly represent all of the complexities of a worldwide communications network. The use of best-in-class tools, including OPNET, Satellite Tool Kit (STK), and Popkin System Architect, and their well-known XML-friendly definitions, such as OPNET Modeler's Data Type Description (DTD), or socket-based data transfer modules, such as STK/Connect, enables the sharing of data between applications for more rapid development of end-to-end system architectures and a more complete system design. By sharing the results of and integrating best-in-class tools we are able to (1) promote sharing of data, (2) enhance the fidelity of our results, and (3) allow network and application performance to be viewed in the context of the entire enterprise and its processes.

  18. Long-range strategy for remote sensing: an integrated supersystem

    NASA Astrophysics Data System (ADS)

    Glackin, David L.; Dodd, Joseph K.

    1995-12-01

    Present large space-based remote sensing systems, and those planned for the next two decades, remain dichotomous and custom-built. An integrated architecture might reduce total cost without limiting system performance. An example of such an architecture, developed at The Aerospace Corporation, explores the feasibility of reducing overall space systems costs by forming a 'super-system' which will provide environmental, earth resources and theater surveillance information to a variety of users. The concept involves integration of programs, sharing of common spacecraft bus designs and launch vehicles, use of modular components and subsystems, integration of command and control and data capture functions, and establishment of an integrated program office. Smart functional modules that are easily tested and replaced are used wherever possible in the space segment. Data is disseminated to systems such as NASA's EOSDIS, and data processing is performed at established centers of expertise. This concept is advanced for potential application as a follow-on to currently budgeted and planned space-based remote sensing systems. We hope that this work will serve to engender discussion that may be of assistance in leading to multinational remote sensing systems with greater cost effectiveness at no loss of utility to the end user.

  19. 2015 ESGF Progress Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, D. N.

    2015-06-22

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration whose purpose is to develop the software infrastructure needed to facilitate and empower the study of climate change on a global scale. ESGF’s architecture employs a system of geographically distributed peer nodes that are independently administered yet united by common federation protocols and application programming interfaces. The cornerstones of its interoperability are the peer-to-peer messaging, which is continuously exchanged among all nodes in the federation; a shared architecture for search and discovery; and a security infrastructure based on industry standards. ESGF integrates popular application engines available from the open-source community with custom components (for data publishing, searching, user interface, security, and messaging) that were developed collaboratively by the team. The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP)—output used by the Intergovernmental Panel on Climate Change assessment reports. ESGF is a successful example of integration of disparate open-source technologies into a cohesive functional system that serves the needs of the global climate science community.

  20. Membrane Remodeling by the Double-Barrel Scaffolding Protein of Poxvirus

    PubMed Central

    Hijnen, Marcel; Schult, Philipp; Pettikiriarachchi, Anne; Mitra, Alok K.; Coulibaly, Fasséli

    2011-01-01

    In contrast to most enveloped viruses, poxviruses produce infectious particles that do not acquire their internal lipid membrane by budding through cellular compartments. Instead, poxvirus immature particles are generated from atypical crescent-shaped precursors whose architecture and composition remain contentious. Here we describe the 2.6 Å crystal structure of vaccinia virus D13, a key structural component of the outer scaffold of viral crescents. D13 folds into two jellyrolls decorated by a head domain of novel fold. It assembles into trimers that are homologous to the double-barrel capsid proteins of adenovirus and lipid-containing icosahedral viruses. We show that, when tethered onto artificial membranes, D13 forms a honeycomb lattice and assembly products structurally similar to the viral crescents and immature particles. The architecture of the D13 honeycomb lattice and the lipid-remodeling abilities of D13 support a model of assembly that exhibits similarities with the giant mimivirus. Overall, these findings establish that the first committed step of poxvirus morphogenesis utilizes an ancestral lipid-remodeling strategy common to icosahedral DNA viruses infecting all kingdoms of life. Furthermore, D13 is the target of rifampicin and its structure will aid the development of poxvirus assembly inhibitors. PMID:21931553

  1. Multimaterial 4D Printing with Tailorable Shape Memory Polymers

    PubMed Central

    Ge, Qi; Sakhaei, Amir Hosein; Lee, Howon; Dunn, Conner K.; Fang, Nicholas X.; Dunn, Martin L.

    2016-01-01

    We present a new 4D printing approach that can create high-resolution (up to a few microns), multimaterial shape memory polymer (SMP) architectures. The approach is based on high-resolution projection microstereolithography (PμSL) and uses a family of photo-curable methacrylate-based copolymer networks. We designed the constituents and compositions to exhibit the desired thermomechanical behavior (including rubbery modulus, glass transition temperature, and failure strain, which exceeds 300% and is larger than that of any existing printable material) to enable controlled shape memory behavior. We used a high-resolution, high-contrast digital micro display to ensure high-resolution photo-curing of methacrylate-based SMPs, which require higher exposure energy than the more common acrylate-based polymers. An automated material exchange process enables the manufacture of 3D composite architectures from multiple photo-curable SMPs. In order to understand the behavior of the 3D composite microarchitectures, we carry out high-fidelity computational simulations of their complex nonlinear, time-dependent behavior and study important design considerations including local deformation, shape fixity, and free recovery rate. Simulations are in good agreement with experiments for a series of single and multimaterial components and can be used to facilitate the design of SMP 3D structures. PMID:27499417

  2. Dissociation of verbal working memory system components using a delayed serial recall task.

    PubMed

    Chein, J M; Fiez, J A

    2001-11-01

    Functional magnetic resonance imaging (fMRI) was used to investigate the neural substrates of component processes in verbal working memory. Based on behavioral research using manipulations of verbal stimulus type to dissociate storage, rehearsal, and executive components of verbal working memory, we designed a delayed serial recall task requiring subjects to encode, maintain, and overtly recall sets of verbal items for which phonological similarity, articulatory length, and lexical status were manipulated. By using a task with temporally extended trials, we were able to exploit the temporal resolution afforded by fMRI to partially isolate neural contributions to encoding, maintenance, and retrieval stages of task performance. Several regions commonly associated with maintenance, including supplementary motor, premotor, and inferior frontal areas, were found to be active across all three trial stages. Additionally, we found that left inferior frontal and supplementary motor regions showed patterns of stimulus and temporal sensitivity implicating them in distinct aspects of articulatory rehearsal, while no regions showed a pattern of sensitivity consistent with a role in phonological storage. Regional modulation by task difficulty was further investigated as a measure of executive processing. We interpret our findings as they relate to notions about the cognitive architecture underlying verbal working memory performance.

  3. Using Bioinformatics Approach to Explore the Pharmacological Mechanisms of Multiple Ingredients in Shuang-Huang-Lian

    PubMed Central

    Zhang, Bai-xia; Li, Jian; Gu, Hao; Li, Qiang; Zhang, Qi; Zhang, Tian-jiao; Wang, Yun; Cai, Cheng-ke

    2015-01-01

    Owing to its proven clinical efficacy, Shuang-Huang-Lian (SHL) has been developed into a variety of dosage forms. However, in-depth research on the targets and pharmacological mechanisms of SHL preparations is scarce. In the present study, bioinformatics approaches were adopted to integrate relevant data and biological information. As a result, a PPI network was built and the common topological parameters were characterized. The results suggested that the PPI network of SHL exhibited a scale-free property and modular architecture. The drug target network of SHL was structured with 21 functional modules. According to certain modules and the distribution of pharmacological effects, an antitumor effect and potential drug targets were predicted. A biological network containing 26 subnetworks was constructed to elucidate the antipneumonia mechanism of SHL. We also extracted a subnetwork to explicitly display the pathway by which one effective component acts on the pneumonia-related targets. In conclusion, a bioinformatics approach was established for exploring the drug targets, pharmacological activity distribution, and effective components of SHL, and its antipneumonia mechanism. Above all, we identified the effective components and disclosed the mechanism of SHL from a systems perspective. PMID:26495421

  4. Generic Software Architecture for Prognostics (GSAP) User Guide

    NASA Technical Reports Server (NTRS)

    Teubert, Christopher Allen; Daigle, Matthew John; Watkins, Jason; Sankararaman, Shankar; Goebel, Kai

    2016-01-01

    The Generic Software Architecture for Prognostics (GSAP) is a framework for applying prognostics. It makes applying prognostics easier by implementing many of the common elements across prognostic applications. The standard interface enables reuse of prognostic algorithms and models across systems using the GSAP framework.
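
    The actual GSAP interfaces are not reproduced here; the following toy Python sketch only illustrates the idea of a standard prognoser interface through which models and algorithms can be reused across systems. All class names, methods, and model parameters are hypothetical.

        # Illustrative only: a toy "standard prognoser interface" showing how a common
        # framework can reuse models and algorithms. This is NOT the actual GSAP API;
        # class and method names here are hypothetical.
        from abc import ABC, abstractmethod

        class Model(ABC):
            @abstractmethod
            def step(self, state, load):              # advance state one time step
                ...
            @abstractmethod
            def failed(self, state):                  # end-of-life criterion
                ...

        class BatteryModel(Model):
            def step(self, charge, load):
                return charge - 0.01 * load           # crude discharge model (assumption)
            def failed(self, charge):
                return charge <= 0.2

        def predict_eol(model, state, load_profile):
            """Generic end-of-life prediction loop reused across any Model."""
            for t, load in enumerate(load_profile):
                state = model.step(state, load)
                if model.failed(state):
                    return t
            return None

        print(predict_eol(BatteryModel(), state=1.0, load_profile=[1.0] * 200))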

  5. A Common Foundation of Information and Analytical Capability for AFSPC Decision Making

    DTIC Science & Technology

    2005-06-23

    System Strategic Master Plan MAPs/MSP CRRAAF TASK FORCE CONOPS MUA Task Weights Engagement Analysis ASIIS Optimization ACEIT COST Analysis...Engangement Architecture Analysis Architecture MUA AFSPC POM S&T Planning Military Utility Analysis ACEIT COST Analysis Joint Capab Integ Develop System

  6. Artificial intelligent e-learning architecture

    NASA Astrophysics Data System (ADS)

    Alharbi, Mafawez; Jemmali, Mahdi

    2017-03-01

    Many institutions and universities have been compelled to adopt e-learning because of its ability to provide additional and flexible solutions for students and researchers. Over the last decade, e-learning has brought about far-reaching changes in the delivery of education, allowing learners to access multimedia course material at any time and from anywhere to suit their specific needs. In e-learning, instructors and learners are in different places; they do not meet in a classroom environment but within a virtual one. Many studies have defined e-learning according to their objectives, yet only a small number of e-learning architectures have been proposed in the literature, and those that exist lack an embedded intelligent system. This research argues that unexplored potential remains, as there is scope for e-learning to become an intelligent system. It therefore proposes an e-learning architecture that incorporates an intelligent system, with the intelligence components built into the architecture.

  7. Updates to the NASA Space Telecommunications Radio System (STRS) Architecture

    NASA Technical Reports Server (NTRS)

    Kacpura, Thomas J.; Handler, Louis M.; Briones, Janette; Hall, Charles S.

    2008-01-01

    This paper describes an update of the Space Telecommunications Radio System (STRS) open architecture for NASA space based radios. The STRS architecture has been defined as a framework for the design, development, operation and upgrade of space based software defined radios, where processing resources are constrained. The architecture has been updated based upon reviews by NASA missions, radio providers, and component vendors. The STRS Standard prescribes the architectural relationship between the software elements used in software execution and defines the Application Programmer Interface (API) between the operating environment and the waveform application. Modeling tools have been adopted to present the architecture. The paper will present a description of the updated API, configuration files, and constraints. Minimum compliance is discussed for early implementations. The paper then closes with a summary of the changes made and discussion of the relevant alignment with the Object Management Group (OMG) SWRadio specification, and enhancements to the specialized signal processing abstraction.

  8. Power optimization of digital baseband WCDMA receiver components on algorithmic and architectural level

    NASA Astrophysics Data System (ADS)

    Schämann, M.; Bücker, M.; Hessel, S.; Langmann, U.

    2008-05-01

    High data rates combined with high mobility represent a challenge for the design of cellular devices. Advanced algorithms are required, which result in higher complexity, more chip area, and increased power consumption. However, this conflicts with the limited power supply of mobile devices. This presentation discusses an HSDPA receiver that has been optimized for power consumption at the algorithmic and architectural levels. On the algorithmic level, the Rake combiner, Prefilter-Rake equalizer, and MMSE equalizer are compared with regard to their BER performance. Both equalizer approaches provide a significant performance increase for high data rates compared to the Rake combiner, which is commonly used for lower data rates. For both equalizer approaches several adaptive algorithms are available which differ in complexity and convergence properties. To identify the algorithm that achieves the required performance with the lowest power consumption, the algorithms were investigated using SystemC models with regard to their performance and arithmetic complexity. Additionally, for the Prefilter-Rake equalizer the power estimates of a modified Griffiths (LMS) and a Levinson (RLS) algorithm were compared with the tool ORINOCO supplied by ChipVision. The accuracy of this tool was verified with a scalable architecture of the UMTS channel estimation described both in SystemC and VHDL, targeting a 130 nm CMOS standard cell library. An architecture combining all three approaches with an adaptive control unit is presented. The control unit monitors the current condition of the propagation channel and adjusts receiver parameters such as filter size and oversampling ratio to minimize power consumption while maintaining the required performance. The optimization strategies result in a reduction of the number of arithmetic operations of up to 70% for single components, which leads to an estimated power reduction of up to 40% while the BER performance is not affected. This work uses SystemC and ORINOCO for a first estimation of power consumption at an early step of the design flow. Thereby algorithms can be compared in different operating modes, including the effects of control units. Here, an algorithm with higher peak complexity and power consumption but more flexibility showed lower consumption in normal operating modes than the algorithm optimized for peak performance.
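
    As a generic illustration of the lower-complexity class of adaptive algorithms compared above (LMS-type updates, as opposed to RLS-type), the sketch below implements a plain LMS equalizer on a toy channel. It is not the modified Griffiths or Levinson algorithm from the paper, and the tap count, step size, and channel model are arbitrary assumptions.

        # Generic LMS adaptive filter sketch (illustration of the low-complexity class
        # of algorithms discussed above; not the paper's modified Griffiths/Levinson code).
        import numpy as np

        rng = np.random.default_rng(1)

        def lms_equalize(received, desired, n_taps=8, mu=0.01):
            w = np.zeros(n_taps)                       # equalizer tap weights
            out, err = np.zeros_like(desired), np.zeros_like(desired)
            for n in range(n_taps, len(received)):
                x = received[n - n_taps:n][::-1]       # most recent samples first
                out[n] = w @ x
                err[n] = desired[n] - out[n]
                w += mu * err[n] * x                   # LMS weight update
            return w, err

        # toy channel: attenuated symbol plus a one-sample echo and noise
        symbols = rng.choice([-1.0, 1.0], size=2000)
        received = 0.9 * symbols + 0.3 * np.roll(symbols, 1) + 0.05 * rng.standard_normal(2000)
        w, err = lms_equalize(received, symbols)
        print("final mean-squared error:", np.mean(err[-200:] ** 2))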

  9. The software architecture to control the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Oya, I.; Füßling, M.; Antonino, P. O.; Conforti, V.; Hagge, L.; Melkumyan, D.; Morgenstern, A.; Tosti, G.; Schwanke, U.; Schwarz, J.; Wegner, P.; Colomé, J.; Lyard, E.

    2016-07-01

    The Cherenkov Telescope Array (CTA) project is an initiative to build two large arrays of Cherenkov gamma-ray telescopes. CTA will be deployed as two installations, one in the northern and the other in the southern hemisphere, containing dozens of telescopes of different sizes. CTA is a big step forward in the field of ground-based gamma-ray astronomy, not only because of the expected scientific return, but also due to the order-of-magnitude larger scale of the instrument to be controlled. The performance requirements associated with such a large and distributed astronomical installation require a thoughtful analysis to determine the best software solutions. The array control and data acquisition (ACTL) work-package within the CTA initiative will deliver the software to control and acquire the data from the CTA instrumentation. In this contribution we present the current status of the formal ACTL system decomposition into software building blocks and the relationships among them. The system is modelled via the Systems Modelling Language (SysML) formalism. To cope with the complexity of the system, this architecture model is sub-divided into different perspectives. The relationships with the stakeholders and external systems are used to create the first perspective, the context of the ACTL software system. Use cases are employed to describe the interaction of those external elements with the ACTL system and are traced to a hierarchy of functionalities (abstract system functions) describing the internal structure of the ACTL system. These functions are then traced to fully specified logical elements (software components), whose deployment as technical elements is also described. This modelling approach allows us to decompose the ACTL software into elements to be created and the flow of information within the system, providing us with a clear way to identify sub-system interdependencies. This architectural approach allows us to build the ACTL system model and trace requirements to deliverables (source code, documentation, etc.), and permits the implementation of a flexible use-case-driven software development approach thanks to the traceability from use cases to the logical software elements. The ALMA Common Software (ACS) container/component framework, used for the control of the Atacama Large Millimeter/submillimeter Array (ALMA), is the basis for the ACTL software and as such is considered an integral part of the software architecture.

  10. Secure ASIC Architecture for Optimized Utilization of a Trusted Supply Chain for Common Architecture A and D Applications

    DTIC Science & Technology

    2017-03-01

    overseas. Concurrently, time to market and complex system requirements are increasingly outside the budget range of standalone DoD projects. This paper...expense and delay to market concerns, a major FPGA vendor has offered an FPGA specifically targeting the A&D market . Architecturally, this offering...time-to- market Such services could individually be engaged, each spanning commercial to Trusted handling levels, as appropriate for balancing

  11. Outline of a novel architecture for cortical computation.

    PubMed

    Majumdar, Kaushik

    2008-03-01

    In this paper a novel architecture for cortical computation has been proposed. This architecture is composed of computing paths consisting of neurons and synapses. These paths have been decomposed into lateral, longitudinal and vertical components. Cortical computation has then been decomposed into lateral computation (LaC), longitudinal computation (LoC) and vertical computation (VeC). It has been shown that various loop structures in the cortical circuit play important roles in cortical computation as well as in memory storage and retrieval, keeping in conformity with the molecular basis of short and long term memory. A new learning scheme for the brain has also been proposed and how it is implemented within the proposed architecture has been explained. A few mathematical results about the architecture have been proposed, some of which are without proof.

  12. Formalism Challenges of the Cougaar Model Driven Architecture

    NASA Technical Reports Server (NTRS)

    Bohner, Shawn A.; George, Boby; Gracanin, Denis; Hinchey, Michael G.

    2004-01-01

    The Cognitive Agent Architecture (Cougaar) is one of the most sophisticated distributed agent architectures developed today. As part of its research and evolution, Cougaar is being studied for application to large, logistics-based applications for the Department of Defense (DoD). Anticipating future complex applications of Cougaar, we are investigating the Model Driven Architecture (MDA) approach to understand how effective it would be for increasing productivity in Cougaar-based development efforts. Recognizing the sophistication of the Cougaar development environment and the limitations of transformation technologies for agents, we have systematically developed an approach that combines component assembly in the large and transformation in the small. This paper describes some of the key elements that went into the Cougaar Model Driven Architecture approach and the characteristics that drove the approach.

  13. Flexible weapons architecture design

    NASA Astrophysics Data System (ADS)

    Pyant, William C., III

    Present-day air-delivered weapons are of a closed architecture, with little to no ability to tailor the weapon to the individual engagement. Closed architectures require weaponeers to make the target fit the weapon instead of fitting the individual weapon to a target. The flexible weapons concept aims to modularize weapons design using an open-architecture shell into which different modules are inserted to achieve the desired target fractional damage while reducing cost and civilian casualties. This thesis shows that the architecture design factors of damage mechanism, fusing, weapons weight, guidance, and propulsion are significant in enhancing weapon performance objectives and would benefit from modularization. Additionally, this thesis constructs an algorithm that can be used to design a weapon set for a particular target class based on these modular components.

  14. FPGA Implementation of Generalized Hebbian Algorithm for Texture Classification

    PubMed Central

    Lin, Shiow-Jyu; Hwang, Wen-Jyi; Lee, Wei-Hao

    2012-01-01

    This paper presents a novel hardware architecture for principal component analysis. The architecture is based on the Generalized Hebbian Algorithm (GHA) because of its simplicity and effectiveness. The architecture is separated into three portions: the weight vector updating unit, the principal computation unit and the memory unit. In the weight vector updating unit, the computation of different synaptic weight vectors shares the same circuit for reducing the area costs. To show the effectiveness of the circuit, a texture classification system based on the proposed architecture is physically implemented by Field Programmable Gate Array (FPGA). It is embedded in a System-On-Programmable-Chip (SOPC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient design for attaining both high speed performance and low area costs. PMID:22778640
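
    For readers unfamiliar with the algorithm that the FPGA architecture implements, the following NumPy sketch shows the Generalized Hebbian (Sanger) weight update in software. It is illustrative only; the learning rate, data, and component count are arbitrary, and the hardware partitioning described in the paper is not modeled.

        # Software sketch of the Generalized Hebbian (Sanger) update that the FPGA
        # architecture above implements in hardware; sizes and learning rate are arbitrary.
        import numpy as np

        rng = np.random.default_rng(2)

        def gha_fit(X, n_components=2, lr=0.01, epochs=50):
            n_features = X.shape[1]
            W = rng.standard_normal((n_components, n_features)) * 0.1
            for _ in range(epochs):
                for x in X:
                    y = W @ x
                    # Sanger's rule: Hebbian term minus Gram-Schmidt-like correction
                    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
            return W

        # toy data with one dominant direction
        X = rng.standard_normal((500, 5)) @ np.diag([3.0, 1.0, 0.5, 0.2, 0.1])
        X -= X.mean(axis=0)
        W = gha_fit(X)
        print(np.round(W, 2))   # rows approximate the leading principal directions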

  15. A discrete decentralized variable structure robotic controller

    NASA Technical Reports Server (NTRS)

    Tumeh, Zuheir S.

    1989-01-01

    A decentralized trajectory controller for robotic manipulators is designed and tested using a multiprocessor architecture and a PUMA 560 robot arm. The controller is made up of a nominal model-based component and a correction component based on a variable structure suction control approach. The second control component is designed using bounds on the difference between the used and actual values of the model parameters. Since the continuous manipulator system is digitally controlled along a trajectory, a discretized equivalent model of the manipulator is used to derive the controller. The motivation for decentralized control is that the derived algorithms can be executed in parallel using a distributed, relatively inexpensive, architecture where each joint is assigned a microprocessor. Nonlinear interaction and coupling between joints is treated as a disturbance torque that is estimated and compensated for.

  16. Physics-Based Design Tools for Lightweight Ceramic Composite Turbine Components with Durable Microstructures

    NASA Technical Reports Server (NTRS)

    DiCarlo, James A.

    2011-01-01

    Under the Supersonics Project of the NASA Fundamental Aeronautics Program, modeling and experimental efforts are underway to develop generic physics-based tools to better implement lightweight ceramic matrix composites into supersonic engine components and to assure sufficient durability for these components in the engine environment. These activities, which have a crosscutting aspect for other areas of the Fundamental Aero program, are focusing primarily on improving the multi-directional design strength and rupture strength of high-performance SiC/SiC composites by advanced fiber architecture design. This presentation discusses progress in tool development with particular focus on the use of 2.5D-woven architectures and state-of-the-art constituents for a generic un-cooled SiC/SiC low-pressure turbine blade.

  17. Technical architecture of ONC-approved plans for statewide health information exchange.

    PubMed

    Barrows, Randolph C; Ezzard, John

    2011-01-01

    ONC-approved state plans for HIE were reviewed for descriptions and depictions of statewide HIE technical architecture. Review was complicated by non-standard organizational elements and technical terminology across state plans. Findings were mapped to industry standard, referenced, and defined HIE architecture descriptions and characteristics. Results are preliminary due to the initial subset of ONC-approved plans available, the rapid pace of new ONC-plan approvals, and continuing advancements in standards and technology of HIE, etc. Review of 28 state plans shows virtually all include a direct messaging component, but for participating entities at state-specific levels of granularity (RHIO, enterprise, organization/provider). About ½ of reviewed plans describe a federated architecture, and ¼ of plans utilize a single-vendor "hybrid-federated" architecture. About 1/3 of states plan to leverage new federal and open exchange technologies (DIRECT, CONNECT, etc.). Only one plan describes a centralized architecture for statewide HIE, but others combine central and federated architectural approaches.

  18. Agricultural Urbanism in the Context of Landscape Ecological Architecture

    NASA Astrophysics Data System (ADS)

    Maltseva, I. N.; Kaganovich, N. N.; Mindiyrova, T. N.

    2017-11-01

    The article analyzes some of the fundamental aspects of the sustainable development of cities, connected in many respects with the concept of ecological architecture. One of the main concepts of sustainability is considered in detail: the city as an eco-sustainable and balanced system, with architectural objects as a full-fledged part of this system, which will most likely be shaped by one of the directions of this development - landscape architecture as a tool for integrating nature into the urban environment. At the same time, the variety of its functional forms and architectural methods in the organization of internal and external space is outlined, as well as its interrelation with energy-saving architecture, defining these as the two most important components of eco-sustainable development. The development forms of landscape architecture are considered in a review of analogs; as an example of an agricultural urbanism object, a thesis on the topic “Vertical Farm Agroindustrial Complex” is presented.

  19. Design of Power System Architectures for Small Spacecraft Systems

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Subramonian, Rama; Dias, Lakshman G.

    1996-01-01

    The objective of this research is to perform a trade study on several candidate power system architectures for small spacecraft to be used in NASA's New Millennium program. Three initial candidate architectures have been proposed by NASA and two other candidate architectures have been proposed by Howard University. Howard University is currently conducting the analysis, synthesis, and simulation needed to perform the trade studies and arrive at the optimal power system architecture. Statistical, sensitivity, and tolerance studies have been performed on the systems. It is concluded from the present studies that certain components, such as the series regulators, buck-boost converters, and power converters, can be minimized while retaining the desired functionality of the overall architecture. This, in conjunction with battery scalability studies and system efficiency studies, has enabled us to develop more economical architectures. Future studies will include artificial neural networks and fuzzy logic to analyze the performance of the systems. Fault simulation and fault diagnosis studies using EMTP and artificial neural networks will also be conducted.

  20. Technical Architecture of ONC-Approved Plans For Statewide Health Information Exchange

    PubMed Central

    Barrows, Randolph C.; Ezzard, John

    2011-01-01

    ONC-approved state plans for HIE were reviewed for descriptions and depictions of statewide HIE technical architecture. Review was complicated by non-standard organizational elements and technical terminology across state plans. Findings were mapped to industry standard, referenced, and defined HIE architecture descriptions and characteristics. Results are preliminary due to the initial subset of ONC-approved plans available, the rapid pace of new ONC-plan approvals, and continuing advancements in standards and technology of HIE, etc. Review of 28 state plans shows virtually all include a direct messaging component, but for participating entities at state-specific levels of granularity (RHIO, enterprise, organization/provider). About ½ of reviewed plans describe a federated architecture, and ¼ of plans utilize a single-vendor “hybrid-federated” architecture. About 1/3 of states plan to leverage new federal and open exchange technologies (DIRECT, CONNECT, etc.). Only one plan describes a centralized architecture for statewide HIE, but others combine central and federated architectural approaches. PMID:22195059

  1. Effect of genetic architecture on the prediction accuracy of quantitative traits in samples of unrelated individuals.

    PubMed

    Morgante, Fabio; Huang, Wen; Maltecca, Christian; Mackay, Trudy F C

    2018-06-01

    Predicting complex phenotypes from genomic data is a fundamental aim of animal and plant breeding, where we wish to predict the genetic merits of selection candidates, and of human genetics, where we wish to predict disease risk. While genomic prediction models work well with populations of related individuals and high linkage disequilibrium (LD) (e.g., livestock), comparable models perform poorly for populations of unrelated individuals and low LD (e.g., humans). We hypothesized that low prediction accuracies in the latter situation may occur when the genetic architecture of the trait departs from the infinitesimal and additive architecture assumed by most prediction models. We used simulated data for 10,000 lines based on sequence data from a population of unrelated, inbred Drosophila melanogaster lines to evaluate this hypothesis. We show that, even in very simplified scenarios meant as a stress test of the commonly used Genomic Best Linear Unbiased Predictor (G-BLUP) method, using all common variants yields low prediction accuracy regardless of the trait's genetic architecture. However, prediction accuracy increases when predictions are informed by the genetic architecture inferred from mapping the top variants affecting main effects and interactions in the training data, provided there is sufficient power for mapping. When the true genetic architecture is largely or partially due to epistatic interactions, the additive model may not perform well, while models that account explicitly for interactions generally increase prediction accuracy. Our results indicate that accounting for genetic architecture can improve prediction accuracy for quantitative traits.
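
    As a generic illustration of the kind of additive whole-genome prediction being stress-tested above, the sketch below fits a ridge regression on simulated markers (equivalent to G-BLUP under standard assumptions) and reports prediction accuracy on held-out individuals. It is not the authors' pipeline, and the marker counts, effect sizes, and shrinkage parameter are arbitrary assumptions.

        # Generic additive genomic prediction sketch (ridge regression on markers,
        # G-BLUP-equivalent under standard assumptions). Illustration only; the paper's
        # simulations are based on Drosophila sequence data, not this toy example.
        import numpy as np

        rng = np.random.default_rng(3)

        n_train, n_test, n_markers, n_causal = 400, 100, 1000, 20
        geno = rng.binomial(2, 0.3, size=(n_train + n_test, n_markers)).astype(float)
        geno -= geno.mean(axis=0)                                  # center marker codes

        beta = np.zeros(n_markers)
        beta[rng.choice(n_markers, n_causal, replace=False)] = rng.standard_normal(n_causal)
        g = geno @ beta                                            # additive genetic values
        y = g + rng.standard_normal(len(g)) * g.std()              # roughly 50% heritability

        X, Xtest = geno[:n_train], geno[n_train:]
        lam = 10.0                                                 # shrinkage parameter (arbitrary)
        beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_markers), X.T @ y[:n_train])
        pred = Xtest @ beta_hat
        print("prediction accuracy r:", np.corrcoef(pred, g[n_train:])[0, 1])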

  2. Do Intelligent Robots Need Emotion?

    PubMed

    Pessoa, Luiz

    2017-11-01

    What is the place of emotion in intelligent robots? Researchers have advocated the inclusion of some emotion-related components in the information-processing architecture of autonomous agents. It is argued here that emotion needs to be merged with all aspects of the architecture: cognitive-emotional integration should be a key design principle.

  3. Model-Based Engineering for Supply Chain Risk Management

    DTIC Science & Technology

    2015-09-30

    Privacy, 2009 [19] Julien Delange Wheel Brake System Example using AADL; Feiler, Peter; Hansson, Jörgen; de Niz, Dionisio; & Wrage, Lutz. System ...University Software Engineering Institute Abstract—Expanded use of commercial components has increased the complexity of system assurance...verification. Model- based engineering (MBE) offers a means to design, develop, analyze, and maintain a complex system architecture. Architecture Analysis

  4. PRISMA-MAR: An Architecture Model for Data Visualization in Augmented Reality Mobile Devices

    ERIC Educational Resources Information Center

    Gomes Costa, Mauro Alexandre Folha; Serique Meiguins, Bianchi; Carneiro, Nikolas S.; Gonçalves Meiguins, Aruanda Simões

    2013-01-01

    This paper proposes an extension to mobile augmented reality (MAR) environments--the addition of data charts to the more usual text, image and video components. To this purpose, we have designed a client-server architecture including the main necessary modules and services to provide an Information Visualization MAR experience. The server side…

  5. Extraction of user's navigation commands from upper body force interaction in walker assisted gait.

    PubMed

    Frizera Neto, Anselmo; Gallego, Juan A; Rocon, Eduardo; Pons, José L; Ceres, Ramón

    2010-08-05

    Advances in technology make it possible to incorporate sensors and actuators in rollators, building safer robots and extending the use of walkers to a more diverse population. This paper presents a new method for the extraction of navigation-related components from upper-body force interaction data in walker-assisted gait. A filtering architecture is designed to cancel: (i) the high-frequency noise caused by vibrations in the walker's structure due to irregularities in the terrain or the walker's wheels and (ii) the cadence-related force components caused by the user's trunk oscillations during gait. As a result, a third component related to the user's navigation commands is distinguished. For the cancellation of high-frequency noise, a Benedict-Bordner g-h filter was designed, presenting very low values for the kinematic tracking error ((2.035 ± 0.358) × 10^-2 kgf) and delay ((1.897 ± 0.3697) × 10^1 ms). A Fourier Linear Combiner filtering architecture was implemented for the adaptive attenuation of about 80% of the energy of the cadence-related components from the force data. This was done without compromising the information contained in the frequencies close to such notch filters. The presented methodology offers effective cancellation of the undesired components from force data, allowing the system to extract voluntary user navigation commands in real time. Based on this real-time identification of voluntary user commands, a classical approach to the control architecture of the robotic walker is being developed, in order to obtain stable and safe user-assisted locomotion.
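
    A minimal sketch of a g-h tracking filter using the Benedict-Bordner relation h = g^2 / (2 - g), the class of filter the authors use for high-frequency noise cancellation, is given below. The gains, sampling period, and test signal are illustrative assumptions, not the walker's actual tuning.

        # Minimal Benedict-Bordner g-h filter sketch for smoothing a force-like signal
        # (illustrative gains and data; not the walker's actual tuning).
        import numpy as np

        def benedict_bordner_filter(z, dt=0.01, g=0.3):
            h = g ** 2 / (2.0 - g)                   # Benedict-Bordner relation
            x, dx = z[0], 0.0                        # state: value and rate
            out = np.empty_like(z)
            for k, zk in enumerate(z):
                x_pred = x + dx * dt                 # predict
                r = zk - x_pred                      # residual
                x = x_pred + g * r                   # update value
                dx = dx + h * r / dt                 # update rate
                out[k] = x
            return out

        t = np.arange(0, 2, 0.01)
        signal = 5.0 * np.sin(2 * np.pi * 0.8 * t)   # slow "navigation" component
        noisy = signal + 0.8 * np.random.default_rng(4).standard_normal(t.size)
        smoothed = benedict_bordner_filter(noisy)
        print("residual RMS after filtering:", np.sqrt(np.mean((smoothed - signal) ** 2)))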

  6. DFT algorithms for bit-serial GaAs array processor architectures

    NASA Technical Reports Server (NTRS)

    Mcmillan, Gary B.

    1988-01-01

    Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.

  7. Citizen Observatories: A Standards Based Architecture

    NASA Astrophysics Data System (ADS)

    Simonis, Ingo

    2015-04-01

    A number of large-scale research projects are currently under way exploring the various components of citizen observatories, e.g. CITI-SENSE (http://www.citi-sense.eu), Citclops (http://citclops.eu), COBWEB (http://cobwebproject.eu), OMNISCIENTIS (http://www.omniscientis.eu), and WeSenseIt (http://www.wesenseit.eu). Common to all projects is the motivation to develop a platform enabling effective participation by citizens in environmental projects, while considering important aspects such as security, privacy, long-term storage and availability, accessibility of raw and processed data and its proper integration into catalogues and international exchange and collaboration systems such as GEOSS or INSPIRE. This paper describes the software architecture implemented for setting up crowdsourcing campaigns using standardized components, interfaces, security features, and distribution capabilities. It illustrates the Citizen Observatory Toolkit, a software suite that allows defining crowdsourcing campaigns, to invite registered and unregistered participants to participate in crowdsourcing campaigns, and to analyze, process, and visualize raw and quality enhanced crowd sourcing data and derived products. The Citizen Observatory Toolkit is not a single software product. Instead, it is a framework of components that are built using internationally adopted standards wherever possible (e.g. OGC standards from Sensor Web Enablement, GeoPackage, and Web Mapping and Processing Services, as well as security and metadata/cataloguing standards), defines profiles of those standards where necessary (e.g. SWE O&M profile, SensorML profile), and implements design decisions based on the motivation to maximize interoperability and reusability of all components. The toolkit contains tools to set up, manage and maintain crowdsourcing campaigns, allows building on-demand apps optimized for the specific sampling focus, supports offline and online sampling modes using modern cell phones with built-in sensing technologies, automates the upload of the raw data, and handles conflation services to match quality requirements and analysis challenges. The strict implementation of all components using internationally adopted standards ensures maximal interoperability and reusability of all components. The Citizen Observatory Toolkit is currently developed as part of the COBWEB research project. COBWEB is partially funded by the European Programme FP7/2007-2013 under grant agreement n° 308513; part of the topic ENV.2012.6.5-1 "Developing community based environmental monitoring and information systems using innovative and novel earth observation applications.

  8. Architecture for fiber-optic sensors and actuators in aircraft propulsion systems

    NASA Technical Reports Server (NTRS)

    Glomb, W. L., Jr.

    1990-01-01

    This paper describes a design for fiber-optic sensing and control in advanced aircraft Electronic Engine Control (EEC). The recommended architecture is an on-engine EEC which contains electro-optic interface circuits for fiber-optic sensors. Size and weight are reduced by multiplexing arrays of functionally similar sensors on pairs of optical fibers to common electro-optical interfaces. The architecture contains interfaces to seven sensor groups. Nine distinct fiber-optic sensor types were found to provide the sensing functions. Analysis revealed no strong discriminator (except the reliability of laser diodes and remote electronics) on which to base a selection of a preferred common interface type. A hardware test program is recommended to assess the relative maturity of the technologies and to determine real performance in the engine environment.

  9. Mapping SOA Artefacts onto an Enterprise Reference Architecture Framework

    NASA Astrophysics Data System (ADS)

    Noran, Ovidiu

    Currently, there is still no common agreement on the service-Oriented architecture (SOA) definition, or the types and meaning of the artefacts involved in the creation and maintenance of an SOA. Furthermore, the SOA image shift from an infrastructure solution to a business-wide change project may have promoted a perception that SOA is a parallel initiative, a competitor and perhaps a successor of enterprise architecture (EA). This chapter attempts to map several typical SOA artefacts onto an enterprise reference framework commonly used in EA. This is done in order to show that the EA framework can express and structure most of the SOA artefacts and therefore, a framework for SOA could in fact be derived from an EA framework with the ensuing SOA-EA integration benefits.

  10. Electrical Grounding Architecture for Unmanned Spacecraft

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This handbook is approved for use by NASA Headquarters and all NASA Centers and is intended to provide a common framework for consistent practices across NASA programs. This handbook was developed to describe electrical grounding design architecture options for unmanned spacecraft. This handbook is written for spacecraft system engineers, power engineers, and electromagnetic compatibility (EMC) engineers. Spacecraft grounding architecture is a system-level decision which must be established at the earliest point in spacecraft design. All other grounding design must be coordinated with and be consistent with the system-level architecture. This handbook assumes that there is no one single 'correct' design for spacecraft grounding architecture. There have been many successful satellite and spacecraft programs from NASA, using a variety of grounding architectures with different levels of complexity. However, some design principles learned over the years apply to all types of spacecraft development. This handbook summarizes those principles to help guide spacecraft grounding architecture design for NASA and others.

  11. Project Integration Architecture: Implementation of the CORBA-Served Application Infrastructure

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The Project Integration Architecture (PIA) has been demonstrated in a single-machine C++ implementation prototype. The architecture is in the process of being migrated to a Common Object Request Broker Architecture (CORBA) implementation. The migration of the Foundation Layer interfaces is fundamentally complete. The implementation of the Application Layer infrastructure for that migration is reported. The Application Layer provides for distributed user identification and authentication, per-user/per-instance access controls, server administration, the formation of mutually-trusting application servers, a server locality protocol, and an ability to search for interface implementations through such trusted server networks.

  12. Predictors of Future Performance in Architectural Design Education

    ERIC Educational Resources Information Center

    Roberts, A. S.

    2007-01-01

    The link between academic performance in secondary education and the subsequent performance of students studying architecture at university level is commonly questioned by educators and admissions tutors. This paper investigates the potential for using measures of cognitive style and spatial ability as predictors of future potential in…

  13. Airport Surface Network Architecture Definition

    NASA Technical Reports Server (NTRS)

    Nguyen, Thanh C.; Eddy, Wesley M.; Bretmersky, Steven C.; Lawas-Grodek, Fran; Ellis, Brenda L.

    2006-01-01

    Currently, airport surface communications are fragmented across multiple types of systems. The communication systems for airport operations at most airports today are based on dedicated and separate architectures that cannot support system-wide interoperability and information sharing. The requirements placed upon the Communications, Navigation, and Surveillance (CNS) systems in airports are rapidly growing, and integration is urgently needed if the future vision of the National Airspace System (NAS) and the Next Generation Air Transportation System (NGATS) 2025 concept are to be realized. To address this and other problems, such as airport surface congestion, the Space Based Technologies Project's Surface ICNS Network Architecture team at NASA Glenn Research Center has assessed airport surface communications requirements, analyzed existing and future surface applications, and defined a set of architecture functions that will help design a scalable, reliable, and flexible surface network architecture to meet the current and future needs of airport operations. This paper describes the systems approach, or methodology, used to assess airport surface communications requirements, analyze applications, and define the surface network architecture functions as the building blocks or components of the network. The systems approach used for defining these functions is relatively new to networking. It views the surface network, along with its environment (everything that the surface network interacts with or impacts), as a system. Associated with this system are sets of services that are offered by the network to the rest of the system. Therefore, the surface network is considered part of the larger system (such as the NAS), with interactions and dependencies between the surface network and its users, applications, and devices. The surface network architecture includes components such as addressing/routing, network management, network performance, and security.

  14. Research of Ancient Architectures in Jin-Fen Area Based on GIS&BIM Technology

    NASA Astrophysics Data System (ADS)

    Jia, Jing; Zheng, Qiuhong; Gao, Huiying; Sun, Hai

    2017-05-01

    Shanxi Province contains the largest share of well-preserved ancient buildings in China, about 18,418, of which 9,053 are of wood-frame construction. The value of applying BIM (Building Information Modeling) and GIS (Geographic Information System) is gradually being probed and demonstrated in the corresponding fields of ancient architecture: spatial distribution information management, routine maintenance, special conservation and restoration, and the evaluation and simulation of related disasters such as earthquakes. The research objects are the ancient architectures in the Jin-Fen area, which were first investigated by Sicheng LIANG and recorded in his work “Chinese ancient architectures survey report”. The research objects, i.e., the ancient architectures in the Jin-Fen area, include those in Sicheng LIANG's investigation, with further adjustments made through the authors' on-site investigation and literature searching and collection. In this research, the spatial distribution Geodatabase of the research objects is established using GIS. The BIM components library for ancient buildings is formed by combining on-site investigation data with classic precedent works, such as “Yingzao Fashi”, a treatise on architectural methods of the Song Dynasty, the “Yongle Encyclopedia”, and “Gongcheng Zuofa Zeli”, case collections of engineering practice by the Ministry of Construction of the Qing Dynasty. A building of Guangsheng temple in Hongtong county is selected as an example to elaborate the BIM model construction process based on the BIM components library for ancient buildings. Based on the foregoing results of spatial distribution data, feature attribute data, 3D graphic information, and the parametric building information model, an information management system for ancient architectures in the Jin-Fen area, utilizing GIS and BIM technology, can be constructed to support further research on seismic disaster analysis and seismic performance simulation.

  15. Security architecture for HL/7 message interchange.

    PubMed

    Chen, T S; Liao, B S; Lin, M G; Gough, T G

    2001-01-01

    The promotion of quality medical treatment is very important to healthcare providers as well as to patients. It requires that the medical resources of different hospitals be combined to ensure that medical information is shared and that resources are not wasted. A computer-based patient record is one of the best methods to accomplish the interchange of the patient's clinical data. In our system, the Health Level/Seven (HL/7) format is used for the interchange of the clinical data, as it has been supported by many healthcare providers and has become a 'standard'. The security of the interchange of clinical data is a serious issue for people using the Internet for data communication. Several well-developed international security algorithms, models, and security policies are adopted in the design of a security handler for an HL/7 architecture. The goal of our system is to combine our security system with the end-to-end communication systems constructed from the HL/7 format to establish a safe delivery channel. A suitable security interchange environment is implemented to address some shortcomings in clinical data interchange. HL/7 is located at the application layer of the ISO/OSI reference model. The medical message components, sub-components, and related types of message event are the primary concerns of the HL/7 protocols. The patient management system, the doctor's system for recording advice, examinations, and diagnoses, as well as any financial management system are all covered by the HL/7 protocols. Healthcare providers and hospitals in Taiwan are very interested in developing the HL/7 protocols as a common standard for clinical data interchange.
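    As a rough illustration of the kind of message handling such a security handler would wrap, the sketch below splits a pipe-delimited HL/7 v2-style message into segments and fields before it would be handed to an encryption layer; the sample message content and function name are assumptions, not the authors' implementation.

```python
def parse_hl7_message(raw):
    """Split a pipe-delimited HL/7 v2-style message into segments keyed by segment ID.

    Segments are separated by carriage returns and fields by '|' (the default
    encoding characters); repeated segment IDs are collected in a list.
    """
    segments = {}
    for line in filter(None, raw.replace("\n", "\r").split("\r")):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments


# Hypothetical admission (ADT) message; the patient details are placeholders.
sample = (
    "MSH|^~\\&|HIS|HOSPITAL_A|LIS|HOSPITAL_B|200101011200||ADT^A01|0001|P|2.3\r"
    "PID|1||123456||DOE^JOHN||19700101|M\r"
)

parsed = parse_hl7_message(sample)
print(parsed["MSH"][0][8])   # message type field: ADT^A01
print(parsed["PID"][0][5])   # patient name field: DOE^JOHN
```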

  16. Internet-enabled collaborative agent-based supply chains

    NASA Astrophysics Data System (ADS)

    Shen, Weiming; Kremer, Rob; Norrie, Douglas H.

    2000-12-01

    This paper presents some results of our recent research work related to the development of a new Collaborative Agent System Architecture (CASA) and an Infrastructure for Collaborative Agent Systems (ICAS). Initially proposed as a general architecture for Internet-based collaborative agent systems (particularly complex industrial collaborative agent systems), the proposed architecture is well suited to managing the Internet-enabled complex supply chain of a large manufacturing enterprise. The general collaborative agent system architecture, with its basic communication and cooperation services, domain-independent components, prototypes, and mechanisms, is described. Benefits of implementing Internet-enabled supply chains with the proposed infrastructure are discussed. A case study on Internet-enabled supply chain management is presented.
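    To suggest what a domain-independent communication service between supply-chain agents might look like, the toy sketch below routes a request message from a manufacturer agent to a supplier agent through a shared facilitator; the class and method names are invented for illustration and do not reflect the actual CASA/ICAS implementation.

```python
from dataclasses import dataclass


@dataclass
class AgentMessage:
    """A minimal performative-style message between agents (illustrative fields)."""
    sender: str
    receiver: str
    performative: str   # e.g. "request", "inform"
    content: dict


class Facilitator:
    """Toy message router standing in for a shared communication service."""

    def __init__(self):
        self._handlers = {}

    def register(self, agent_name, handler):
        """Register a callable that receives messages addressed to agent_name."""
        self._handlers[agent_name] = handler

    def send(self, message):
        """Deliver a message to its receiver's registered handler."""
        self._handlers[message.receiver](message)


facilitator = Facilitator()
facilitator.register("supplier",
                     lambda m: print(f"supplier received {m.performative}: {m.content}"))

# A manufacturer agent requests a quotation from a supplier agent (placeholder content).
facilitator.send(AgentMessage("manufacturer", "supplier", "request",
                              {"part": "bearing-204", "quantity": 500}))
```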

  17. Rethinking the architectural design concept in the digital culture (in architecture's practice perspective)

    NASA Astrophysics Data System (ADS)

    Prawata, Albertus Galih

    2017-11-01

    The architectural design stages in architectural practices or in the architectural design studio consist of many aspects. One of them is the early phase of the design process, where the architects or designers try to interpret the project brief into the design concept. This paper reports on the use of digital tools in the early design process at an architectural practice in Jakarta. It focuses principally on the use of BIM and digital modeling to generate information and transform it into conceptual forms, which is not yet common in Indonesian architectural practice. Traditionally, the project brief is transformed into conceptual forms by using sketches, drawings, and physical models. The new method using digital tools shows that it is possible to do the same thing during the initial stage of the design process to create early architectural design forms. Architects' traditional tools and methods are beginning to be replaced effectively by digital tools, which would open up greater opportunities for innovation.

  18. Efficient Orchestration of Data Centers Via Comprehensive and Application Aware Trade Off Exploration

    DTIC Science & Technology

    2016-12-01

    ... proposes to save power by concentrating traffic over a small subset of links ... data center architecture [12], as depicted in Figure 1.1. The fat-tree architecture is a physical network topology commonly used in data networks, representing a hierarchical multi-rooted tree consisting of four levels ... (milliseconds) is an order of magnitude faster than the GASO variants (tens of seconds). 3.4.3 LAW for Architectures of Different Dimensions: In this section ...
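    Since the excerpt refers to the fat-tree data center topology, the sketch below derives the component counts of the standard k-ary fat tree built from k-port switches; this textbook construction is an added illustration and is not taken from the cited report.

```python
def fat_tree_counts(k):
    """Component counts for the standard k-ary fat-tree data center topology.

    With k-port switches (k even): k pods, each containing k/2 edge and k/2
    aggregation switches, (k/2)^2 core switches, and k^3/4 supported hosts.
    """
    if k % 2:
        raise ValueError("k must be even")
    half = k // 2
    return {
        "pods": k,
        "edge_switches": k * half,
        "aggregation_switches": k * half,
        "core_switches": half * half,
        "hosts": k ** 3 // 4,
    }


# Example: a 4-ary fat tree has 4 pods, 8 edge switches, 4 core switches, and 16 hosts.
print(fat_tree_counts(4))
```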

  19. An Evolutionarily Structured Universe of Protein Architecture

    PubMed Central

    Caetano-Anollés, Gustavo; Caetano-Anollés, Derek

    2003-01-01

    Protein structural diversity encompasses a finite set of architectural designs. Embedded in these topologies are evolutionary histories that we here uncover using cladistic principles and measurements of protein-fold usage and sharing. The reconstructed phylogenies are inherently rooted and depict histories of protein and proteome diversification. Proteome phylogenies showed two monophyletic sister-groups delimiting Bacteria and Archaea, and a topology rooted in Eucarya. This suggests three dramatic evolutionary events and a common ancestor with a eukaryotic-like, gene-rich, and relatively modern organization. Conversely, a general phylogeny of protein architectures showed that structural classes of globular proteins appeared early in evolution and in defined order, the α/β class being the first. Although most ancestral folds shared a common architecture of barrels or interleaved β-sheets and α-helices, many were clearly derived, such as polyhedral folds in the all-α class and β-sandwiches, β-propellers, and β-prisms in all-β proteins. We also describe transformation pathways of architectures that are prevalently used in nature. For example, β-barrels with increased curl and stagger were favored evolutionary outcomes in the all-β class. Interestingly, we found cases where structural change followed the α-to-β tendency uncovered in the tree of architectures. Lastly, we traced the total number of enzymatic functions associated with folds in the trees and show that there is a general link between structure and enzymatic function. PMID:12840035

  20. Health care professional workstation: software system construction using DSSA scenario-based engineering process.

    PubMed

    Hufnagel, S; Harbison, K; Silva, J; Mettala, E

    1994-01-01

    This paper describes a new method for the evolutionary determination of user requirements and system specifications called the scenario-based engineering process (SEP). Health care professional workstations are critical components of large-scale health care system architectures. We suggest that domain-specific software architectures (DSSAs) be used to specify standard interfaces and protocols for reusable software components throughout those architectures, including workstations. We encourage the use of engineering principles and abstraction mechanisms. Engineering principles are flexible guidelines, adaptable to particular situations. Abstraction mechanisms are simplifications for the management of complexity. We recommend object-oriented design principles, graphical structural specifications, and formal behavioral specifications of components. We give an ambulatory care scenario and associated models to demonstrate SEP. The scenario uses health care terminology and presents both patients' and health care providers' views of the system. Our goal is a threefold benefit: (i) scenario-view abstractions provide consistent interdisciplinary communication; (ii) hierarchical object-oriented structures provide useful abstractions for reuse, understandability, and long-term evolution; and (iii) SEP and the health care DSSA can be integrated into computer-aided software engineering (CASE) environments. These environments should support the rapid construction and certification of individualized systems from reuse libraries.
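    As a loose illustration of a DSSA-style standard interface for a reusable workstation component, the sketch below defines an abstract ambulatory-encounter service and one interchangeable implementation; the interface, methods, and data fields are hypothetical and not drawn from the paper.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Encounter:
    """A simplified ambulatory-care encounter record (illustrative fields)."""
    patient_id: str
    provider_id: str
    reason: str


class EncounterService(ABC):
    """A standard interface that any conforming workstation component must honour."""

    @abstractmethod
    def record_encounter(self, encounter):
        """Store an encounter and return its identifier."""

    @abstractmethod
    def encounters_for_patient(self, patient_id):
        """Return all encounters recorded for a patient."""


class InMemoryEncounterService(EncounterService):
    """One interchangeable implementation of the standard interface."""

    def __init__(self):
        self._store = {}

    def record_encounter(self, encounter):
        encounter_id = f"enc-{len(self._store) + 1}"
        self._store[encounter_id] = encounter
        return encounter_id

    def encounters_for_patient(self, patient_id):
        return [e for e in self._store.values() if e.patient_id == patient_id]


service = InMemoryEncounterService()
service.record_encounter(Encounter("p-001", "dr-042", "follow-up visit"))
print(len(service.encounters_for_patient("p-001")))
```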
