Science.gov

Sample records for integrating software architectures

  1. Integrating MPI and deduplication engines: a software architecture roadmap.

    PubMed

    Baksi, Dibyendu

    2009-03-01

    The objective of this paper is to clarify the major concepts related to architecture and design of patient identity management software systems so that an implementor looking to solve a specific integration problem in the context of a Master Patient Index (MPI) and a deduplication engine can address the relevant issues. The ideas presented are illustrated in the context of a reference use case from the Integrating the Healthcare Enterprise Patient Identifier Cross-referencing (IHE PIX) profile. Sound software engineering principles using the latest design paradigm of model driven architecture (MDA) are applied to define different views of the architecture. The main contribution of the paper is a clear software architecture roadmap for implementors of patient identity management systems. Conceptual design in terms of static and dynamic views of the interfaces is provided as an example of a platform independent model. This makes the roadmap applicable to any specific MPI solution, deduplication library or software platform. Stakeholders in need of integration of MPIs and deduplication engines can evaluate vendor specific solutions and software platform technologies in terms of fundamental concepts and can make informed decisions that preserve investment. This also allows freedom from vendor lock-in and the ability to kick-start integration efforts based on a solid architecture.
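
    As a rough illustration of the kind of platform-independent interfaces the roadmap argues for, the sketch below separates a deduplication-engine contract from an MPI contract so that either side can be replaced without touching the other. The class, method, and field names are hypothetical and are not taken from the paper or from the IHE PIX specification.

```python
# Hypothetical sketch of platform-independent interfaces for an MPI and a
# deduplication engine; names are illustrative, not from the paper or IHE PIX.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PatientRecord:
    local_id: str            # identifier assigned by the source system
    domain: str              # assigning authority / identifier domain
    demographics: dict = field(default_factory=dict)   # name, birth date, ...

class DeduplicationEngine(ABC):
    """Contract for any matching library: score candidate record pairs."""

    @abstractmethod
    def match(self, record: PatientRecord,
              candidates: List[PatientRecord]) -> List[Tuple[PatientRecord, float]]:
        """Return (candidate, score) pairs above the engine's own threshold."""

class MasterPatientIndex(ABC):
    """Contract for the MPI: cross-reference identifiers across domains."""

    @abstractmethod
    def register(self, record: PatientRecord) -> str:
        """Add or link a record; return an enterprise-wide identifier."""

    @abstractmethod
    def query_ids(self, local_id: str, domain: str) -> List[str]:
        """Return the corresponding identifiers known in other domains."""
```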

  2. NASA Integrated Network Monitor and Control Software Architecture

    NASA Technical Reports Server (NTRS)

    Shames, Peter; Anderson, Michael; Kowal, Steve; Levesque, Michael; Sindiy, Oleg; Donahue, Kenneth; Barnes, Patrick

    2012-01-01

    The National Aeronautics and Space Administration (NASA) Space Communications and Navigation office (SCaN) has commissioned a series of trade studies to define a new architecture intended to integrate the three existing networks that it operates, the Deep Space Network (DSN), Space Network (SN), and Near Earth Network (NEN), into one integrated network that offers users a set of common, standardized services and interfaces. The integrated monitor and control architecture utilizes common software and common operator interfaces that can be deployed at all three network elements. This software uses state-of-the-art concepts such as a pool of re-programmable equipment that acts like a configurable software radio, distributed hierarchical control, and centralized management of the whole SCaN integrated network. For this trade space study, a model-based approach using SysML was adopted to describe and analyze several possible options for the integrated network monitor and control architecture. This model was used to refine the design and to drive the costing of the four different software options. This trade study modeled the three existing self-standing network elements at the point of departure, and then described how to integrate them using variations of new and existing monitor and control system components for the different proposed deployments under consideration. This paper will describe the trade space explored, the selected system architecture, the modeling and trade study methods, and some observations on useful approaches to implementing such model-based trade space representation and analysis.

  3. Integrity Constraint Monitoring in Software Development: Proposed Architectures

    NASA Technical Reports Server (NTRS)

    Fernandez, Francisco G.

    1997-01-01

    In the development of complex software systems, designers are required to obtain from many sources and manage vast amounts of knowledge of the system being built and communicate this information to personnel with a variety of backgrounds. Knowledge concerning the properties of the system, including the structure of, relationships between and limitations of the data objects in the system, becomes increasingly vital as the complexity of the system and the number of knowledge sources increases. Ensuring that violations of these properties do not occur becomes steadily more challenging. One approach toward managing the enforcement of system properties, called context monitoring, uses a centralized repository of integrity constraints and a constraint satisfiability mechanism for dynamic verification of property enforcement during program execution. The focus of this paper is to describe possible software architectures that define a mechanism for dynamically checking the satisfiability of a set of constraints on a program. The next section describes the context monitoring approach in general. Section 3 gives an overview of the work currently being done toward the addition of an integrity constraint satisfiability mechanism to a high-level programming language, SequenceL, and demonstrates how this model is being examined to develop a general software architecture. Section 4 describes possible architectures for a general constraint satisfiability mechanism, as well as an alternative approach that uses embedded database queries in lieu of an external monitor. The paper concludes with a brief summary outlining the current state of the research and future work.
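
    A minimal sketch of the context-monitoring idea described above, assuming a centralized repository of constraints that the running program checks at well-defined points. The names are illustrative; this is not the SequenceL mechanism discussed in the paper.

```python
# Minimal sketch of context monitoring: a central repository of integrity
# constraints checked dynamically during execution. Illustrative names only.
class ConstraintViolation(Exception):
    pass

class ConstraintMonitor:
    def __init__(self):
        self._constraints = []       # central repository: (description, predicate)

    def add(self, description, predicate):
        self._constraints.append((description, predicate))

    def check(self, state):
        """Verify every registered constraint against the current program state."""
        for description, predicate in self._constraints:
            if not predicate(state):
                raise ConstraintViolation(description)

# Usage: the application registers properties once, then calls check() at
# well-defined points (e.g., after each update to the monitored data objects).
monitor = ConstraintMonitor()
monitor.add("account balance must be non-negative",
            lambda s: s["balance"] >= 0)

state = {"balance": 100}
state["balance"] -= 30
monitor.check(state)                 # passes
state["balance"] -= 200
# monitor.check(state)               # would raise ConstraintViolation
```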

  4. Integrating software architectures for distributed simulations and simulation analysis communities.

    SciTech Connect

    Goldsby, Michael E.; Fellig, Daniel; Linebarger, John Michael; Moore, Patrick Curtis; Sa, Timothy J.; Hawley, Marilyn F.

    2005-10-01

    The one-year Software Architecture LDRD (No. 79819) was a cross-site effort between Sandia California and Sandia New Mexico. The purpose of this research was to further develop and demonstrate integrating software architecture frameworks for distributed simulation and distributed collaboration in the homeland security domain. The integrated frameworks were initially developed through the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC), sited at SNL/CA, and the National Infrastructure Simulation & Analysis Center (NISAC), sited at SNL/NM. The primary deliverable was a demonstration of both a federation of distributed simulations and a federation of distributed collaborative simulation analysis communities in the context of the same integrated scenario, which was the release of smallpox in San Diego, California. To our knowledge this was the first time such a combination of federations under a single scenario has ever been demonstrated. A secondary deliverable was the creation of the standalone GroupMeld™ collaboration client, which uses the GroupMeld™ synchronous collaboration framework. In addition, a small pilot experiment that used both integrating frameworks allowed a greater range of crisis management options to be performed and evaluated than would have been possible without the use of the frameworks.

  5. Integrated Modular Avionics for Spacecraft Software Architecture and Requirements

    NASA Astrophysics Data System (ADS)

    Deredempt, Marie-Helene; Rossignol, Alain; Windsor, James; De-Ferluc, Regis; Sanmarti, Joaquim; Thorn, Jason; Parisis, Paul; Quartier, Fernand; Vatrinet, Francis; Schoofs, Tobias; Crespo, Alfons; Galizzi, Julien; Garcia, Gerald; Arberet, Paul

    2012-08-01

    Designers of space missions for science, observation, exploration and telecom are now facing requirements such as long lifetime, autonomy and guaranteed safe operation in case of failure. New technical and industrial challenges will add complexity. The key to ensuring the success of future industrial projects is to answer the increasing demand for on-board processing by designing more scalable and modular architectures that enable new missions while improving lifecycle, costs of design, qualification, and security. Focusing on data processing, new technology such as time and space partitioning, part of the Integrated Modular Avionics (IMA) approach pioneered in the aeronautical domain and industrialized in the new generation of aircraft, was first analyzed for security and feasibility in the space domain by an ESA project on secure partitioning and a related working group. To complete these studies, the Integrated Modular Avionics for Space project, a current ESA project, has the objective of confirming the feasibility of time and space partitioning in the space domain using existing hardware and based on ARINC 653. By combining the efforts of industrial partners, the IMA for Space (IMA SP) project's main goals are first to focus on major topics such as the computational model, the impact of caches, the impact on process and tools, Failure Detection, Isolation and Recovery (FDIR), maintenance and I/O management in order to consolidate requirements; then to develop software solutions that meet those requirements; and lastly to implement these solutions in a demonstration phase with operational software.
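
    The central mechanism mentioned above, time and space partitioning, can be pictured as a repeating major frame in which each partition is dispatched only inside its own fixed time window. The toy schedule below is a conceptual illustration under that assumption; it is not ARINC 653 and not the IMA SP implementation, and all partition names and budgets are invented.

```python
# Toy illustration of ARINC 653-style time partitioning: a fixed major frame
# is divided into windows, and each partition only runs inside its own window.
# Purely conceptual; real systems enforce this in the kernel or hypervisor.
from dataclasses import dataclass

@dataclass
class Window:
    partition: str      # name of the application partition
    duration_ms: int    # time budget inside the major frame

MAJOR_FRAME = [
    Window("AOCS", 20),         # attitude and orbit control
    Window("PAYLOAD", 10),
    Window("TM_TC", 10),        # telemetry / telecommand handling
    Window("SPARE", 10),
]

def run_major_frame(frame, workloads):
    """Dispatch each partition for exactly its budgeted window."""
    elapsed = 0
    for w in frame:
        workloads.get(w.partition, lambda budget: None)(w.duration_ms)
        elapsed += w.duration_ms
    return elapsed   # total frame length, repeated cyclically

total = run_major_frame(MAJOR_FRAME, {
    "AOCS": lambda budget: print(f"AOCS runs for {budget} ms"),
})
print(f"major frame length: {total} ms")
```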

  6. Using an architectural approach to integrate heterogeneous, distributed software components

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Purtilo, James M.

    1995-01-01

    Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This paper describes improvements to the concepts of software packaging and some of our experiences in constructing complex software systems from a wide variety of components in different execution environments. Software packaging is a process that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Software packaging provides a context that relates such mechanisms to software integration processes and reduces the cost of configuring applications whose components are distributed or implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX make by providing a rule-based approach to software integration that is independent of execution environments.
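
    A simplistic analogue of the rule-based packaging idea: given the types of the components involved and a table of available translators and adapters, search for a chain that connects them. The adapter table and type names below are hypothetical and far simpler than what the authors' tool reasons over.

```python
# Simplistic analogue of rule-based software packaging: given component types
# and a table of available adapters, find a chain that connects two components.
# Illustrative only; entries in ADAPTERS are hypothetical examples.
from collections import deque

# (from_type, to_type) -> adapter/translator name
ADAPTERS = {
    ("c_struct", "xdr"): "rpcgen stub",
    ("xdr", "socket"): "rpc runtime",
    ("fortran_common", "c_struct"): "f2c bridge",
}

def integration_plan(source_type, target_type):
    """Breadth-first search for a sequence of adapters from source to target."""
    queue = deque([(source_type, [])])
    seen = {source_type}
    while queue:
        current, plan = queue.popleft()
        if current == target_type:
            return plan
        for (src, dst), name in ADAPTERS.items():
            if src == current and dst not in seen:
                seen.add(dst)
                queue.append((dst, plan + [name]))
    return None   # no way to integrate these components automatically

print(integration_plan("fortran_common", "socket"))
# ['f2c bridge', 'rpcgen stub', 'rpc runtime']
```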

  7. Assessment of the integration capability of system architectures from a complex and distributed software systems perspective

    NASA Astrophysics Data System (ADS)

    Leuchter, S.; Reinert, F.; Müller, W.

    2014-06-01

    Procurement and design of system architectures capable of network-centric operations demand an assessment scheme in order to compare different alternative realizations. In this contribution, an assessment method for system architectures targeted at the C4ISR domain is presented. The method addresses the integration capability of software systems from a complex and distributed software system perspective, focusing on communication, interfaces and software. The aim is to evaluate the capability to integrate a system or its functions within a system-of-systems network. The method uses approaches from software architecture quality assessment and applies them at the system architecture level. It features a specific goal tree of several dimensions that are relevant for enterprise integration. These dimensions have to be weighed against each other and aggregated using methods from normative decision theory in order to reflect the intention of the particular enterprise integration effort. The indicators and measurements for many of the considered quality features rely on a model-based view of systems, networks, and the enterprise. That means the method is applicable to system-of-systems specifications based on enterprise architectural frameworks relying on defined meta-models or domain ontologies for defining views and viewpoints. In the defense context we use the NATO Architecture Framework (NAF) to ground the respective system models. The proposed assessment method allows evaluating and comparing competing system designs with regard to their future integration potential. It is a contribution to the system-of-systems engineering methodology.
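
    The weighting-and-aggregation step resembles simple additive weighting, one elementary technique from normative decision theory. The sketch below uses made-up dimensions, weights, and scores purely for illustration; it is not the paper's goal tree.

```python
# Sketch of the weighting-and-aggregation step using simple additive weighting.
# Dimensions, weights, and scores are invented for illustration.
WEIGHTS = {                  # relative importance of integration dimensions
    "interfaces": 0.4,
    "communication": 0.35,
    "software_quality": 0.25,
}

CANDIDATES = {               # normalized scores (0..1) per system architecture
    "architecture_A": {"interfaces": 0.9, "communication": 0.6, "software_quality": 0.7},
    "architecture_B": {"interfaces": 0.5, "communication": 0.9, "software_quality": 0.8},
}

def total_score(scores, weights):
    assert abs(sum(weights.values()) - 1.0) < 1e-9   # weights must sum to one
    return sum(weights[d] * scores[d] for d in weights)

for name, scores in CANDIDATES.items():
    print(name, round(total_score(scores, WEIGHTS), 3))
# architecture_A 0.745
# architecture_B 0.715
```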

  8. Robust Software Architecture for Robots

    NASA Technical Reports Server (NTRS)

    Aghazarian, Hrand; Baumgartner, Eric; Garrett, Michael

    2009-01-01

    Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and software that embodies the architecture. The architecture was conceived in the spirit of current practice in designing modular, hard real-time aerospace systems. The architecture facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.

  9. Software Architecture Technology Initiative

    DTIC Science & Technology

    2007-05-01

    Carnegie Mellon University. Software Architecture Technology Initiative, Mark Klein, Third Annual SATURN Workshop, May 2007. Presented at the SEI Software Architecture Technology User Network (SATURN) Workshop; approved for public release, distribution unlimited.

  10. Software Architecture Evolution

    ERIC Educational Resources Information Center

    Barnes, Jeffrey M.

    2013-01-01

    Many software systems eventually undergo changes to their basic architectural structure. Such changes may be prompted by new feature requests, new quality attribute requirements, changing technology, or other reasons. Whatever the causes, architecture evolution is commonplace in real-world software projects. Today's software architects, however,…

  11. Software Architecture Technology Initiative

    DTIC Science & Technology

    2008-04-01

    Carnegie Mellon University. Software Architecture Technology Initiative, SATURN 2008. Presented at the SEI Software Architecture Technology User Network (SATURN) Workshop, 30 April to 1 May 2008, Pittsburgh, PA.

  12. Architecture for Verifiable Software

    NASA Technical Reports Server (NTRS)

    Reinholtz, William; Dvorak, Daniel

    2005-01-01

    Verifiable MDS Architecture (VMA) is a software architecture that facilitates the construction of highly verifiable flight software for NASA's Mission Data System (MDS), especially for smaller missions subject to cost constraints. More specifically, the purpose served by VMA is to facilitate aggressive verification and validation of flight software while imposing a minimum of constraints on overall functionality. VMA exploits the state-based architecture of the MDS and partitions verification issues into elements susceptible to independent verification and validation, in such a manner that scaling issues are minimized, so that relatively large software systems can be aggressively verified in a cost-effective manner.

  13. The ALMA software architecture

    NASA Astrophysics Data System (ADS)

    Schwarz, Joseph; Farris, Allen; Sommer, Heiko

    2004-09-01

    The software for the Atacama Large Millimeter Array (ALMA) is being developed by many institutes on two continents. The software itself will function in a distributed environment, from the 0.5-14 km baselines that separate antennas to the larger distances that separate the array site at the Llano de Chajnantor in Chile from the operations and user support facilities in Chile, North America and Europe. Distributed development demands 1) interfaces that allow separated groups to work with minimal dependence on their counterparts at other locations; and 2) a common architecture to minimize duplication and ensure that developers can always perform similar tasks in a similar way. The Container/Component model provides a blueprint for the separation of functional from technical concerns: application developers concentrate on implementing functionality in Components, which depend on Containers to provide them with services such as access to remote resources, transparent serialization of entity objects to XML, logging, error handling and security. Early system integrations have verified that this architecture is sound and that developers can successfully exploit its features. The Containers and their services are provided by a system-oriented development team as part of the ALMA Common Software (ACS), middleware that is based on CORBA.
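
    A rough sketch of the Container/Component separation of concerns described above: the container supplies technical services (here only logging) and manages component lifecycle, while the component implements purely functional behaviour. This is a conceptual illustration with invented names, not the ACS API.

```python
# Rough sketch of the Container/Component separation of concerns: the
# container supplies technical services (here just logging) and activates
# components; components implement only functional behaviour. Conceptual
# illustration only, not the ALMA Common Software (ACS) API.
import logging

class Container:
    """Provides technical services and manages component lifecycle."""
    def __init__(self, name):
        logging.basicConfig(level=logging.INFO)   # configure the logging service once
        self._logger = logging.getLogger(name)
        self._components = {}

    def activate(self, name, component_cls):
        component = component_cls(services=self._logger)
        self._components[name] = component
        self._logger.info("activated component %s", name)
        return component

class AntennaController:
    """A functional component: it only knows its domain logic."""
    def __init__(self, services):
        self._log = services            # technical services injected by the container

    def point(self, azimuth_deg, elevation_deg):
        self._log.info("pointing to az=%.1f el=%.1f", azimuth_deg, elevation_deg)

container = Container("acs-like-container")
antenna = container.activate("ANTENNA_01", AntennaController)
antenna.point(180.0, 45.0)
```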

  14. Software architecture design domain

    SciTech Connect

    White, S.A.

    1996-12-31

    Software architectures can provide a basis for the capture and subsequent reuse of design knowledge. The goal of software architecture is to allow the design of a system to take place at a higher level of abstraction; a level concerned with components, connections, constraints, and rationale. This architectural view of software adds a new layer of abstraction to the traditional design phase of software development. It has resulted in a flurry of activity towards techniques, tools, and architectural design languages developed specifically to assist with this activity. An analysis of architectural descriptions, even though they differ in notation, shows a common set of key constructs that are present across widely varying domains. These common aspects form a core set of constructs that should belong to any ADL in order for the language to offer the ability to specify software systems at the architectural level. This analysis also revealed a second set of constructs which serve to expand the first set, thereby improving the syntax and semantics. These constructs are classified according to whether they provide representation and analysis support for architectures belonging to many varying application domains (domain-independent constructs) or to a particular application domain (domain-dependent constructs). This paper presents the constructs of these two classes and their placement in the architecture design domain, and shows how they may be used to classify, select, and analyze proclaimed architectural design languages (ADLs).

  15. Software architecture of the III/FBI segment of the FBI's integrated automated identification system

    NASA Astrophysics Data System (ADS)

    Booker, Brian T.

    1997-02-01

    This paper will describe the software architecture of the Interstate Identification Index (III/FBI) Segment of the FBI's Integrated Automated Fingerprint Identification System (IAFIS). IAFIS is currently under development, with deployment to begin in 1998. III/FBI will provide the repository of criminal history and photographs for criminal subjects, as well as identification data for military and civilian federal employees. Services provided by III/FBI include maintenance of the criminal and civil data, subject search of the criminal and civil data, and response generation services for IAFIS. III/FBI software will comprise both COTS products and an estimated 250,000 lines of developed C code. This paper will describe the following: (1) the high-level requirements of the III/FBI software; (2) the decomposition of the III/FBI software into Computer Software Configuration Items (CSCIs); (3) the top-level design of the III/FBI CSCIs; and (4) the relationships among the developed CSCIs and the COTS products that will comprise the III/FBI software.

  16. Decentralized Software Architecture

    DTIC Science & Technology

    2002-12-01

    to describe software. Thus, the essential anthropocentrism of the term's traditional definition suggests exploring a social process that illustrates...and disagreement outside it. The anthropocentrism of the concept may seem irrelevant to the concerns of software architecture, but the missing link

  17. EZ-Rhizo: integrated software for the fast and accurate measurement of root system architecture.

    PubMed

    Armengaud, Patrick; Zambaux, Kevin; Hills, Adrian; Sulpice, Ronan; Pattison, Richard J; Blatt, Michael R; Amtmann, Anna

    2009-03-01

    The root system is essential for the growth and development of plants. In addition to anchoring the plant in the ground, it is the site of uptake of water and minerals from the soil. Plant root systems show an astonishing plasticity in their architecture, which allows for optimal exploitation of diverse soil structures and conditions. The signalling pathways that enable plants to sense and respond to changes in soil conditions, in particular nutrient supply, are a topic of intensive research, and root system architecture (RSA) is an important and obvious phenotypic output. At present, the quantitative description of RSA is labour intensive and time consuming, even using the currently available software, and the lack of a fast RSA measuring tool hampers forward and quantitative genetics studies. Here, we describe EZ-Rhizo: a Windows-integrated and semi-automated computer program designed to detect and quantify multiple RSA parameters from plants growing on a solid support medium. The method is non-invasive, enabling the user to follow RSA development over time. We have successfully applied EZ-Rhizo to evaluate natural variation in RSA across 23 Arabidopsis thaliana accessions, and have identified new RSA determinants as a basis for future quantitative trait locus (QTL) analysis.

  18. ALMA software architecture

    NASA Astrophysics Data System (ADS)

    Schwarz, Joseph; Raffi, Gianni

    2002-12-01

    The Atacama Large Millimeter Array (ALMA) is a joint project involving astronomical organizations in Europe and North America. ALMA will consist of at least 64 12-meter antennas operating in the millimeter and sub-millimeter range. It will be located at an altitude of about 5000m in the Chilean Atacama desert. The primary challenge to the development of the software architecture is the fact that both its development and runtime environments will be distributed. Groups at different institutes will develop the key elements such as Proposal Preparation tools, Instrument operation, On-line calibration and reduction, and Archiving. The Proposal Preparation software will be used primarily at scientists' home institutions (or on their laptops), while Instrument Operations will execute on a set of networked computers at the ALMA Operations Support Facility. The ALMA Science Archive, itself to be replicated at several sites, will serve astronomers worldwide. Building upon the existing ALMA Common Software (ACS), the system architects will prepare a robust framework that will use XML-encoded entity objects to provide an effective solution to the persistence needs of this system, while remaining largely independent of any underlying DBMS technology. Independence of distributed subsystems will be facilitated by an XML- and CORBA-based pass-by-value mechanism for exchange of objects. Proof of concept (as well as a guide to subsystem developers) will come from a prototype whose details will be presented.

  1. Compositional Specification of Software Architecture

    NASA Technical Reports Server (NTRS)

    Penix, John; Lau, Sonie (Technical Monitor)

    1998-01-01

    This paper describes our experience using parameterized algebraic specifications to model properties of software architectures. The goal is to model the decomposition of requirements independent of the style used to implement the architecture. We begin by providing an overview of the role of architecture specification in software development. We then describe how architecture specifications are built up from component and connector specifications and give an overview of insights gained from a case study used to validate the method.

  2. Software Architecture Evolution

    DTIC Science & Technology

    2013-12-01

    adequately. Swanson's taxonomy was so influential that it was incorporated, many years later, into ISO/IEC 14764 [109, § 6.2], an international standard on...borrow the software maintenance typology from ISO/IEC 14764 for their "Need for Evolution" dimension. I am aware of one empirical study that has examined...Integration. The feature prototypes were designed to allow us to develop, in isolation, the main features required for an evolution planning plug-in

  3. Toward Measures for Software Architectures

    DTIC Science & Technology

    2006-03-01

    application of measurement technology to software architectures. The ultimate goal of this body of work is to provide measurement guidance and quantitative...ISO/IEC FCD 9126-1.2. Information Technology – Software Product Quality, Part 1: Quality Model, 1998. [Kruchten 95] Kruchten, P. "The 4+1 View Model...

  4. Project Integration Architecture: Application Architecture

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, the ability to transport information between those applications is enabled.

  5. Software synthesis using generic architectures

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay

    1993-01-01

    A framework for synthesizing software systems based on abstracting software system designs and the design process is described. The result of such an abstraction process is a generic architecture and the process knowledge for customizing the architecture. The customization process knowledge is used to assist a designer in customizing the architecture, as opposed to completely automating the design of systems. The approach is illustrated using an implemented example of a generic tracking architecture that was customized in two different domains. How the designs produced using KASE compare to the original designs of the two systems is discussed, along with current work and plans for extending KASE to other application areas.

  6. A Practical Software Architecture for Virtual Universities

    ERIC Educational Resources Information Center

    Xiang, Peifeng; Shi, Yuanchun; Qin, Weijun

    2006-01-01

    This article introduces a practical software architecture called CUBES, which focuses on system integration and evolvement for online virtual universities. The key of CUBES is a supporting platform that helps to integrate and evolve heterogeneous educational applications developed by different organizations. Both standardized educational…

  7. Transformation as a Design Process and Runtime Architecture for High Integrity Software

    SciTech Connect

    Bespalko, S.J.; Winter, V.L.

    1999-04-05

    We have discussed two aspects of creating high integrity software that greatly benefit from the availability of transformation technology, which in this case is manifested by the requirement for a sophisticated backtracking parser. First, because of the potential for correctly manipulating programs via small changes, an automated non-procedural transformation system can be a valuable tool for constructing high assurance software. Second, modeling the process of translating data into information as a (perhaps context-dependent) grammar leads to an efficient, compact implementation. From a practical perspective, the transformation process should begin in the domain language in which a problem is initially expressed. Thus, in order for a transformation system to be practical, it must be flexible with respect to domain-specific languages. We have argued that transformation applied to specification results in a highly reliable system. We also attempted to briefly demonstrate that transformation technology applied to the runtime environment will result in a safe and secure system. We thus believe that sophisticated multi-lookahead backtracking parsing technology is central to the task of demonstrating the existence of high integrity software (HIS).

  8. Software Defined Radio with Parallelized Software Architecture

    NASA Technical Reports Server (NTRS)

    Heckler, Greg

    2013-01-01

    This software implements software-defined radio processing over multi-core, multi-CPU systems in a way that maximizes the use of CPU resources in the system. The software treats each processing step in either a communications or navigation modulator or demodulator system as an independent, threaded block. Each threaded block is defined with a programmable number of input or output buffers; these buffers are implemented using POSIX pipes. In addition, each threaded block is assigned a unique thread upon block installation. A modulator or demodulator system is built by assembly of the threaded blocks into a flow graph, which assembles the processing blocks to accomplish the desired signal processing. This software architecture allows the software to scale effortlessly between single CPU/single-core computers or multi-CPU/multi-core computers without recompilation. NASA spaceflight and ground communications systems currently rely exclusively on ASICs or FPGAs. This software allows low- and medium-bandwidth (100 bps to approximately 50 Mbps) software defined radios to be designed and implemented solely in C/C++ software, while lowering development costs and facilitating reuse and extensibility.
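
    A minimal analogue of the threaded-block scheme described above, assuming one thread per processing block and POSIX pipes as the connecting buffers. The real software is written in C/C++; the block names and the trivial gain stage here are illustrative only.

```python
# Minimal analogue of the threaded-block idea: each processing step runs in
# its own thread and blocks are connected by POSIX pipes to form a flow graph.
# Block names and the toy "processing" are illustrative.
import os
import threading

def source_block(write_fd, samples):
    with os.fdopen(write_fd, "wb") as out:
        for s in samples:
            out.write(s.to_bytes(2, "little"))     # closing the pipe signals EOF

def gain_block(read_fd, write_fd, gain):
    with os.fdopen(read_fd, "rb") as inp, os.fdopen(write_fd, "wb") as out:
        while chunk := inp.read(2):
            value = int.from_bytes(chunk, "little") * gain
            out.write((value & 0xFFFF).to_bytes(2, "little"))

def sink_block(read_fd, results):
    with os.fdopen(read_fd, "rb") as inp:
        while chunk := inp.read(2):
            results.append(int.from_bytes(chunk, "little"))

# Assemble the flow graph: source -> gain -> sink, one pipe per connection.
r1, w1 = os.pipe()
r2, w2 = os.pipe()
results = []
threads = [
    threading.Thread(target=source_block, args=(w1, [1, 2, 3, 4])),
    threading.Thread(target=gain_block, args=(r1, w2, 10)),
    threading.Thread(target=sink_block, args=(r2, results)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)   # [10, 20, 30, 40]
```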

  9. Project Integration Architecture: Architectural Overview

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2001-01-01

    The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. By being a single, self-revealing architecture, the ability to develop single tools, for example a single graphical user interface, to span all applications is enabled. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, the ability to transport information between those applications becomes possible. Object encapsulation further allows information to become, in a sense, self-aware, knowing things such as its own dimensionality and providing functionality appropriate to its kind.

  10. Software design by reusing architectures

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay; Nii, H. Penny

    1992-01-01

    Abstraction fosters reuse by providing a class of artifacts that can be instantiated or customized to produce a set of artifacts meeting different specific requirements. It is proposed that significant leverage can be obtained by abstracting software system designs and the design process. The result of such an abstraction is a generic architecture and a set of knowledge-based, customization tools that can be used to instantiate the generic architecture. An approach for designing software systems based on the above idea is described. The approach is illustrated through an implemented example, and the advantages and limitations of the approach are discussed.

  11. Software Architecture for Autonomous Spacecraft

    NASA Technical Reports Server (NTRS)

    Shih, Jimmy S.

    1997-01-01

    The thesis objective is to design an autonomous spacecraft architecture to perform both deliberative and reactive behaviors. The Autonomous Small Planet In-Situ Reaction to Events (ASPIRE) project uses the architecture to integrate several autonomous technologies for a comet orbiter mission.

  12. From data to the decision: A software architecture to integrate predictive modelling in clinical settings.

    PubMed

    Martinez-Millana, A; Fernandez-Llatas, C; Sacchi, L; Segagni, D; Guillen, S; Bellazzi, R; Traver, V

    2015-08-01

    The application of statistics and mathematics over large amounts of data is providing healthcare systems with new tools for screening and managing multiple diseases. Nonetheless, these tools have many technical and clinical limitations as they are based on datasets with concrete characteristics. This proposition paper describes a novel architecture focused on providing a validation framework for discrimination and prediction models in the screening of Type 2 diabetes. For that, the architecture has been designed to gather different data sources under a common data structure and, furthermore, to be controlled by a centralized component (Orchestrator) in charge of directing the interaction flows among data sources, models and graphical user interfaces. This innovative approach aims to overcome the data-dependency of the models by providing a validation framework for the models as they are used within clinical settings.

  13. Software Architecture for Planetary and Lunar Robotics

    NASA Technical Reports Server (NTRS)

    Utz, Hans; Fong, Terry; Nesnas, Issa A. D.

    2006-01-01

    A viewgraph presentation on the role that software architecture plays in space and lunar robotics is shown. The topics include: 1) The Intelligent Robotics Group; 2) The Lunar Mission; 3) Lunar Robotics; and 4) Software Architecture for Space Robotics.

  14. SOIS and Software Reference Architecture

    NASA Astrophysics Data System (ADS)

    Torelli, Felice; Taylor, Chris; Viana Sanchez, Aitor; Mendham, Peter; Fowell, Stuart

    2011-08-01

    In recent years ESA, in conjunction with the European space industry, has supported a number of initiatives aimed at increasing the level of standardisation in the elements composing a spacecraft avionics system. This is an ongoing process with many contributors including SAVOIR, SAVOIR-FAIRE, CCSDS SOIS and the ECSS protocol standardisation activities. The SOIS service framework and the accompanying ECSS protocols play a key role in the evolution of flight avionics as, on the one side they standardise the external hardware interfaces required for interconnection and on the other standardise the communication services seen by applications. Much still remains to be done; in particular, the communications-based services defined by SOIS must be incorporated into a standard software architecture such that real implementations can benefit from the standards. Aspects such as the use of Electronic Data Sheets need to be evaluated, as does the use of plug and play concepts. This paper will present the latest evolution of the SOIS architecture and related CCSDS recommendations including the relationship and status of the software reference architecture being developed under SAVOIR-FAIRE. The potential impacts on the design of a system adopting the standardised services will also be discussed in three main areas: communication with external equipments, abstraction of sensors & actuators and software bus.

  15. Extension of the AMBER molecular dynamics software to Intel's Many Integrated Core (MIC) architecture

    NASA Astrophysics Data System (ADS)

    Needham, Perri J.; Bhuiyan, Ashraf; Walker, Ross C.

    2016-04-01

    We present an implementation of explicit solvent particle mesh Ewald (PME) classical molecular dynamics (MD) within the PMEMD molecular dynamics engine, that forms part of the AMBER v14 MD software package, that makes use of Intel Xeon Phi coprocessors by offloading portions of the PME direct summation and neighbor list build to the coprocessor. We refer to this implementation as pmemd MIC offload and in this paper present the technical details of the algorithm, including basic models for MPI and OpenMP configuration, and analyze the resultant performance. The algorithm provides the best performance improvement for large systems (>400,000 atoms), achieving a ∼35% performance improvement for satellite tobacco mosaic virus (1,067,095 atoms) when 2 Intel E5-2697 v2 processors (2 ×12 cores, 30M cache, 2.7 GHz) are coupled to an Intel Xeon Phi coprocessor (Model 7120P-1.238/1.333 GHz, 61 cores). The implementation utilizes a two-fold decomposition strategy: spatial decomposition using an MPI library and thread-based decomposition using OpenMP. We also present compiler optimization settings that improve the performance on Intel Xeon processors, while retaining simulation accuracy.

  16. A software architecture for multidisciplinary applications: Integrating task and data parallelism

    NASA Technical Reports Server (NTRS)

    Chapman, Barbara; Mehrotra, Piyush; Vanrosendale, John; Zima, Hans

    1994-01-01

    Data parallel languages such as Vienna Fortran and HPF can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are of a multidisciplinary and heterogeneous nature and thus do not fit well into the data parallel paradigm. In this paper we present new Fortran 90 language extensions to fill this gap. Tasks can be spawned as asynchronous activities in a homogeneous or heterogeneous computing environment; they interact by sharing access to Shared Data Abstractions (SDA's). SDA's are an extension of Fortran 90 modules, representing a pool of common data, together with a set of Methods for controlled access to these data and a mechanism for providing persistent storage. Our language supports the integration of data and task parallelism as well as nested task parallelism and thus can be used to express multidisciplinary applications in a natural and efficient way.
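
    The Shared Data Abstraction (SDA) concept, a pool of common data reachable only through methods that provide controlled access and shared by asynchronously spawned tasks, has a rough analogue in a lock-protected shared object used by concurrent threads. The paper's mechanism extends Fortran 90; the sketch below, with invented discipline names, only mirrors the idea.

```python
# Conceptual analogue of a Shared Data Abstraction (SDA): a pool of common
# data whose methods provide controlled (mutually exclusive) access, shared
# by asynchronously spawned tasks. This toy makes no attempt to order the
# two tasks against each other; it only mirrors the SDA idea.
import threading

class FlowFieldSDA:
    """Shared data plus access methods; the lock enforces controlled access."""
    def __init__(self):
        self._lock = threading.Lock()
        self._pressure = {}                 # the pool of common data

    def put(self, cell, value):
        with self._lock:
            self._pressure[cell] = value

    def average(self):
        with self._lock:
            values = self._pressure.values()
            return sum(values) / max(len(values), 1)

def aerodynamics_task(sda):                 # one discipline code
    for cell in range(100):
        sda.put(cell, 1.0 + 0.01 * cell)

def structures_task(sda, out):              # another discipline code
    out.append(sda.average())

sda, out = FlowFieldSDA(), []
tasks = [threading.Thread(target=aerodynamics_task, args=(sda,)),
         threading.Thread(target=structures_task, args=(sda, out))]
for t in tasks:
    t.start()
for t in tasks:
    t.join()
print(out)
```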

  17. Generic Software Architecture for Launchers

    NASA Astrophysics Data System (ADS)

    Carre, Emilien; Gast, Philippe; Hiron, Emmanuel; Leblanc, Alain; Lesens, David; Mescam, Emmanuelle; Moro, Pierre

    2015-09-01

    The definition and reuse of a generic software architecture for launchers is not common, for several reasons: the number of European launcher families is very small (Ariane 5 and Vega for the last decades); the real-time constraints (reactivity and determinism needs) are very hard; and low levels of versatility are required (often implying an ad hoc development of the launcher mission). In comparison, satellites are often built on a generic platform made up of reusable hardware building blocks (processors, star-trackers, gyroscopes, etc.) and reusable software building blocks (middleware, TM/TC, On Board Control Procedure, etc.). While some of these reasons are still valid (e.g. the limited number of developments), the increase in available CPU power today makes achievable an approach based on a generic time-triggered middleware (ensuring the full determinism of the system) and a centralised mission and vehicle management function (offering more flexibility in the design and facilitating long-term maintenance). This paper presents an example of a generic software architecture which could be envisaged for future launchers, based on the previously described principles and supported by model-driven engineering and automatic code generation.

  18. Future Trends of Software Technology and Applications: Software Architecture

    DTIC Science & Technology

    2006-01-01

    Future Trends of Software Technology and Applications: Software Architecture. Paul Clements, Software Engineering Institute, Carnegie Mellon University, 2006. Sponsored by the U.S. Department of Defense.

  1. Project Integration Architecture

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2008-01-01

    The Project Integration Architecture (PIA) is a distributed, object-oriented, conceptual software framework for the generation, organization, publication, integration, and consumption of all information involved in any complex technological process in a manner that is intelligible to both computers and humans. In the development of PIA, it was recognized that in order to provide a single computational environment in which all information associated with any given complex technological process could be viewed, reviewed, manipulated, and shared, it is necessary to formulate all the elements of such a process on the most fundamental level. In this formulation, any such element is regarded as being composed of any or all of three parts: input information, some transformation of that input information, and some useful output information. Another fundamental principle of PIA is the assumption that no consumer of information, whether human or computer, can be assumed to have any useful foreknowledge of an element presented to it. Consequently, a PIA-compliant computing system is required to be ready to respond to any questions, posed by the consumer, concerning the nature of the proffered element. In colloquial terms, a PIA-compliant system must be prepared to provide all the information needed to place the element in context. To satisfy this requirement, PIA extends the previously established object-oriented-programming concept of self-revelation and applies it on a grand scale. To enable pervasive use of self-revelation, PIA exploits another previously established object-oriented-programming concept - that of semantic infusion through class derivation. By means of self-revelation and semantic infusion through class derivation, a consumer of information can inquire about the contents of all information entities (e.g., databases and software) and can interact appropriately with those entities. Other key features of PIA are listed.

  2. Software Management Environment (SME) concepts and architecture, revision 1

    NASA Technical Reports Server (NTRS)

    Hendrick, Robert; Kistler, David; Valett, Jon

    1992-01-01

    This document presents the concepts and architecture of the Software Management Environment (SME), developed for the Software Engineering Branch of the Flight Dynamics Division (FDD) of GSFC. The SME provides an integrated set of experience-based management tools that can assist software development managers in managing and planning flight dynamics software development projects. This document provides a high-level description of the types of information required to implement such an automated management tool.

  3. Hardware Architecture Study for NASA's Space Software Defined Radios

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Scardelletti, Maximilian C.; Mortensen, Dale J.; Kacpura, Thomas J.; Andro, Monty; Smith, Carl; Liebetreu, John

    2008-01-01

    This study defines a hardware architecture approach for software defined radios to enable commonality among NASA space missions. The architecture accommodates a range of reconfigurable processing technologies including general purpose processors, digital signal processors, field programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs), in addition to flexible and tunable radio frequency (RF) front-ends to satisfy varying mission requirements. The hardware architecture consists of modules, radio functions, and interfaces. The modules are a logical division of common radio functions that comprise a typical communication radio. This paper describes the architecture details, module definitions, and the typical functions on each module as well as the module interfaces. Trade-offs between a component-based, custom architecture and a functional-based, open architecture are described. The architecture does not specify the internal physical implementation within each module, nor does the architecture mandate the standards or ratings of the hardware used to construct the radios.

  4. The Software Architecture of the Upgraded ESA DRAMA Software Suite

    NASA Astrophysics Data System (ADS)

    Kebschull, Christopher; Flegel, Sven; Gelhaus, Johannes; Mockel, Marek; Braun, Vitali; Radtke, Jonas; Wiedemann, Carsten; Vorsmann, Peter; Sanchez-Ortiz, Noelia; Krag, Holger

    2013-08-01

    In the beginnings of man's space flight activities there was the belief that space is so big that everybody could use it without any repercussions. However, during the last six decades the increasing use of Earth's orbits has led to a rapid growth in the space debris environment, which has a big influence on current and future space missions. For this reason ESA issued the "Requirements on Space Debris Mitigation for ESA Projects" [1] in 2008, which apply to all ESA missions henceforth. The DRAMA (Debris Risk Assessment and Mitigation Analysis) software suite had been developed to support the planning of space missions to comply with these requirements. During the last year the DRAMA software suite has been upgraded under ESA contract by TUBS and DEIMOS to include additional tools and increase the performance of existing ones. This paper describes the overall software architecture of the ESA DRAMA software suite. Specifically, the new graphical user interface, which manages the five main tools ARES (Assessment of Risk Event Statistics), MIDAS (MASTER-based Impact Flux and Damage Assessment Software), OSCAR (Orbital Spacecraft Active Removal), CROC (Cross Section of Complex Bodies) and SARA (Re-entry Survival and Risk Analysis), is discussed. The advancements are highlighted as well as the challenges that arise from the integration of the five tool interfaces. A framework had been developed at the ILR and was used for MASTER-2009 and PROOF-2009. The Java-based GUI framework enables cross-platform deployment, and its underlying model-view-presenter (MVP) software pattern meets strict design requirements necessary to ensure a robust and reliable method of operation in an environment where the GUI is separated from the processing back-end. While the GUI framework evolved with each project, allowing an increasing degree of integration of services like validators for input fields, it has also increased in complexity. The paper will conclude with an outlook on…

  5. Implications of Responsive Space on the Flight Software Architecture

    NASA Technical Reports Server (NTRS)

    Wilmot, Jonathan

    2006-01-01

    The Responsive Space initiative has several implications for flight software that need to be addressed not only within the run-time element, but in the development infrastructure and software life-cycle process elements as well. The runtime element must at a minimum support Plug & Play, while the development and process elements need to incorporate methods to quickly generate the needed documentation, code, tests, and all of the artifacts required of flight quality software. Very rapid response times go even further, and imply little or no new software development, requiring instead the use of only pre-developed and certified software modules that can be integrated and tested through automated methods. These elements have typically been addressed individually with significant benefits, but it is when they are combined that they can have the greatest impact on Responsive Space. The Flight Software Branch at NASA's Goddard Space Flight Center has been developing the runtime, infrastructure and process elements needed for rapid integration with the Core Flight Software System (CFS) architecture. The CFS architecture consists of three main components: the core Flight Executive (cFE), the component catalog, and the Integrated Development Environment (IDE). This paper will discuss the design of the components, how they facilitate rapid integration, and lessons learned as the architecture is utilized for an upcoming spacecraft.

  6. A software architecture for automating operations processes

    NASA Technical Reports Server (NTRS)

    Miller, Kevin J.

    1994-01-01

    The Operations Engineering Lab (OEL) at JPL has developed a software architecture based on an integrated toolkit approach for simplifying and automating mission operations tasks. The toolkit approach is based on building adaptable, reusable graphical tools that are integrated through a combination of libraries, scripts, and system-level user interface shells. The graphical interface shells are designed to integrate and visually guide a user through the complex steps in an operations process. They provide a user with an integrated system-level picture of an overall process, defining the required inputs and possible output through interactive on-screen graphics. The OEL has developed the software for building these process-oriented graphical user interface (GUI) shells. The OEL Shell development system (OEL Shell) is an extension of JPL's Widget Creation Library (WCL). The OEL Shell system can be used to easily build user interfaces for running complex processes, applications with extensive command-line interfaces, and tool-integration tasks. The interface shells display a logical process flow using arrows and box graphics. They also allow a user to select which output products are desired and which input sources are needed, eliminating the need to know which program and its associated command-line parameters must be executed in each case. The shells have also proved valuable for use as operations training tools because of the OEL Shell hypertext help environment. The OEL toolkit approach is guided by several principles, including the use of ASCII text file interfaces with a multimission format, Perl scripts for mission-specific adaptation code, and programs that include a simple command-line interface for batch mode processing. Projects can adapt the interface shells by simple changes to the resources configuration file. This approach has allowed the development of sophisticated, automated software systems that are easy, cheap, and fast to build. This paper will
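
    A toy analogue of the configuration-driven adaptation described above, assuming a resources file that maps process steps to command lines so that a project can change the tools invoked without touching the driver. The OEL work itself used Perl scripts and JPL's Widget Creation Library; the step names and commands below are invented.

```python
# Toy analogue of adapting a process shell through a configuration file: the
# process steps and their command lines live in a resources description, so a
# project can change the tools invoked without touching this driver.
# Step names and commands are invented for illustration.
import json
import subprocess

RESOURCES = json.loads("""
{
  "steps": [
    {"name": "collect inputs", "command": ["echo", "gathering telemetry files"]},
    {"name": "generate products", "command": ["echo", "running product generator"]}
  ]
}
""")

def run_process(resources):
    """Run each configured step in order, batch-mode style."""
    for step in resources["steps"]:
        print(f"== {step['name']} ==")
        subprocess.run(step["command"], check=True)

run_process(RESOURCES)
```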

  7. Architecture of a high-performance surgical guidance system based on C-arm cone-beam CT: software platform for technical integration and clinical translation

    NASA Astrophysics Data System (ADS)

    Uneri, Ali; Schafer, Sebastian; Mirota, Daniel; Nithiananthan, Sajendra; Otake, Yoshito; Reaungamornrat, Sureerat; Yoo, Jongheun; Stayman, J. Webster; Reh, Douglas; Gallia, Gary L.; Khanna, A. Jay; Hager, Gregory; Taylor, Russell H.; Kleinszig, Gerhard; Siewerdsen, Jeffrey H.

    2011-03-01

    Intraoperative imaging modalities are becoming more prevalent in recent years, and the need for integration of these modalities with surgical guidance is rising, creating new possibilities as well as challenges. In the context of such emerging technologies and new clinical applications, a software architecture for cone-beam CT (CBCT) guided surgery has been developed with emphasis on binding open-source surgical navigation libraries and integrating intraoperative CBCT with novel, application-specific registration and guidance technologies. The architecture design is focused on accelerating translation of task-specific technical development in a wide range of applications, including orthopaedic, head-and-neck, and thoracic surgeries. The surgical guidance system is interfaced with a prototype mobile C-arm for high-quality CBCT and through a modular software architecture, integration of different tools and devices consistent with surgical workflow in each of these applications is realized. Specific modules are developed according to the surgical task, such as: 3D-3D rigid or deformable registration of preoperative images, surgical planning data, and up-to-date CBCT images; 3D-2D registration of planning and image data in real-time fluoroscopy and/or digitally reconstructed radiographs (DRRs); compatibility with infrared, electromagnetic, and video-based trackers used individually or in hybrid arrangements; augmented overlay of image and planning data in endoscopic or in-room video; real-time "virtual fluoroscopy" computed from GPU-accelerated DRRs; and multi-modality image display. The platform aims to minimize offline data processing by exposing quantitative tools that analyze and communicate factors of geometric precision. The system was translated to preclinical phantom and cadaver studies for assessment of fiducial (FRE) and target registration error (TRE) showing sub-mm accuracy in targeting and video overlay within intraoperative CBCT. The work culminates in

  8. Architecture for hospital information integration

    NASA Astrophysics Data System (ADS)

    Chimiak, William J.; Janariz, Daniel L.; Martinez, Ralph

    1999-07-01

    The integration of hospital information systems (HIS) is ongoing. Data storage systems, data networks, and computers improve; databases grow; and health-care applications multiply. Some computer operating systems continue to evolve and some fade. Health-care delivery now depends on this computer-assisted environment. The result is that the critical harmonization of the various hospital information systems becomes increasingly difficult. The purpose of this paper is to present an architecture for HIS integration that is computer-language-neutral and computer-hardware-neutral for the informatics applications. The proposed architecture builds upon the work done at the University of Arizona on middleware, the work of the National Electrical Manufacturers Association, and the American College of Radiology. It is a fresh approach that allows applications engineers to access medical data easily and thus concentrate on the application techniques in which they are expert, without struggling with medical information syntaxes. The HIS can be modeled using a hierarchy of information sub-systems, thus facilitating its understanding. The architecture includes the resulting information model along with a strict but intuitive application programming interface, managed by CORBA. The CORBA requirement facilitates interoperability. It should also reduce software and hardware development times.

  9. Parallel time integration software

    SciTech Connect

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after another. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.
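
    The sketch below illustrates the parallel-in-time idea on a toy scalar ODE using a two-level Parareal-style iteration, which is closely related to two-level MGRIT; it is not this package's solver, and the propagators, step counts, and model problem are assumptions.

        # Toy parallel-in-time sketch: Parareal iteration for u' = lam*u on [0, T].
        # The list comprehension over n is the part that can run in parallel.
        import numpy as np

        lam, T, N, fine_steps = -1.0, 4.0, 32, 20
        dt = T / N

        def coarse(u, dt):                 # one backward-Euler step (cheap propagator)
            return u / (1.0 - lam * dt)

        def fine(u, dt):                   # many backward-Euler sub-steps (expensive propagator)
            h = dt / fine_steps
            for _ in range(fine_steps):
                u = u / (1.0 - lam * h)
            return u

        useq = 1.0                         # reference: sequential time marching with the fine solver
        for n in range(N):
            useq = fine(useq, dt)

        U = np.empty(N + 1); U[0] = 1.0
        for n in range(N):                 # initial guess from a sequential coarse sweep
            U[n + 1] = coarse(U[n], dt)

        for k in range(5):
            F = np.array([fine(U[n], dt) for n in range(N)])       # independent -> time-parallel
            G_old = np.array([coarse(U[n], dt) for n in range(N)])
            for n in range(N):             # cheap sequential correction sweep
                U[n + 1] = coarse(U[n], dt) + F[n] - G_old[n]
            print(f"iteration {k + 1}: |U(T) - sequential fine| = {abs(U[-1] - useq):.2e}")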

  10. A Software Architecture for Semiautonomous Robot Control

    NASA Technical Reports Server (NTRS)

    Kortenkamp, David

    2004-01-01

    A software architecture has been developed to increase the safety and effectiveness with which tasks are performed by robots that are capable of functioning autonomously but sometimes are operated under control by humans. The control system of such a robot designed according to a prior software architecture has no way of taking account of how the environment has changed or what parts of a task were performed during an interval of control by a human, so errors can occur (and, hence, safety and effectiveness are jeopardized) when the human relinquishes control. The present architecture incorporates the control, task-planning, and sensor-based-monitoring features of typical prior autonomous-robot software architectures, plus features for updating information on the environment and planning tasks during control by a human operator, in order to enable the robot to track the actions taken by the operator and to be ready to resume autonomous operation with minimal error. The present architecture also provides a user interface that presents, to the operator, a variety of information on the internal state of the robot and the status of the task.

  11. Software system architecture for corporate user support

    NASA Astrophysics Data System (ADS)

    Sukhopluyeva, V. S.; Kuznetsov, D. Y.

    2017-01-01

    In this article, several existing ready-to-use solutions for a HelpDesk are reviewed. Advantages and disadvantages of these systems are identified. The architecture of a software solution for a corporate user support system is presented in the form of use case, state, and component diagrams described using the Unified Modeling Language (UML).

  12. Domain specific software architectures: Command and control

    NASA Technical Reports Server (NTRS)

    Braun, Christine; Hatch, William; Ruegsegger, Theodore; Balzer, Bob; Feather, Martin; Goldman, Neil; Wile, Dave

    1992-01-01

    GTE is the Command and Control contractor for the Domain Specific Software Architectures program. The objective of this program is to develop and demonstrate an architecture-driven, component-based capability for the automated generation of command and control (C2) applications. Such a capability will significantly reduce the cost of C2 applications development and will lead to improved system quality and reliability through the use of proven architectures and components. A major focus of GTE's approach is the automated generation of application components in particular subdomains. Our initial work in this area has concentrated in the message handling subdomain; we have defined and prototyped an approach that can automate one of the most software-intensive parts of C2 systems development. This paper provides an overview of the GTE team's DSSA approach and then presents our work on automated support for message processing.

  13. Open architecture software platform for biomedical signal analysis.

    PubMed

    Duque, Juliano J; Silva, Luiz E V; Murta, Luiz O

    2013-01-01

    Biomedical signals are very important reporters of the physiological status of the human body. Therefore, great attention is devoted to the study of analysis methods that help extract the greatest amount of relevant information from these signals. There are several free-of-charge software packages that can process biomedical data, but they usually have a closed architecture that does not allow users to add new functionality. This paper presents a proposal for a free, open-architecture software platform for biomedical signal analysis, named JBioS. Implemented in Java, the platform offers some basic functionalities to load and display signals, and allows the integration of new software components through plugins. JBioS facilitates validation of new analysis methods and provides an environment for multi-method analysis. Plugins can be developed for preprocessing, analyzing and simulating signals. Some applications have been built using this platform, suggesting that, with these features, JBioS has potential applications in both research and clinical areas.
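
    A rough sketch of the plugin idea follows. JBioS itself is written in Java with its own plugin API, so the Python registry, decorator, and toy analysis below are purely illustrative assumptions.

        # Illustrative plugin registry for a signal-analysis platform (not the JBioS API).
        import numpy as np

        PLUGINS = {}

        def plugin(name):
            """Register an analysis component with the platform under a given name."""
            def register(cls):
                PLUGINS[name] = cls()
                return cls
            return register

        @plugin("mean_beat_interval")
        class MeanBeatInterval:
            def analyze(self, signal, fs):
                # Toy analysis: mean spacing between rising threshold crossings, in seconds.
                idx = np.where((signal[1:] > 0.5) & (signal[:-1] <= 0.5))[0]
                return float(np.mean(np.diff(idx)) / fs) if len(idx) > 1 else float("nan")

        def run_all(signal, fs):
            return {name: p.analyze(signal, fs) for name, p in PLUGINS.items()}

        fs = 100.0
        t = np.arange(0, 10, 1 / fs)
        beat_train = (np.sin(2 * np.pi * 1.2 * t) > 0.95).astype(float)   # crude 1.2 Hz pulse train
        print(run_all(beat_train, fs))                                    # ~0.83 s between beats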

  14. On-Board Software Reference Architecture for Payloads

    NASA Astrophysics Data System (ADS)

    Bos, Victor; Rugina, Ana; Trcka, Adam

    2016-08-01

    The goal of the On-board Software Reference Architecture for Payloads (OSRA-P) is to identify an architecture for payload software to harmonize the payload domain, to enable more reuse of common/generic payload software across different payloads and missions, and to ease the integration of the payloads with the platform. To investigate the payload domain, recent and current payload instruments of European space missions have been analyzed. This led to a Payload Catalogue describing 12 payload instruments as well as a Capability Matrix listing specific characteristics of each payload. In addition, a functional decomposition of payload software was prepared which contains functionalities typically found in payload systems. The definition of OSRA-P was evaluated by case studies and a dedicated OSRA-P workshop to gather feedback from the payload community.

  15. Ensemble: an Architecture for Mission-Operations Software

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Powell, Mark; Fox, Jason; Rabe, Kenneth; Shu, IHsiang; McCurdy, Michael; Vera, Alonso

    2008-01-01

    Ensemble is the name of an open architecture for, and a methodology for the development of, spacecraft mission operations software. Ensemble is also potentially applicable to the development of non-spacecraft mission-operations-type software. Ensemble capitalizes on the strengths of the open-source Eclipse software and its architecture to address several issues that have arisen repeatedly in the development of mission-operations software: Heretofore, mission-operations application programs have been developed in disparate programming environments and integrated during the final stages of development of missions. The programs have been poorly integrated, and it has been costly to develop, test, and deploy them. Users of each program have been forced to interact with several different graphical user interfaces (GUIs). Also, the strategy typically used in integrating the programs has yielded serial chains of operational software tools of such a nature that during use of a given tool, it has not been possible to gain access to the capabilities afforded by other tools. In contrast, the Ensemble approach offers a low-risk path towards tighter integration of mission-operations software tools.

  16. Software architecture of biomimetic underwater vehicle

    NASA Astrophysics Data System (ADS)

    Praczyk, Tomasz; Szymak, Piotr

    2016-05-01

    Autonomous underwater vehicles are vehicles that are entirely or partly independent of human decisions. In order to obtain operational independence, the vehicles have to be equipped with specialized software. The main task of the software is to move the vehicle along a trajectory while avoiding collisions. Moreover, the software also has to manage the different devices installed on the vehicle board, e.g. to start and stop cameras, sonars, etc. In addition to the software embedded on the vehicle board, software for managing the vehicle by the operator is also necessary. Its task is to define the mission of the vehicle, to start and stop the mission, to send emergency commands, to monitor vehicle parameters, and to control the vehicle in remotely operated mode. An important objective of the software is also to support development and tests of other software components. To this end, a simulation environment is necessary, i.e. a simulation model of the vehicle and all its key devices, a model of the sea environment, and software to visualize the behavior of the vehicle. The paper presents the architecture of the software designed for a biomimetic autonomous underwater vehicle (BAUV) that is being constructed within the framework of a scientific project financed by the Polish National Center of Research and Development.

  17. Issues in Defining Software Architectures in a GIS Environment

    NASA Technical Reports Server (NTRS)

    Acosta, Jesus; Alvorado, Lori

    1997-01-01

    The primary mission of the Pan-American Center for Earth and Environmental Studies (PACES) is to advance the research areas that are relevant to NASA's Mission to Planet Earth program. One of the activities at PACES is the establishment of a repository for geographical, geological and environmental information that covers various regions of Mexico and the southwest region of the U.S. and that is acquired from NASA and other sources through remote sensing, ground studies or paper-based maps. The center will be providing access to this information for other government entities in the U.S. and Mexico, and for research groups from universities, national laboratories and industry. Geographical Information Systems (GIS) provide the means to manage, manipulate, analyze and display the geographically referenced information that will be managed by PACES. Excellent off-the-shelf software exists for a complete GIS, as well as software for storing and managing spatial databases, processing images, networking and viewing maps with layered information. This allows the user flexibility in combining systems to create a GIS or to mix these software packages with custom-built application programs. Software architectural languages provide the ability to specify the computational components and the interactions among these components, an important topic in the domain of GIS because of the need to integrate numerous software packages. This paper discusses the characteristics that architectural languages address with respect to the issues relating to the data that must be communicated between software systems and components when systems interact. The paper presents a background on GIS in section 2. Section 3 gives an overview of software architecture and architectural languages. Section 4 suggests issues that may be of concern when defining the software architecture of a GIS. The last section discusses the future research effort and finishes with a summary.

  18. Flexible Software Architecture for Visualization and Seismic Data Analysis

    NASA Astrophysics Data System (ADS)

    Petunin, S.; Pavlov, I.; Mogilenskikh, D.; Podzyuban, D.; Arkhipov, A.; Baturuin, N.; Lisin, A.; Smith, A.; Rivers, W.; Harben, P.

    2007-12-01

    Research in the field of seismology requires software and signal processing utilities for seismogram manipulation and analysis. Seismologists and data analysts often encounter a major problem with any particular software application specific to seismic data analysis: tuning its commands, windows, and hot-key combinations to the specific waveforms so as to fit their familiar informational environment. The ability to modify the user's interface independently of the developer requires an adaptive code structure. An adaptive code structure also allows for expansion of software capabilities, such as new signal processing modules and implementation of more efficient algorithms. Our approach is to use a flexible "open" architecture for development of geophysical software. This report presents an integrated solution for organizing a logical software architecture based on the Unix version of the Geotool software implemented on the Microsoft .NET 2.0 platform. Selection of this platform greatly expands the variety and number of computers that can run the software, including laptops that can be utilized in field conditions. It also facilitates implementation of communication functions for seismic data requests from remote databases through the Internet. The main principle of the new architecture for Geotool is that scientists should be able to add new routines for digital waveform analysis via software plug-ins that utilize the basic Geotool display for GUI interaction. The use of plug-ins allows the efficient integration of diverse signal-processing software, including software still in preliminary development, into an organized platform without changing the fundamental structure of that platform itself. An analyst's use of Geotool is tracked via a metadata file so that future studies can reconstruct, and alter, the original signal processing operations. The work has been completed in the framework of a joint Russian-American project.
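
    The sketch below illustrates the metadata-trail idea mentioned above: each processing operation applied to a waveform is recorded so the analysis can be reconstructed or altered later. It is not Geotool's implementation; the class, operations, and parameters are assumptions.

        # Hypothetical provenance trail for waveform processing (not Geotool code).
        import json
        import numpy as np

        class Waveform:
            def __init__(self, samples, fs):
                self.samples = np.asarray(samples, dtype=float)
                self.fs = fs
                self.history = []                        # metadata trail of applied operations

            def apply(self, name, func, **params):
                self.samples = func(self.samples, **params)
                self.history.append({"op": name, "params": params})
                return self

        def demean(x):
            return x - x.mean()

        def moving_average(x, width):
            return np.convolve(x, np.ones(width) / width, mode="same")

        wf = Waveform(np.random.default_rng(1).normal(size=500), fs=40.0)
        wf.apply("demean", demean).apply("moving_average", moving_average, width=5)
        print(json.dumps(wf.history))   # enough to replay or modify the processing later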

  19. Educational Software Architecture and Systematic Impact: The Promise of Component Software.

    ERIC Educational Resources Information Center

    Roschelle, Jeremy; Kaput, James

    1996-01-01

    Examines the failure of current technology to meet education's flexible needs and points to a promising solution: component software architecture. Discusses the failure of stand-alone applications in their incompatibility, waste of funding, prevention of customization, and obstruction of integration. (AEF)

  1. Software control architecture for autonomous vehicles

    NASA Astrophysics Data System (ADS)

    Nelson, Michael L.; DeAnda, Juan R.; Fox, Richard K.; Meng, Xiannong

    1999-07-01

    The Strategic-Tactical-Execution Software Control Architecture (STESCA) is a tri-level approach to controlling autonomous vehicles. Using an object-oriented approach, STESCA has been developed as a generalization of the Rational Behavior Model (RBM). STESCA was initially implemented for the Phoenix Autonomous Underwater Vehicle (Naval Postgraduate School -- Monterey, CA), and is currently being implemented for the Pioneer AT land-based wheeled vehicle. The goals of STESCA are twofold. First is to create a generic framework to simplify the process of creating a software control architecture for autonomous vehicles of any type. Second is to allow mission specification by 'anyone' with minimal training to control the overall vehicle functionality. This paper describes the prototype implementation of STESCA for the Pioneer AT.

  2. The Software Architecture of Global Climate Models

    NASA Astrophysics Data System (ADS)

    Alexander, K. A.; Easterbrook, S. M.

    2011-12-01

    It has become common to compare and contrast the output of multiple global climate models (GCMs), such as in the Climate Model Intercomparison Project Phase 5 (CMIP5). However, intercomparisons of the software architecture of GCMs are almost nonexistent. In this qualitative study of seven GCMs from Canada, the United States, and Europe, we attempt to fill this gap in research. We describe the various representations of the climate system as computer programs, and account for architectural differences between models. Most GCMs now practice component-based software engineering, where Earth system components (such as the atmosphere or land surface) are present as highly encapsulated sub-models. This architecture facilitates a mix-and-match approach to climate modelling that allows for convenient sharing of model components between institutions, but it also leads to difficulty when choosing where to draw the lines between systems that are not encapsulated in the real world, such as sea ice. We also examine different styles of couplers in GCMs, which manage interaction and data flow between components. Finally, we pay particular attention to the varying levels of complexity in GCMs, both between and within models. Many GCMs have some components that are significantly more complex than others, a phenomenon which can be explained by the respective institution's research goals as well as the origin of the model components. In conclusion, although some features of software architecture have been adopted by every GCM we examined, other features show a wide range of different design choices and strategies. These architectural differences may provide new insights into variability and spread between models.
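
    A toy sketch of the component/coupler structure discussed above follows. The components, exchanged fields, and update rules are invented; real GCM couplers additionally handle regridding, time interpolation, and flux conservation.

        # Invented two-component "climate model" with a minimal coupler loop.
        class Atmosphere:
            def __init__(self):
                self.surface_temp = 288.0                                  # K, imported from ocean
            def step(self):
                self.precip_flux = 0.01 * (self.surface_temp - 273.15)     # exported field
            def exports(self):
                return {"precip_flux": self.precip_flux}
            def imports(self, fields):
                self.surface_temp = fields["sst"]

        class Ocean:
            def __init__(self):
                self.precip = 0.0                                          # imported from atmosphere
            def step(self):
                self.sst = 288.0 + 0.1 * self.precip                       # exported field
            def exports(self):
                return {"sst": self.sst}
            def imports(self, fields):
                self.precip = fields["precip_flux"]

        def couple(components, n_steps):
            for _ in range(n_steps):
                for c in components:
                    c.step()
                exchanged = {k: v for c in components for k, v in c.exports().items()}
                for c in components:
                    c.imports(exchanged)       # the coupler hands each component what it imports

        atm, ocn = Atmosphere(), Ocean()
        couple([atm, ocn], n_steps=3)
        print(atm.surface_temp, ocn.precip)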

  3. Power, Avionics and Software Communication Network Architecture

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Sands, Obed S.; Bakula, Casey J.; Oldham, Daniel R.; Wright, Ted; Bradish, Martin A.; Klebau, Joseph M.

    2014-01-01

    This document describes the communication architecture for the Power, Avionics and Software (PAS) 2.0 subsystem for the Advanced Extravehicular Mobile Unit (AEMU). The following systems are described in detail: Caution Warning and Control System, Informatics, Storage, Video, Audio, Communication, and Monitoring Test and Validation. This document also provides some background as well as the purpose and goals of the PAS project at Glenn Research Center (GRC).

  4. A software architecture for autonomous orbital robotics

    NASA Astrophysics Data System (ADS)

    Henshaw, Carl G.; Akins, Keith; Creamer, N. Glenn; Faria, Matthew; Flagg, Cris; Hayden, Matthew; Healy, Liam; Hrolenok, Brian; Johnson, Jeffrey; Lyons, Kimberly; Pipitone, Frank; Tasker, Fred

    2006-05-01

    SUMO, the Spacecraft for the Universal Modification of Orbits, is a DARPA-sponsored spacecraft designed to provide orbital repositioning services to geosynchronous satellites. Such services may be needed to facilitate changing the geostationary slot of a satellite, to allow a satellite to be used until the propellant is expended instead of reserving propellant for a retirement burn, or to rescue a satellite stranded in geosynchronous transfer orbit due to a launch failure. Notably, SUMO is being designed to be compatible with the current geosynchronous satellite catalog, which implies that it does not require the customer spacecraft to have special docking fixtures, optical guides, or cooperative communications or pose sensors. In addition, the final approach and grapple will be performed autonomously. SUMO is being designed and built by the Naval Center for Space Technology, a division of the U.S. Naval Research Laboratory in Washington, DC. The nature of the SUMO concept mission leads to significant challenges in onboard spacecraft autonomy. Also, because research and development in machine vision, trajectory planning, and automation algorithms for SUMO is being pursued in parallel with flight software development, there are considerable challenges in prototyping and testing algorithms in situ and in transitioning these algorithms from laboratory form into software suitable for flight. This paper discusses these challenges, outlining the current SUMO design from the standpoint of flight algorithms and software. In particular, the design of the SUMO phase 1 laboratory demonstration software is described in detail. The proposed flight-like software architecture is also described.

  5. A Software Architecture for High Level Applications

    SciTech Connect

    Shen,G.

    2009-05-04

    A modular software platform for high level applications is under development at the National Synchrotron Light Source II project. This platform is based on client-server architecture, and the components of high level applications on this platform will be modular and distributed, and therefore reusable. An online model server is indispensable for model based control. Different accelerator facilities have different requirements for the online simulation. To supply various accelerator simulators, a set of narrow and general application programming interfaces is developed based on Tracy-3 and Elegant. This paper describes the system architecture for the modular high level applications, the design of narrow and general application programming interface for an online model server, and the prototype of online model server.

  6. A Software Architecture for Network Communication

    DTIC Science & Technology

    1987-11-30

    A Software Architecture for Network Communication, Technical Report, S. L. Graham et al., Productivity Engineering in the UNIX Environment, contract N00039-84-C-0089, August 7, 1984 - August 6, 1987, ARPA Order No. 4871. UNIX is a trademark of AT&T Bell Laboratories. References include: D. P. Anderson, D. Ferrari, P. V. Rangan and S. Tzou, "The DASH Project: Issues in..."

  7. Software Architecture for Shared Information Systems

    DTIC Science & Technology

    1993-03-01

    issues motivate companies to invest in systems integration (CSTB 1992, pp. 16-21): for many organizations, experiences with information technology... The essential enabling technologies are of several kinds (CSTB 1992, Nilsson et al. 1990) [CMU/SEI-93-TR-3]: architecture (system organization)... Cases in which the export schema and import schema are distinct are suppressed at this level of abstraction; these communication questions should be addressed in an expansion

  8. Integrating Software Modules For Robot Control

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A.; Khosla, Pradeep; Stewart, David B.

    1993-01-01

    Reconfigurable, sensor-based control system uses state variables in systematic integration of reusable control modules. Designed for open-architecture hardware including many general-purpose microprocessors, each having its own local memory plus access to global shared memory. Implemented in software as extension of Chimera II real-time operating system. Provides transparent computing mechanism for intertask communication between control modules and generic process-module architecture for multiprocessor real-time computation. Used to control robot arm. Proves useful in variety of other control and robotic applications.
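
    The sketch below illustrates the state-variable-table style of module integration described above; it is not the Chimera II implementation, and the module names, variables, and gains are assumptions.

        # Control modules that communicate only through a shared state variable table.
        state = {"q_desired": 0.8, "q_measured": 0.0, "torque": 0.0}   # global table (invented names)

        class Module:
            invars, outvars = (), ()
            def cycle(self, invals):         # each module is a function of its declared inputs
                raise NotImplementedError

        class PDController(Module):
            invars, outvars = ("q_desired", "q_measured"), ("torque",)
            def cycle(self, invals):
                return {"torque": 5.0 * (invals["q_desired"] - invals["q_measured"])}

        class JointSimulator(Module):
            invars, outvars = ("q_measured", "torque"), ("q_measured",)
            def cycle(self, invals):
                return {"q_measured": invals["q_measured"] + 0.01 * invals["torque"]}

        def run(modules, table, ticks):
            for _ in range(ticks):
                for m in modules:                                    # fixed-rate loop over modules
                    outs = m.cycle({k: table[k] for k in m.invars})
                    table.update({k: outs[k] for k in m.outvars})    # write back through the table only
            return table

        print(run([PDController(), JointSimulator()], state, ticks=50))   # q_measured approaches q_desired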

  10. Playing Detective: Reconstructing Software Architecture from Available Evidence

    DTIC Science & Technology

    1997-10-01

    Kazman, Rick; Abowd, Gregory; Bass, Len; & Webb, M. "SAAM: A Method for Analyzing the Properties of Software Architectures," 81-90, 1994. Kazman, Rick; Abowd, Gregory; Bass, Len; & Clements, Paul. "Scenario-Based Analysis of Software Architecture," IEEE Software 13, 6 (November 1996).

  11. Software Defined Radio Architecture Contributions to Next Generation Space Communications

    NASA Technical Reports Server (NTRS)

    Kacpura, Thomas J.; Eddy, Wesley M.; Smith, Carl R.; Liebetreu, John

    2015-01-01

    systems, as well as those communications and navigation systems operated by international space agencies and civilian and government agencies. In this paper, we review the philosophies, technologies, architectural attributes, mission services, and communications capabilities that form the structure of candidate next-generation integrated communication architectures for space communications and navigation. A key area that this paper explores is the development and operation of the software defined radio for the NASA Space Communications and Navigation (SCaN) Testbed currently on the International Space Station (ISS). Evaluating the lessons learned from development and operation feeds back into the communications architecture. Leveraging the reconfigurability changes the way that operations are done, and this must be considered. Quantifying the impact on the NASA Space Telecommunications Radio System (STRS) software defined radio architecture provides feedback to keep the standard useful and up to date. NASA is not the only customer of these radios. Software defined radios are developed for other applications, and taking advantage of these developments promotes an architecture that is cost effective and sustainable. Developments in areas such as an updated operating environment, higher data rates, networking, and security can be leveraged. The ability to sustain an architecture that uses radios for multiple markets can lower costs and keep new technology infused.

  12. Integrating Technology with Architectural Needs

    ERIC Educational Resources Information Center

    Elmasry, Sarah

    2009-01-01

    Researchers at the Center of High Performance Learning Technologies (CHPLE), Virginia Tech, conducted a study investigating issues related to integration of learning technologies with architectural systems in contemporary learning environments. The study is qualitative in nature, and focuses on integration patterns of learning technologies with…

  13. COG Software Architecture Design Description Document

    SciTech Connect

    Buck, R M; Lent, E M

    2009-09-21

    This COG Software Architecture Design Description Document describes the organization and functionality of the COG Multiparticle Monte Carlo Transport Code for radiation shielding and criticality calculations, at a level of detail suitable for guiding a new code developer in the maintenance and enhancement of COG. The intended audience also includes managers and scientists and engineers who wish to have a general knowledge of how the code works. This Document is not intended for end-users. This document covers the software implemented in the standard COG Version 10, as released through RSICC and IAEA. Software resources provided by other institutions will not be covered. This document presents the routines grouped by modules and in the order of the three processing phases. Some routines are used in multiple phases. The routine description is presented once - the first time the routine is referenced. Since this is presented at the level of detail for guiding a new code developer, only the routines invoked by another routine that are significant for the processing phase that is being detailed are presented. An index to all routines detailed is included. Tables for the primary data structures are also presented.

  14. Formalization and visualization of domain-specific software architectures

    NASA Technical Reports Server (NTRS)

    Bailor, Paul D.; Luginbuhl, David R.; Robinson, John S.

    1992-01-01

    This paper describes a domain-specific software design system based on the concepts of software architectures engineering and domain-specific models and languages. In this system, software architectures are used as high level abstractions to formulate a domain-specific software design. The software architecture serves as a framework for composing architectural fragments (e.g., domain objects, system components, and hardware interfaces) that make up the knowledge (or model) base for solving a problem in a particular application area. A corresponding software design is generated by analyzing and describing a system in the context of the software architecture. While the software architecture serves as the framework for the design, this concept is insufficient by itself for supplying the additional details required for a specific design. Additional domain knowledge is still needed to instantiate components of the architecture and develop optimized algorithms for the problem domain. One possible way to obtain the additional details is through the use of domain-specific languages. Thus, the general concept of a software architecture and the specific design details provided by domain-specific languages are combined to create what can be termed a domain-specific software architecture (DSSA).

  15. The software architecture to control the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Oya, I.; Füßling, M.; Antonino, P. O.; Conforti, V.; Hagge, L.; Melkumyan, D.; Morgenstern, A.; Tosti, G.; Schwanke, U.; Schwarz, J.; Wegner, P.; Colomé, J.; Lyard, E.

    2016-07-01

    trace requirements to deliverables (source code, documentation, etc.), and permits the implementation of a flexible use-case driven software development approach thanks to the traceability from use cases to the logical software elements. The Alma Common Software (ACS) container/component framework, used for the control of the Atacama Large Millimeter/submillimeter Array (ALMA) is the basis for the ACTL software and as such it is considered as an integral part of the software architecture.

  16. A Multiprocessor-Based Sensor Fusion Software Architecture

    NASA Astrophysics Data System (ADS)

    Moxon, Bruce C.

    1988-03-01

    The ability to reason with information from a variety of sources is critical to the development of intelligent autonomous systems. Multisensor integration, or sensor fusion, is an area of research that attempts to provide a computational framework in which such perceptual reasoning can quickly and effectively be applied, enabling autonomous systems to function in unstructured, unconstrained environments. In this paper, the fundamental characteristics of the sensor fusion problem are explored. An hierarchical sensor fusion software architecture is presented as a computational framework in which information from complementary sensors is effectively combined. The concept of a sensor fusion pyramid is introduced, along with three unique computational abstractions: virtual sensors, virtual effectors, and focus of attention processing. The computing requirements of this sensor fusion architecture are investigated, and the blackboard system model is proposed as a computational methodology on which to build a sensor fusion software architecture. Finally, the Butterfly Parallel Processor is presented as a computer architecture that provides the computational capabilities required to support these intelligent systems applications.
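
    A small sketch of the blackboard and virtual-sensor ideas follows; the sensor names, variances, and inverse-variance fusion rule are illustrative assumptions rather than the architecture described in the paper.

        # Invented example: virtual sensors post (estimate, variance) pairs to a
        # blackboard, and a fusion step combines whatever is currently posted.
        import numpy as np

        class Blackboard:
            def __init__(self):
                self.entries = {}                  # shared workspace
            def post(self, key, value):
                self.entries[key] = value

        class VirtualSensor:
            """Wraps a physical reading and posts a (value, variance) estimate."""
            def __init__(self, name, variance):
                self.name, self.variance = name, variance
            def observe(self, true_range, bb, rng):
                estimate = true_range + rng.normal(0.0, np.sqrt(self.variance))
                bb.post(self.name, (estimate, self.variance))

        def fuse(bb, keys):
            # Inverse-variance weighting of the estimates currently on the blackboard.
            vals = [bb.entries[k] for k in keys]
            w = np.array([1.0 / v for _, v in vals])
            x = np.array([e for e, _ in vals])
            return float(np.sum(w * x) / np.sum(w))

        rng, bb = np.random.default_rng(2), Blackboard()
        for sensor in (VirtualSensor("sonar_range", 0.25), VirtualSensor("stereo_range", 0.04)):
            sensor.observe(true_range=3.0, bb=bb, rng=rng)
        print(f"fused range: {fuse(bb, ['sonar_range', 'stereo_range']):.2f} m")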

  17. A Software Architecture for Intelligent Synthesis Environments

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Norvig, Peter (Technical Monitor)

    2001-01-01

    NASA's Intelligent Synthesis Environment (ISE) program is a grand attempt to develop a system to transform the way complex artifacts are engineered. This paper discusses a "middleware" architecture for enabling the development of ISE. Desirable elements of such an Intelligent Synthesis Architecture (ISA) include remote invocation; plug-and-play applications; scripting of applications; management of design artifacts, tools, and artifact and tool attributes; common system services; system management; and systematic enforcement of policies. This paper argues that the ISA should extend conventional distributed object technology (DOT) such as CORBA and Product Data Managers with flexible repositories of product and tool annotations and "plug-and-play" mechanisms for inserting "ility" or orthogonal concerns into the system. I describe the Object Infrastructure Framework, an Aspect Oriented Programming (AOP) environment for developing distributed systems that provides utility insertion and enables consistent annotation maintenance. This technology can be used to enforce policies such as maintaining the annotations of artifacts, particularly the provenance and access control rules of artifacts; performing automatic datatype transformations between representations; supplying alternative servers of the same service; reporting on the status of jobs and the system; conveying privileges throughout an application; supporting long-lived transactions; maintaining version consistency; and providing software redundancy and mobility.
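
    As a rough analogy to the "ility" insertion described above, the sketch below wraps a service with access-control and provenance-annotation policies using a Python decorator. The Object Infrastructure Framework applies such wrappers to distributed objects; the names and policies here are assumptions.

        # Invented example: orthogonal concerns (access control, provenance) added
        # around a service without modifying the service itself.
        import functools, time

        def with_policies(required_role):
            def wrap(service):
                @functools.wraps(service)
                def invoke(caller, *args, **kwargs):
                    if caller.get("role") != required_role:                    # access-control concern
                        raise PermissionError(f"{caller.get('name')} may not call {service.__name__}")
                    result = service(*args, **kwargs)
                    annotation = {"artifact": service.__name__,                # provenance concern
                                  "by": caller.get("name"), "at": time.time()}
                    return result, annotation
                return invoke
            return wrap

        @with_policies(required_role="designer")
        def update_design_artifact(artifact_id, change):
            return f"artifact {artifact_id} updated with {change}"

        print(update_design_artifact({"name": "ada", "role": "designer"}, "wing-007", "thicker spar"))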

  19. A Tool for Managing Software Architecture Knowledge

    SciTech Connect

    Babar, Muhammad A.; Gorton, Ian

    2007-08-01

    This paper describes a tool for managing architectural knowledge and rationale. The tool has been developed to support a framework for capturing and using architectural knowledge to improve the architecture process. This paper describes the main architectural components and features of the tool. The paper also provides examples of using the tool to support well-known architecture design and analysis methods.

  20. Software architecture of INO340 telescope control system

    NASA Astrophysics Data System (ADS)

    Ravanmehr, Reza; Khosroshahi, Habib

    2016-08-01

    The software architecture plays an important role in the distributed control system of astronomical projects because many subsystems and components must work together in a consistent and reliable way. We have utilized a customized architecture design approach based on the "4+1 view model" in order to design the INOCS software architecture. In this paper, after reviewing the top-level INOCS architecture, we present the software architecture model of INOCS inspired by the "4+1 model". For this purpose, we provide logical, process, development, physical, and scenario views of our architecture using different UML diagrams and other illustrative visual charts. Each view presents the INOCS software architecture from a different perspective. We finish the paper with the science data operation of INO340 and concluding remarks.

  1. LSST active optics system software architecture

    NASA Astrophysics Data System (ADS)

    Thomas, Sandrine J.; Chandrasekharan, Srinivasan; Lotz, Paul; Xin, Bo; Claver, Charles; Angeli, George; Sebag, Jacques; Dubois-Felsmann, Gregory P.

    2016-08-01

    The Large Synoptic Survey Telescope (LSST) is an 8-meter class wide-field telescope now under construction on Cerro Pachon, near La Serena, Chile. This ground-based telescope is designed to conduct a decade-long time domain survey of the optical sky. In order to achieve the LSST scientific goals, the telescope is required to deliver seeing-limited image quality over the 3.5-degree field of view. Like many telescopes, LSST will use an Active Optics System (AOS) to correct in near real-time the system aberrations primarily introduced by gravity and temperature gradients. The LSST AOS uses a combination of 4 curvature wavefront sensors (CWS) located on the outside of the LSST field of view. The information coming from the 4 CWS is combined to calculate the appropriate corrections to be sent to the 3 different mirrors composing LSST. The AOS software incorporates a wavefront sensor estimation pipeline (WEP) and an active optics control system (AOCS). The WEP estimates the wavefront residual error from the CWS images. The AOCS determines the correction to be sent to the different degrees of freedom every 30 seconds. In this paper, we describe the design and implementation of the AOS. More particularly, we will focus on the software architecture as well as the AOS interactions with the various subsystems within LSST.
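
    The sketch below schematically mimics the WEP-to-AOCS loop described above, estimating a wavefront residual and applying a gain-scaled least-squares correction each cycle. The dimensions, sensitivity matrix, noise, and gain are assumptions, not the LSST pipeline.

        # Invented closed-loop sketch of wavefront estimation followed by correction.
        import numpy as np

        rng = np.random.default_rng(3)
        n_modes, n_dof = 6, 4                    # wavefront modes, controllable degrees of freedom
        A = rng.normal(size=(n_modes, n_dof))    # sensitivity: d(wavefront modes)/d(correction)
        gain = 0.5

        def wep(state, A):
            """Wavefront-estimation stand-in: residual modes plus measurement noise."""
            return A @ state + rng.normal(0.0, 0.01, size=n_modes)

        def aocs(residual, A, gain):
            """Gain-scaled least-squares correction for the controllable degrees of freedom."""
            return -gain * np.linalg.lstsq(A, residual, rcond=None)[0]

        state = rng.normal(0.0, 1.0, size=n_dof)     # initial misalignment (gravity, temperature)
        for cycle in range(8):                       # one correction per ~30 s visit
            residual = wep(state, A)
            state = state + aocs(residual, A, gain)
            print(f"cycle {cycle}: wavefront RMS = {np.linalg.norm(A @ state):.3f}")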

  2. Safety-Critical Partitioned Software Architecture: A Partitioned Software Architecture for Robotic

    NASA Technical Reports Server (NTRS)

    Horvath, Greg; Chung, Seung H.; Cilloniz-Bicchi, Ferner

    2011-01-01

    The flight software on virtually every mission currently managed by JPL has several major flaws that make it vulnerable to potentially fatal software defects. Many of these problems can be addressed by recently developed partitioned operating systems (OS). JPL has avoided adopting a partitioned operating system on its flight missions, primarily because doing so would require significant changes in flight software design, and the risks associated with changes of that magnitude cannot be accepted by an active flight project. The choice of a partitioned OS can have a dramatic effect on the overall system and software architecture, allowing for realization of benefits far beyond the concerns typically associated with the choice of OS. Specifically, we believe that a partitioned operating system, when coupled with an appropriate architecture, can provide a strong infrastructure for developing systems for which reusability, modifiability, testability, and reliability are essential qualities. By adopting a partitioned OS, projects can gain benefits throughout the entire development lifecycle, from requirements and design, all the way to implementation, testing, and operations.

  4. Achieving Product Qualities Through Software Architecture Practices

    DTIC Science & Technology

    2016-06-14

    Generate quality attribute utility tree; 6. Analyze architectural approaches; 7. Brainstorm and prioritize scenarios; 8. Analyze architectural approaches; 9. Present results. An example utility tree refines utility into performance, modifiability, availability, and security. (Carnegie Mellon University, 2004)

  5. High Performance Orbital Propagation Using a Generic Software Architecture

    NASA Astrophysics Data System (ADS)

    Möckel, M.; Bennett, J.; Stoll, E.; Zhang, K.

    2016-09-01

    Orbital propagation is a key element in many fields of space research. Over the decades, scientists have developed numerous orbit propagation algorithms, often tailored to specific use cases that vary in available input data, desired output as well as demands of execution speed and accuracy. Conjunction assessments, for example, require highly accurate propagations of a relatively small number of objects while statistical analyses of the (untracked) space debris population need a propagator that can process large numbers of objects in a short time with only medium accuracy. Especially in the latter case, a significant increase of computation speed can be achieved by using graphics processors, devices that are designed to process hundreds or thousands of calculations in parallel. In this paper, an analytical propagator is introduced that uses graphics processing to reduce the run time for propagating a large space debris population from several hours to minutes with only a minor loss of accuracy. A second performance analysis is conducted on a parallelised version of the popular SGP4 algorithm. It is discussed how these modifications can be applied to more accurate numerical propagators. Both programs are implemented using a generic, plugin-based software architecture designed for straightforward integration of propagators into other software tools. It is shown how this architecture can be used to easily integrate, compare and combine different orbital propagators, both CPU and GPU-based.
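
    A sketch of a plugin-style propagator interface and data-parallel propagation over a large catalogue follows. The toy mean-motion model stands in for SGP4 or a numerical propagator, and the registry, names, and catalogue are assumptions rather than the paper's GPU implementation.

        # Invented plugin registry plus vectorised propagation of a large object catalogue.
        import numpy as np

        MU = 398600.4418            # km^3/s^2, Earth's gravitational parameter
        PROPAGATORS = {}

        def propagator(name):
            def register(cls):
                PROPAGATORS[name] = cls
                return cls
            return register

        @propagator("analytic_twobody")
        class AnalyticTwoBody:
            """Advances mean anomaly for every object at once (data-parallel over the catalogue)."""
            def propagate(self, sma_km, mean_anomaly_rad, dt_s):
                n = np.sqrt(MU / sma_km ** 3)                  # mean motion, rad/s
                return (mean_anomaly_rad + n * dt_s) % (2 * np.pi)

        # A host tool only needs the registry and the common propagate() interface.
        rng = np.random.default_rng(4)
        catalogue_sma = rng.uniform(6800.0, 42164.0, size=100_000)
        catalogue_M = rng.uniform(0.0, 2 * np.pi, size=100_000)
        prop = PROPAGATORS["analytic_twobody"]()
        new_M = prop.propagate(catalogue_sma, catalogue_M, dt_s=86400.0)
        print(new_M[:3])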

  6. Software attribute visualization for high integrity software

    SciTech Connect

    Pollock, G.M.

    1998-03-01

    This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification.
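
    The sketch below illustrates the general idea of monitoring an executing program against prespecified constraints. The real tool uses the SAGE specification language and adds visualization; the constraints here are ordinary Python predicates and entirely assumed.

        # Invented runtime monitor: check program state against named constraints each step.
        class Monitor:
            def __init__(self, constraints):
                self.constraints = constraints      # name -> predicate over program state
                self.violations = []

            def check(self, step, state):
                for name, pred in self.constraints.items():
                    if not pred(state):
                        self.violations.append((step, name, dict(state)))

        constraints = {
            "tank_level_nonnegative": lambda s: s["level"] >= 0.0,
            "valve_closed_when_empty": lambda s: s["level"] > 0.0 or not s["valve_open"],
        }

        monitor = Monitor(constraints)
        state = {"level": 2.0, "valve_open": True}
        for step in range(5):                       # stand-in for the executing program under observation
            state["level"] -= 1.0
            state["valve_open"] = state["level"] > -1.0
            monitor.check(step, state)
        print(monitor.violations)                   # each entry records when and which constraint failed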

  7. NPOESS Interface Data Processing Segment Architecture and Software

    NASA Astrophysics Data System (ADS)

    Turek, S.; Souza, K. G.; Fox, C. A.; Grant, K. D.

    2004-12-01

    The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system, the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS is an estimated $6.5 billion program replacing the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD. The NPOESS satellites carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground data processing segment for NPOESS is the Interface Data Processing Segment (IDPS). The IDPS processes NPOESS satellite data to provide weather, oceanographic, and environmental data products to NOAA and DoD processing centers and field terminals operated by the United States government. This paper describes Raytheon's high performance computer and software architecture for the NPOESS IDPS. NOAA, the DoD, and NASA selected this architecture after a 2.5-year Program Definition and Risk Reduction (PDRR) competition. The PDRR phase concluded in August of 2002, and has been followed by the NPOESS Preparatory Project (NPP) phase. The NPP satellite, scheduled to launch in late 2006, will provide risk reduction for the future NPOESS satellites, and will enable data continuity between the current EOS missions and NPOESS. Efforts within the PDRR and NPP phases consist of: requirements definition and flowdown from system to segment to subsystem, Object-Oriented (OO) software design, software code development, science to operational code conversion, integration and qualification testing. The NPOESS phase, which supports a constellation of three satellites, will also consist of this same lifecycle during the 2005 through 2009 timeframe, with operations and support

  8. Software development strategies for parallel computer architectures

    NASA Astrophysics Data System (ADS)

    Gruber, Ralf; Cooper, W. Anthony; Beniston, Martin; Gengler, Marc; Merazzi, Silvio

    1991-09-01

    As pragmatic users of high performance supercomputers, we believe that parallel computer architectures with distributed memories are not yet mature enough to be used by a wide range of application engineers. A big effort should be made to bring these very promising computers closer to the users. One major flaw of massively parallel machines is that the programmer has to take care of the data flow himself, which is often different on different parallel computers. To overcome this problem, we propose that data structures be standardized. The database then can become an integrated part of the system, and the data flow for a given algorithm can be easily prescribed. Fixing data structures forces the computer manufacturer to adapt his machine to users' demands and not, as happens now, the user to adapt to the innovative computer science approach of the computer manufacturer. In this paper, we present the data standards chosen for our ASTRID programming platform for research scientists and engineers, as well as a plasma physics application which won the Cray Gigaflop Performance Awards in 1989 and 1990 and which was successfully ported to an Intel iPSC/2 hypercube.

  9. The NASA Integrated Information Technology Architecture

    NASA Technical Reports Server (NTRS)

    Baldridge, Tim

    1997-01-01

    This document defines an Information Technology Architecture for the National Aeronautics and Space Administration (NASA), where Information Technology (IT) refers to the hardware, software, standards, protocols and processes that enable the creation, manipulation, storage, organization and sharing of information. An architecture provides an itemization and definition of these IT structures, a view of the relationship of the structures to each other and, most importantly, an accessible view of the whole. It is a fundamental assumption of this document that a useful, interoperable and affordable IT environment is key to the execution of the core NASA scientific and project competencies and business practices. This Architecture represents the highest level system design and guideline for NASA IT related activities and has been created on the authority of the NASA Chief Information Officer (CIO) and will be maintained under the auspices of that office. It addresses all aspects of general purpose, research, administrative and scientific computing and networking throughout the NASA Agency and is applicable to all NASA administrative offices, projects, field centers and remote sites. Through the establishment of five Objectives and six Principles this Architecture provides a blueprint for all NASA IT service providers: civil service, contractor and outsourcer. The most significant of the Objectives and Principles are the commitment to customer-driven IT implementations and the commitment to a simpler, cost-efficient, standards-based, modular IT infrastructure. In order to ensure that the Architecture is presented and defined in the context of the mission, project and business goals of NASA, this Architecture consists of four layers in which each subsequent layer builds on the previous layer. They are: 1) the Business Architecture: the operational functions of the business, or Enterprise, 2) the Systems Architecture: the specific Enterprise activities within the context

  11. Verifying Architectural Design Rules of the Flight Software Product Line

    NASA Technical Reports Server (NTRS)

    Ganesan, Dharmalingam; Lindvall, Mikael; Ackermann, Chris; McComas, David; Bartholomew, Maureen

    2009-01-01

    This paper presents experiences of verifying architectural design rules of the NASA Core Flight Software (CFS) product line implementation. The goal of the verification is to check whether the implementation is consistent with the CFS architectural rules derived from the developer's guide. The results indicate that consistency checking helps (a) identify architecturally significant deviations that eluded code reviews, (b) clarify the design rules to the team, and (c) assess the overall implementation quality. Furthermore, it helps connect business goals to architectural principles and to the implementation. This paper is the first step in the definition of a method for analyzing and evaluating product line implementations from an architecture-centric perspective.
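
    The sketch below shows one simple kind of consistency check in this spirit: verifying that observed module dependencies respect declared layering rules. The layers, modules, and dependencies are invented and are not the actual CFS rules.

        # Invented layering rule check over an extracted dependency list.
        ALLOWED = {                       # which layers each layer may depend on
            "app": {"app", "service", "os_abstraction"},
            "service": {"service", "os_abstraction"},
            "os_abstraction": {"os_abstraction"},
        }
        LAYER_OF = {"sched_app": "app", "event_service": "service",
                    "table_service": "service", "osal": "os_abstraction"}

        observed_dependencies = [         # e.g. recovered from include or call graphs
            ("sched_app", "event_service"),
            ("event_service", "osal"),
            ("osal", "table_service"),    # violates the layering rule
        ]

        def check(deps):
            violations = []
            for src, dst in deps:
                if LAYER_OF[dst] not in ALLOWED[LAYER_OF[src]]:
                    violations.append(f"{src} ({LAYER_OF[src]}) must not depend on {dst} ({LAYER_OF[dst]})")
            return violations

        for v in check(observed_dependencies):
            print("RULE VIOLATION:", v)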

  12. Architecture independent environment for developing engineering software on MIMD computers

    NASA Technical Reports Server (NTRS)

    Valimohamed, Karim A.; Lopez, L. A.

    1990-01-01

    Engineers are constantly faced with solving problems of increasing complexity and detail. Multiple Instruction stream Multiple Data stream (MIMD) computers have been developed to overcome the performance limitations of serial computers. The hardware architectures of MIMD computers vary considerably and are much more sophisticated than serial computers. Developing large scale software for a variety of MIMD computers is difficult and expensive. There is a need to provide tools that facilitate programming these machines. First, the issues that must be considered to develop those tools are examined. The two main areas of concern were architecture independence and data management. Architecture independent software facilitates software portability and improves the longevity and utility of the software product. It provides some form of insurance for the investment of time and effort that goes into developing the software. The management of data is a crucial aspect of solving large engineering problems. It must be considered in light of the new hardware organizations that are available. Second, the functional design and implementation of a software environment that facilitates developing architecture independent software for large engineering applications are described. The topics of discussion include: a description of the model that supports the development of architecture independent software; identifying and exploiting concurrency within the application program; data coherence; engineering data base and memory management.

  13. Software architecture and engineering for patient records: current and future.

    PubMed

    Weng, Chunhua; Levine, Betty A; Mun, Seong K

    2009-05-01

    During the "The National Forum on the Future of the Defense Health Information System," a track focusing on "Systems Architecture and Software Engineering" included eight presenters. These presenters identified three key areas of interest in this field, which include the need for open enterprise architecture and a federated database design, net centrality based on service-oriented architecture, and the need for focus on software usability and reusability. The eight panelists provided recommendations related to the suitability of service-oriented architecture and the enabling technologies of grid computing and Web 2.0 for building health services research centers and federated data warehouses to facilitate large-scale collaborative health care and research. Finally, they discussed the need to leverage industry best practices for software engineering to facilitate rapid software development, testing, and deployment.

  14. Telemetry Modernization with Open Architecture Software-Defined Radio Technology

    DTIC Science & Technology

    2016-01-01

    Telemetry—the automated measurement and transmission of data from remote sources to receiving stations—plays ... Marshall Islands, Lincoln Laboratory is modernizing the test range’s telemetry systems with a software-defined, radio-based open architecture. ... (Lincoln Laboratory Tech Notes, January 2016, www.ll.mit.edu)

  15. Evaluation of Software Dependability at the Architecture Definition Stage

    DTIC Science & Technology

    2010-06-01

    ... 2005]. [Babar et al. 2004] proposes a framework for their comparison and assessment. Here are some examples of architecture-oriented approaches ... 11-33, Jan-Mar 2004. Babar, M., Zhu, L., and Jeffrey, R., "A Framework for Classifying and Comparing Software Architecture Evaluation Methods," Proc. of ...

  16. An Architecture-Centric Approach for Acquiring Software-Reliant Systems

    DTIC Science & Technology

    2011-04-30

    documenting and communicating the software architecture, analyzing or evaluating the software architecture ... ADD) and Architecture Expert tool (ArchE); documenting and communicating the architecture (Views and Beyond Approach); Architecture and Analysis ... ensuring use of effective architecture practices; Architecture Competence Assessment; Creating the Business Case for the System; Justification for a ...

  17. Trends and New Directions in Software Architecture

    DTIC Science & Technology

    2014-10-10

    [Presentation excerpt: charts titled "Effort in Percent over Cycles" (2014); lifecycle-phase legend: Reqts: Requirements; HLD/Arch: High-level Design / Architecture; DLD: Detailed Design (UML); Code: Coding (no detailed design); Test: Testing.]

  18. New architectures support for ALMA common software: lessons learned

    NASA Astrophysics Data System (ADS)

    Menay, Camilo E.; Zamora, Gabriel A.; Tobar, Rodrigo J.; Avarias, Jorge A.; Dahl-skog, Kevin R.; von Brand, Horst H.; Chiozzi, Gianluca

    2010-07-01

    ALMA Common Software (ACS) is a distributed control framework based on CORBA that provides communication between distributed pieces of software. Because of its size and complexity, it provides its own compilation system, a mix of several technologies. The current ACS compilation process depends on specific tools, compilers, code generation, and a strict dependency model induced by the large number of software components. This document presents a summary of several porting and compatibility attempts at using ACS on platforms other than the officially supported one. Ports of ACS to the Microsoft Windows platform and to the ARM processor architecture were attempted, with varying degrees of success. In addition, support for LINUX-PREEMPT (a set of real-time patches for the Linux kernel) was implemented using a new design for real-time services. Some of these efforts were integrated into the ACS build and compilation system, while others were incorporated into its design. Lessons learned in this process are presented, and a general approach is extracted from them.

  19. Software Productivity of Field Experiments Using the Mobile Agents Open Architecture with Workflow Interoperability

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Lowry, Michael R.; Nado, Robert Allen; Sierhuis, Maarten

    2011-01-01

    We analyzed a series of ten systematically developed surface exploration systems that integrated a variety of hardware and software components. Design, development, and testing data suggest that incremental buildup of an exploration system for long-duration capabilities is facilitated by an open architecture with appropriate-level APIs, specifically designed to facilitate integration of new components. This improves software productivity by reducing changes required for reconfiguring an existing system.

  20. NASA Data Acquisitions System (NDAS) Software Architecture

    NASA Technical Reports Server (NTRS)

    Davis, Dawn; Duncan, Michael; Franzl, Richard; Holladay, Wendy; Marshall, Peggi; Morris, Jon; Turowski, Mark

    2012-01-01

    The NDAS Software Project is for the development of common low speed data acquisition system software to support NASA's rocket propulsion testing facilities at John C. Stennis Space Center (SSC), White Sands Test Facility (WSTF), Plum Brook Station (PBS), and Marshall Space Flight Center (MSFC).

  1. Architecture for interoperable software in biology.

    PubMed

    Bare, James Christopher; Baliga, Nitin S

    2014-07-01

    Understanding biological complexity demands a combination of high-throughput data and interdisciplinary skills. One way to bring to bear the necessary combination of data types and expertise is by encapsulating domain knowledge in software and composing that software to create a customized data analysis environment. To this end, simple flexible strategies are needed for interconnecting heterogeneous software tools and enabling data exchange between them. Drawing on our own work and that of others, we present several strategies for interoperability and their consequences, in particular, a set of simple data structures--list, matrix, network, table and tuple--that have proven sufficient to achieve a high degree of interoperability. We provide a few guidelines for the development of future software that will function as part of an interoperable community of software tools for biological data analysis and visualization.
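
    A minimal sketch of the interoperability strategy above, assuming two hypothetical tools that exchange data only through the simple shared structures (here a table of rows and a network of weighted edges). The function names and data are illustrative, not an API from the paper.

    ```python
    # Hypothetical tools that interoperate only through simple shared data
    # structures (a "table" of rows and a "network" of edges), as advocated above.
    from collections import defaultdict

    def expression_tool():
        """Produce a table: a list of dicts with plain scalar values."""
        return [
            {"gene": "geneA", "regulator": "tf1", "score": 0.9},
            {"gene": "geneB", "regulator": "tf1", "score": 0.4},
            {"gene": "geneC", "regulator": "tf2", "score": 0.7},
        ]

    def table_to_network(table, source_col, target_col, weight_col):
        """Convert a table into a network: {source: [(target, weight), ...]}."""
        network = defaultdict(list)
        for row in table:
            network[row[source_col]].append((row[target_col], row[weight_col]))
        return dict(network)

    def visualization_tool(network):
        """A second tool that consumes the network structure without knowing
        anything about how the table was produced."""
        for source, edges in network.items():
            for target, weight in edges:
                print(f"{source} -> {target} (weight {weight})")

    if __name__ == "__main__":
        table = expression_tool()
        net = table_to_network(table, "regulator", "gene", "score")
        visualization_tool(net)
    ```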

  2. Architecture for interoperable software in biology

    PubMed Central

    Baliga, Nitin S.

    2014-01-01

    Understanding biological complexity demands a combination of high-throughput data and interdisciplinary skills. One way to bring to bear the necessary combination of data types and expertise is by encapsulating domain knowledge in software and composing that software to create a customized data analysis environment. To this end, simple flexible strategies are needed for interconnecting heterogeneous software tools and enabling data exchange between them. Drawing on our own work and that of others, we present several strategies for interoperability and their consequences, in particular, a set of simple data structures—list, matrix, network, table and tuple—that have proven sufficient to achieve a high degree of interoperability. We provide a few guidelines for the development of future software that will function as part of an interoperable community of software tools for biological data analysis and visualization. PMID:23235920

  3. Comprehending Software Architecture using a Single-View Visualization

    SciTech Connect

    Panas, T; Epperly, T W; Quinlan, D J; Saebjoernsen, A; Vuduc, R W

    2007-01-17

    Software is among the most complex human artifacts, and visualization is widely acknowledged as important to understanding software. In this paper, we consider the problem of understanding a software system's architecture through visualization. Whereas traditional visualizations use multiple stakeholder-specific views to present different kinds of task-specific information, we propose an additional visualization technique that unifies the presentation of various kinds of architecture-level information, thereby allowing a variety of stakeholders to quickly see and communicate current development, quality, and costs of a software system. For future empirical evaluation of multi-aspect, single-view architectural visualizations, we have implemented our idea in an existing visualization tool, Vizz3D. Our implementation includes techniques, such as the use of a city metaphor, that reduce visual complexity in order to support single-view visualizations of large-scale programs.

  4. Software Reuse in the Naval Open Architecture

    DTIC Science & Technology

    2008-03-01

    productivity, better quality, and a decrease in time to market for products. Software reuse is a quickly developing underlying pillar of the Naval Open...rapid application development, fastest time to market for updates and keeping up with the Internet technology wars. The last fifty years have seen...collaborations, working software, and responding to change. The enterprise that can get a product or service to market first wins. Organizations

  5. Designing Software Architecture to Achieve Business Goals

    DTIC Science & Technology

    2010-02-19

    ... the requirements that drive the design of the architecture: quality attribute requirements and business requirements for the developing organization ... Quality attribute requirements are, typically, not well specified (e.g., "The system shall be modular," "The system shall be secure"). Business requirements for ...

  6. Software Architecture for Simultaneous Process Control and Software Development/Modification

    SciTech Connect

    Lenarduzzi, Roberto; Hileman, Michael S; McMillan, David E; Holmes Jr, William; Blankenship, Mark; Wilder, Terry

    2011-01-01

    A software architecture is described that allows modification of some application code sections while the remainder of the application continues executing. This architecture facilitates long term testing and process control because the overall process need not be stopped and restarted to allow modifications or additions to the software. A working implementation using National Instruments LabVIEW™ sub-panel and shared variable features is described as an example. This architecture provides several benefits in both the program development and execution environments. The software is easier to maintain and it is not necessary to recompile the entire program after a modification.
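
    The mechanism above is LabVIEW-specific and graphical, but the underlying idea (keep the outer process running while one code section is replaced) can be sketched in Python. The class and function names below are illustrative stand-ins, not part of the described system.

    ```python
    # Minimal sketch: a control loop keeps running while the processing step it
    # calls is swapped out at run time, mirroring (in spirit) replacing a
    # LabVIEW sub-panel VI without stopping the whole application.
    import threading
    import time

    class Swappable:
        """Holds the 'current' processing callable behind a lock."""
        def __init__(self, func):
            self._lock = threading.Lock()
            self._func = func

        def swap(self, new_func):
            with self._lock:
                self._func = new_func

        def __call__(self, x):
            with self._lock:
                return self._func(x)

    def version1(x):
        return x * 2          # original processing section

    def version2(x):
        return x * 2 + 100    # "modified" section, deployed without a restart

    process = Swappable(version1)
    stop = threading.Event()

    def control_loop():
        t = 0
        while not stop.is_set():
            print("output:", process(t))   # the loop never stops or recompiles
            t += 1
            time.sleep(0.2)

    loop = threading.Thread(target=control_loop)
    loop.start()
    time.sleep(1.0)
    process.swap(version2)                 # live modification of one code section
    time.sleep(1.0)
    stop.set()
    loop.join()
    ```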

  7. Flexible Rover Architecture for Science Instrument Integration and Testing

    NASA Technical Reports Server (NTRS)

    Bualat, Maria G.; Kobayashi, Linda; Lee, Susan Y.; Park, Eric

    2006-01-01

    At NASA Ames Research Center, the Intelligent Robotics Group (IRG) fields the K9 and K10 class rovers. Both use a mobile robot hardware architecture designed for extensibility and reconfigurability that allows for rapid changes in instrumentation and provides a high degree of modularity. Over the past several years, we have worked with instrument developers at NASA centers, universities, and national laboratories to integrate or partially integrate their instruments onboard the K9 and K10 rovers. Early efforts required considerable interaction to work through integration issues such as power, data protocols, and mechanical mounting. These interactions informed the design of our current avionics architecture, and have simplified more recent integration projects. In this paper, we will describe the IRG extensible avionics and software architecture and the effect it has had on our recent instrument integration efforts, including integration of four Mars Instrument Development Program devices.

  8. GridOPTICS(TM): A Design for Plug-and-Play Smart Grid Software Architecture

    SciTech Connect

    Gorton, Ian; Liu, Yan; Yin, Jian

    2012-06-03

    As the smart grid becomes reality, software architectures for integrating legacy systems with new innovative approaches for grid management are needed. These architectures must exhibit flexibility, extensibility, interoperability and scalability. In this position paper, we describe our preliminary work to design such an architecture, known as GridOPTICS, that will enable the deployment and integration of new software tools in smart grid operations. Our preliminary design is based upon use cases from PNNL’s Future Power Grid Initiative, which is developing a collection of advanced software technologies for smart grid management and control. We describe the motivations for GridOPTICS, and the preliminary design that we are currently prototyping for several distinct use cases.

  9. INO340 telescope control system: software architecture and development

    NASA Astrophysics Data System (ADS)

    Ravanmehr, Reza; Jafarzadeh, Asghar

    2014-07-01

    The Iranian National Observatory telescope (INO340) is a 3.4 m Alt-Az reflecting optical telescope under design and development. It is an f/11 Ritchey-Chretien design with a 0.3° field of view. The INO340 telescope control system utilizes a distributed control system paradigm that includes four major systems: the Telescope Control System (TCS), the Observation System Supervisor (OSS), the Interlock System (ILS) and the Observatory Monitoring System (OMS). The control system software also employs a 3-tiered hierarchical architecture. In this paper, after presenting the fundamental concepts and operations of the INO340 control system, we present the distributed control system software architecture, including the technical and functional architecture, the middleware and infrastructure design, and finally the software development process.

  10. The Need for Software Architecture Evaluation in the Acquisition of Software-Intensive Systems

    DTIC Science & Technology

    2014-01-01

    ... scenario generation framework (Bass, Bachmann et al. 2003). Elements and brief descriptions: Stimulus: a condition that needs to be considered when it arrives ... Architecture Evaluation Methods. 15th Australian Software Engineering Conference. Bass, L., F. Bachmann and M. Klein (2003). "Deriving Architectural ...

  11. ICAROUS: Integrated Configurable Architecture for Unmanned Systems

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.

    2016-01-01

    NASA's Unmanned Aerial System (UAS) Traffic Management (UTM) project aims at enabling near-term, safe operations of small UAS vehicles in uncontrolled airspace, i.e., Class G airspace. A far-term goal of UTM research and development is to accommodate the expected rise in small UAS traffic density throughout the National Airspace System (NAS) at low altitudes for beyond visual line-of-sight operations. This video describes a new capability referred to as ICAROUS (Integrated Configurable Algorithms for Reliable Operations of Unmanned Systems), which is being developed under the auspices of the UTM project. ICAROUS is a software architecture comprised of highly assured algorithms for building safety-centric, autonomous, unmanned aircraft applications. Central to the development of the ICAROUS algorithms is the use of well-established formal methods to guarantee higher levels of safety assurance by monitoring and bounding the behavior of autonomous systems. The core autonomy-enabling capabilities in ICAROUS include constraint conformance monitoring and autonomous detect and avoid functions. ICAROUS also provides a highly configurable user interface that enables the modular integration of mission-specific software components.
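
    The sketch below illustrates the flavor of constraint-conformance monitoring mentioned above with a simple 2-D keep-in geofence check (ray casting). It is emphatically not the formally verified ICAROUS implementation; the names, geometry, and track data are invented for illustration.

    ```python
    # Illustrative sketch of constraint-conformance monitoring: check whether a
    # vehicle position remains inside a keep-in polygon (simple 2-D ray casting).
    # This is NOT the formally verified ICAROUS implementation; it only conveys
    # the kind of geofence check such a monitor performs.
    def inside_polygon(point, polygon):
        """Return True if the (x, y) point lies inside the polygon (vertex list)."""
        x, y = point
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            crosses = (y1 > y) != (y2 > y)
            if crosses and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
        return inside

    def conformance_monitor(track, keep_in):
        """Yield an alert for every track point that violates the keep-in fence."""
        for t, position in enumerate(track):
            if not inside_polygon(position, keep_in):
                yield t, position

    if __name__ == "__main__":
        keep_in = [(0, 0), (10, 0), (10, 10), (0, 10)]       # square geofence
        track = [(1, 1), (5, 5), (9, 9), (11, 9), (12, 10)]  # drifts outside
        for t, pos in conformance_monitor(track, keep_in):
            print(f"t={t}: position {pos} is outside the keep-in geofence")
    ```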

  13. Achieving Critical System Survivability Through Software Architectures

    DTIC Science & Technology

    2006-01-01

    ... survivability. Two major projects of note in the area are OASIS and MAFTIA. For detailed discussions of intrusion tolerance, see the text by Lala [21] and ... "Software Wrappers," in OASIS: Foundations of Intrusion Tolerant Systems (J. Lala, Ed.), IEEE Computer Society Press, 2003. [16] Gartner, Felix C. ... 2003. [21] Lala, J., "Foundations of Intrusion Tolerant Systems," IEEE Computer Society Press, Catalog # PR02057, 2003. [22] Leveson, N., T. Shimeall, J. ...

  14. Software Engineering in Practice: Design and Architectures of FLOSS Systems

    NASA Astrophysics Data System (ADS)

    Capiluppi, Andrea; Knowles, Thomas

    Free/Libre/Open Source Software (FLOSS) practitioners and developers are typically also users of their own systems: as a result, traditional software engineering (SE) processes (e.g., the requirements and design phases) take less time to articulate and negotiate among FLOSS developers. Design and requirements are kept more as informal knowledge, rather than being formally described and assessed. This paper attempts to recover the SE concepts of software design and architecture from three FLOSS case studies sharing the same application domain (i.e., instant messaging). Its first objective is to determine whether a common architecture emerges from the three systems that can be used as shared knowledge for future applications. The second objective is to determine whether these architectures evolve or decay as the systems evolve. The results of this study are encouraging: although no explicit effort was made by FLOSS developers to define a high-level view of the architecture, a common shared architecture could be distilled for the instant-messaging application domain. It was also found that, for two of the three systems, the architecture becomes better organised and the components better specified as the system evolves over time.

  15. Programmable bandwidth management in software-defined EPON architecture

    NASA Astrophysics Data System (ADS)

    Li, Chengjun; Guo, Wei; Wang, Wei; Hu, Weisheng; Xia, Ming

    2016-07-01

    This paper proposes a software-defined EPON architecture which replaces the hardware-implemented DBA module with a reprogrammable DBA module. The DBA module allows pluggable bandwidth allocation algorithms among multiple ONUs, adaptive to traffic profiles and network states. We also introduce a bandwidth management scheme executed at the controller to manage the customized DBA algorithms for all data queues of ONUs. Our performance investigation verifies the effectiveness of this new EPON architecture, and numerical results show that software-defined EPONs can achieve less traffic delay and provide better support for service differentiation in comparison with traditional EPONs.
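
    A minimal sketch of the pluggable-DBA idea, assuming a controller that installs one of several bandwidth-allocation strategies at run time; the class names, units, and toy algorithms are illustrative, not the paper's design.

    ```python
    # Sketch of a software-defined DBA module: the controller installs whichever
    # bandwidth-allocation algorithm suits the current traffic profile, and the
    # OLT simply calls the currently installed algorithm each cycle.
    from typing import Dict

    Requests = Dict[str, int]     # ONU id -> requested bandwidth (arbitrary units)
    Grants = Dict[str, int]       # ONU id -> granted bandwidth

    def fixed_dba(requests: Requests, capacity: int) -> Grants:
        """Split capacity equally, ignoring demand."""
        share = capacity // max(len(requests), 1)
        return {onu: share for onu in requests}

    def proportional_dba(requests: Requests, capacity: int) -> Grants:
        """Grant bandwidth in proportion to each ONU's request."""
        total = sum(requests.values()) or 1
        return {onu: capacity * req // total for onu, req in requests.items()}

    class DBAController:
        """Manages the customized DBA algorithm installed in the (software) OLT."""
        def __init__(self, algorithm=fixed_dba, capacity=1000):
            self.algorithm = algorithm
            self.capacity = capacity

        def install(self, algorithm):
            self.algorithm = algorithm      # reprogram the DBA module

        def allocate(self, requests: Requests) -> Grants:
            return self.algorithm(requests, self.capacity)

    if __name__ == "__main__":
        controller = DBAController()
        demand = {"onu1": 800, "onu2": 100, "onu3": 100}
        print("fixed        :", controller.allocate(demand))
        controller.install(proportional_dba)
        print("proportional :", controller.allocate(demand))
    ```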

  16. Agent Architecture for Aviation Data Integration System

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Wang, Yao; Windrem, May; Patel, Hemil; Wei, Mei

    2004-01-01

    This paper describes the proposed agent-based architecture of the Aviation Data Integration System (ADIS). ADIS is a software system that provides integrated heterogeneous data to support aviation problem-solving activities. Examples of aviation problem-solving activities include engineering troubleshooting, incident and accident investigation, routine flight operations monitoring, safety assessment, maintenance procedure debugging, and training assessment. A wide variety of information is typically referenced when engaging in these activities, including flight recorder data, Automatic Terminal Information Service (ATIS) reports, Jeppesen charts, weather data, air traffic control information, safety reports, and runway visual range (RVR) data. Such wide-ranging information cannot be found in any single unified information source; therefore, it must be actively collected, assembled, and presented in a manner that supports the users' problem-solving activities. This information integration task is non-trivial and presents a variety of technical challenges. ADIS has been developed for this task, and it permits integration of weather, RVR, radar data, and Jeppesen charts with flight data. ADIS has been implemented and used by several airlines' FOQA teams. The initial feedback from the airlines is that such a system is very useful in FOQA analysis. Based on the feedback from the initial deployment, we are developing a new version of the system that would make further progress in achieving the goals of our project.

  17. FLEX: A Modular Software Architecture for Flight License Exam

    NASA Astrophysics Data System (ADS)

    Arsan, Taner; Saka, Hamit Emre; Sahin, Ceyhun

    This paper describes the design and implementation of a Web-based examination system called FLEX (Flight License Exam) Software. We designed and implemented a flexible and modular software architecture. The implemented system provides basic capabilities such as adding questions to the system, building exams from these questions, and having students take these exams. There are three types of users with different authorizations: the system administrator, operators, and students. The system administrator operates and maintains the system and audits system integrity; the administrator cannot change exam results and cannot take an exam. The operator module includes instructors; operators have privileges such as preparing exams, entering questions, and changing existing questions. Students can log on to the system and access exams via a specific URL. Another characteristic of the system is that, for security reasons, neither operators nor the system administrator can delete questions. Exam questions are stored in the database by topic and lecture, so operators and the system administrator can easily select questions. Taken together, FLEX allows many students to take exams at the same time under safe, reliable, and user-friendly conditions, and provides a reliable examination system for authorized aviation administration companies. The system was developed on the LAMP web platform (Linux, the Apache web server, MySQL, and the object-oriented scripting language PHP), and page structures are built with a content management system (CMS).
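
    A minimal sketch of the role-based authorization rules summarized above (administrators maintain and audit the system but cannot change results or take exams; operators manage questions but no role may delete them). The permission names are illustrative, not the FLEX implementation.

    ```python
    # Minimal sketch of the role-based authorization rules described above.
    # Names are illustrative only.
    PERMISSIONS = {
        "admin":    {"maintain_system", "audit_integrity", "prepare_exam",
                     "enter_question", "edit_question"},
        "operator": {"prepare_exam", "enter_question", "edit_question"},
        "student":  {"take_exam"},
    }
    # Deliberately absent everywhere: "delete_question"; and for admins:
    # "change_result" and "take_exam".

    def authorize(role: str, action: str) -> bool:
        return action in PERMISSIONS.get(role, set())

    if __name__ == "__main__":
        assert authorize("operator", "enter_question")
        assert not authorize("operator", "delete_question")    # security rule
        assert not authorize("admin", "take_exam")              # admins cannot sit exams
        assert authorize("student", "take_exam")
        print("authorization rules hold")
    ```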

  18. Hardware and software fault tolerance - A unified architectural approach

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Alger, Linda S.

    1988-01-01

    The loss of hardware fault tolerance which often arises when design diversity is used to improve the fault tolerance of computer software is considered analytically, and a unified design approach is proposed to avoid the problem. The fundamental theory of fault-tolerant (FT) architectures is reviewed; the current status of design-diversity software development is surveyed; and the FT-processor/attached-processor (FTP/AP) architecture developed by Lala et al. (1986) is described in detail and illustrated with diagrams. FTP/AP is shown to permit efficient implementation of N-version FT software while still tolerating random hardware failures with very high coverage; the reliability is found to be significantly higher than that of conventional majority-vote N-version software.

  19. Software Defined Radios - Architectures, Systems and Functions

    NASA Technical Reports Server (NTRS)

    Sims, William H.

    2017-01-01

    Software Defined Radio is an industry term describing a method of utilizing a minimum amount of Radio Frequency (RF)/analog electronics before digitization takes place. Upon digitization, all other functions are performed in software/firmware. There are as many different types of SDRs as there are data systems. Software Defined Radio (SDR) technology has been proven in the commercial sector since the early 90's. Today's rapid advancement in mobile telephone reliability and power management capabilities exemplifies the effectiveness of SDR technology for the modern communications market. In contrast, the foundations of transponder technology presently qualified for satellite applications were developed during the early space program of the 1960's. SDR technology offers the potential to revolutionize satellite transponder technology by increasing science data through-put capability by at least an order of magnitude. While the SDR is adaptive in nature and is "One-size-fits-all" by design, conventional transponders are built to a specific platform and must be redesigned for every new bus. The SDR uses a minimum amount of analog/Radio Frequency components to up/down-convert the RF signal to/from a digital format. Once analog data is digitized, all processing is performed using hardware logic. Typical SDR processes include filtering, modulation, up/down conversion, and demodulation. This presentation will show how the emerging SDR market has leveraged the existing commercial sector to provide a path to a radiation-tolerant SDR transponder. These innovations will reduce the cost of transceivers, decrease power requirements, and bring a commensurate reduction in volume. A second pay-off is the increased flexibility of the SDR, allowing the same hardware to implement multiple transponder types by altering hardware logic - no change of analog hardware is required - all of which can ultimately be accomplished in orbit. This in turn would provide high capability and low cost

  20. Software Defined Radios - Architectures, Systems and Functions

    NASA Technical Reports Server (NTRS)

    Sims, Herb

    2017-01-01

    Software Defined Radio is an industry term describing a method of utilizing a minimum amount of Radio Frequency (RF)/analog electronics before digitization takes place. Upon digitization, all other functions are performed in software/firmware. There are as many different types of SDRs as there are data systems. Software Defined Radio (SDR) technology has been proven in the commercial sector since the early 90's. Today's rapid advancement in mobile telephone reliability and power management capabilities exemplifies the effectiveness of SDR technology for the modern communications market. In contrast, the foundations of transponder technology presently qualified for satellite applications were developed during the early space program of the 1960's. SDR technology offers the potential to revolutionize satellite transponder technology by increasing science data through-put capability by at least an order of magnitude. While the SDR is adaptive in nature and is "One-size-fits-all" by design, conventional transponders are built to a specific platform and must be redesigned for every new bus. The SDR uses a minimum amount of analog/Radio Frequency components to up/down-convert the RF signal to/from a digital format. Once analog data is digitized, all processing is performed using hardware logic. Typical SDR processes include filtering, modulation, up/down conversion, and demodulation. This presentation will show how the emerging SDR market has leveraged the existing commercial sector to provide a path to a radiation-tolerant SDR transponder. These innovations will reduce the cost of transceivers, decrease power requirements, and bring a commensurate reduction in volume. A second pay-off is the increased flexibility of the SDR, allowing the same hardware to implement multiple transponder types by altering hardware logic - no change of analog hardware is required - all of which can ultimately be accomplished in orbit. This in turn would provide high capability and low cost

  1. Software Technology for Adaptable, Reliable Systems (STARS). Software Architecture Seminar Report: Central Archive for Reusable Defense Software (CARDS)

    DTIC Science & Technology

    1994-01-29

    ... products and information. As noted in the DoD Software Reuse Initiative Vision and Strategy, DoD aims "[t]o drive the DoD software community from its ... development of systems complying with approved architectures. The creation of generic components must be independent of development of fieldable production ... products are often built to different standards. However, these issues need to be considered from an architectural standpoint so that components will ...

  2. Fault Management Architectures and the Challenges of Providing Software Assurance

    NASA Technical Reports Server (NTRS)

    Savarino, Shirley; Fitz, Rhonda; Fesq, Lorraine; Whitman, Gerek

    2015-01-01

    Satellite system Fault Management (FM) is focused on safety, the preservation of assets, and maintaining the desired functionality of the system. How FM is implemented varies among missions. Common to most is system complexity due to a need to establish a multi-dimensional structure across hardware, software and operations. This structure is necessary to identify and respond to system faults, mitigate technical risks and ensure operational continuity. These architecture, implementation and software assurance efforts increase with mission complexity. Because FM is a systems engineering discipline with a distributed implementation, providing efficient and effective verification and validation (V&V) is challenging. A breakout session at the 2012 NASA Independent Verification & Validation (IV&V) Annual Workshop titled "V&V of Fault Management: Challenges and Successes" exposed these issues in terms of V&V for a representative set of architectures. NASA's IV&V is funded by NASA's Software Assurance Research Program (SARP) in partnership with NASA's Jet Propulsion Laboratory (JPL) to extend the work performed at the Workshop session. NASA IV&V will extract FM architectures across the IV&V portfolio and evaluate the data set for robustness, assess visibility for validation and test, and define software assurance methods that could be applied to the various architectures and designs. This work focuses efforts on FM architectures from critical and complex projects within NASA. The identification of particular FM architectures, visibility, and associated V&V/IV&V techniques provides a data set that can enable higher assurance that a satellite system will adequately detect and respond to adverse conditions. Ultimately, results from this activity will be incorporated into the NASA Fault Management Handbook, providing dissemination across NASA, other agencies and the satellite community. This paper discusses the approach taken to perform the evaluations and preliminary findings from the research.

  3. Scalable software architectures for decision support.

    PubMed

    Musen, M A

    1999-12-01

    Interest in decision-support programs for clinical medicine soared in the 1970s. Since that time, workers in medical informatics have been particularly attracted to rule-based systems as a means of providing clinical decision support. Although developers have built many successful applications using production rules, they also have discovered that creation and maintenance of large rule bases is quite problematic. In the 1980s, several groups of investigators began to explore alternative programming abstractions that can be used to build decision-support systems. As a result, the notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) problem-solving methods--domain-independent algorithms for automating stereotypical tasks--and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper highlights how developers can construct large, maintainable decision-support systems using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community.
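
    A toy sketch of the two building blocks highlighted above: a domain-independent problem-solving method (a crude heuristic classifier) configured by a domain ontology of concepts and suggestive findings. Both the ontology content and the scoring rule are invented for illustration and are not drawn from any particular decision-support system.

    ```python
    # Sketch of reusable decision-support components: a domain-independent
    # problem-solving method parameterized by a domain ontology.
    DOMAIN_ONTOLOGY = {
        # concept             -> findings that count as evidence for it (toy data)
        "bacterial_infection": {"fever", "elevated_wbc", "positive_culture"},
        "viral_infection":     {"fever", "normal_wbc", "negative_culture"},
        "dehydration":         {"low_blood_pressure", "elevated_sodium"},
    }

    def heuristic_classify(findings, ontology):
        """Domain-independent method: rank concepts by overlap with the findings."""
        findings = set(findings)
        scored = [
            (concept, len(findings & evidence) / len(evidence))
            for concept, evidence in ontology.items()
        ]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

    if __name__ == "__main__":
        patient_findings = ["fever", "elevated_wbc", "positive_culture"]
        for concept, score in heuristic_classify(patient_findings, DOMAIN_ONTOLOGY):
            print(f"{concept}: {score:.2f}")
    ```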

  4. Generic Software Architecture for Prognostics (GSAP) User Guide

    NASA Technical Reports Server (NTRS)

    Teubert, Christopher Allen; Daigle, Matthew John; Watkins, Jason; Sankararaman, Shankar; Goebel, Kai

    2016-01-01

    The Generic Software Architecture for Prognostics (GSAP) is a framework for applying prognostics. It makes applying prognostics easier by implementing many of the common elements across prognostic applications. The standard interface enables reuse of prognostic algorithms and models across systems using the GSAP framework.
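
    GSAP itself is a C++ framework, and the sketch below is not its API; it only illustrates, in Python, how a standard prognoser interface lets models and algorithms be swapped and reused across applications. The class names and the constant-discharge-rate battery model are toy assumptions.

    ```python
    # Illustrative sketch (not the actual GSAP API): a common prognoser interface
    # so that different models and algorithms can be reused across applications.
    from abc import ABC, abstractmethod

    class Prognoser(ABC):
        """Standard interface every prognoser implements."""
        @abstractmethod
        def step(self, measurement: dict) -> None: ...
        @abstractmethod
        def predicted_end_of_life(self) -> float: ...

    class ToyBatteryPrognoser(Prognoser):
        """Estimates when a battery voltage threshold will be crossed by fitting
        a constant discharge rate to the observed measurements."""
        def __init__(self, threshold_v=3.0):
            self.threshold_v = threshold_v
            self.history = []                      # (time, voltage) pairs

        def step(self, measurement):
            self.history.append((measurement["time"], measurement["voltage"]))

        def predicted_end_of_life(self):
            (t0, v0), (t1, v1) = self.history[0], self.history[-1]
            rate = (v1 - v0) / (t1 - t0)           # volts per second (negative)
            if rate >= 0:
                return float("inf")
            return t1 + (self.threshold_v - v1) / rate

    if __name__ == "__main__":
        prognoser: Prognoser = ToyBatteryPrognoser()
        for t, v in [(0, 4.1), (600, 4.0), (1200, 3.9)]:
            prognoser.step({"time": t, "voltage": v})
        print("predicted end of discharge at t =", prognoser.predicted_end_of_life(), "s")
    ```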

  5. Software Defined Radios - Architectures, Systems and Functions

    NASA Technical Reports Server (NTRS)

    Sims, Herb

    2017-01-01

    Software Defined Radio (SDR) technology has been proven in the commercial sector since the early 90's. Today's rapid advancement in mobile telephone reliability and power management capabilities exemplifies the effectiveness of SDR technology for the modern communications market. SDR technology offers the potential to revolutionize satellite transponder technology by increasing science data through-put capability by at least an order of magnitude. While the SDR is adaptive in nature and is "One-size-fits-all" by design, conventional transponders are built to a specific platform and must be redesigned for every new bus. The SDR uses a minimum amount of analog/Radio Frequency (RF) components to up/down-convert the RF signal to/from a digital format. Once analog data is digitized, all processing is performed using hardware logic. Typical SDR processes include filtering, modulation, up/down conversion, and demodulation. These innovations have reduced the cost of transceivers, decreased power requirements, and brought a commensurate reduction in volume. An additional pay-off is the increased flexibility of the SDR: allowing the same hardware to implement multiple transponder types by altering hardware logic - no change of analog hardware is required - all of which can ultimately be accomplished in orbit.
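
    A toy NumPy sketch of the digital processing chain named above (down-conversion, filtering, demodulation) for a BPSK signal. The sample rate, carrier, symbol rate, and filter are arbitrary illustrative choices, not parameters of any flight SDR.

    ```python
    # Toy sketch of an SDR-style digital chain: down-convert, filter, demodulate.
    import numpy as np

    fs = 48_000          # sample rate (Hz)
    fc = 6_000           # carrier frequency (Hz)
    sps = 40             # samples per symbol (1200 baud)

    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, 32)
    symbols = 2 * bits - 1                               # BPSK: 0 -> -1, 1 -> +1
    baseband_tx = np.repeat(symbols, sps).astype(float)
    t = np.arange(baseband_tx.size) / fs
    passband = baseband_tx * np.cos(2 * np.pi * fc * t)  # "RF" signal after the ADC

    # --- software-defined receiver ---
    mixed = passband * np.exp(-2j * np.pi * fc * t)      # digital down-conversion
    lpf = np.ones(9) / 9                                 # crude low-pass FIR
    baseband_rx = np.convolve(mixed, lpf, mode="same").real

    # demodulate: integrate over each symbol period and slice
    rx_symbols = baseband_rx.reshape(-1, sps).sum(axis=1)
    rx_bits = (rx_symbols > 0).astype(int)

    print("bit errors:", int(np.sum(rx_bits != bits)))   # expected: 0
    ```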

  6. Fault Management Architectures and the Challenges of Providing Software Assurance

    NASA Technical Reports Server (NTRS)

    Savarino, Shirley; Fitz, Rhonda; Fesq, Lorraine; Whitman, Gerek

    2015-01-01

    Fault Management (FM) is focused on safety, the preservation of assets, and maintaining the desired functionality of the system. How FM is implemented varies among missions. Common to most missions is system complexity due to a need to establish a multi-dimensional structure across hardware, software and spacecraft operations. FM is necessary to identify and respond to system faults, mitigate technical risks and ensure operational continuity. Generally, FM architecture, implementation, and software assurance efforts increase with mission complexity. Because FM is a systems engineering discipline with a distributed implementation, providing efficient and effective verification and validation (V&V) is challenging. A breakout session at the 2012 NASA Independent Verification & Validation (IV&V) Annual Workshop titled "V&V of Fault Management: Challenges and Successes" exposed this issue in terms of V&V for a representative set of architectures. NASA's Software Assurance Research Program (SARP) has provided funds to NASA IV&V to extend the work performed at the Workshop session in partnership with NASA's Jet Propulsion Laboratory (JPL). NASA IV&V will extract FM architectures across the IV&V portfolio and evaluate the data set, assess visibility for validation and test, and define software assurance methods that could be applied to the various architectures and designs. This SARP initiative focuses efforts on FM architectures from critical and complex projects within NASA. The identification of particular FM architectures and associated V&V/IV&V techniques provides a data set that can enable improved assurance that a system will adequately detect and respond to adverse conditions. Ultimately, results from this activity will be incorporated into the NASA Fault Management Handbook providing dissemination across NASA, other agencies and the space community. This paper discusses the approach taken to perform the evaluations and preliminary findings from the research.

  7. Software architecture for the ORNL large-coil test facility data system

    NASA Astrophysics Data System (ADS)

    Blair, E. T.; Baylor, L. R.

    1986-08-01

    The VAX-based data-acquisition system for the International Fusion Superconducting Magnet Test Facility (IFSMTF) at Oak Ridge National Laboratory (ORNL) is a second-generation system that evolved from a PDP-11/60-based system used during the initial phase of facility testing. The VAX-based software represents a layered implementation that provides integrated access to all of the data sources within the system, decoupling end-user data retrieval from various front-end data sources through a combination of software architecture and instrumentation data bases. Independent VAX processes manage the various front-end data sources, each being responsible for controlling, monitoring, acquiring, and disposing data and control parameters for access from the data retrieval software. This paper describes the software architecture and the functionality incorporated into the various layers of the data system.

  8. A Scalable Software Architecture Booting and Configuring Nodes in the Whitney Commodity Computing Testbed

    NASA Technical Reports Server (NTRS)

    Fineberg, Samuel A.; Kutler, Paul (Technical Monitor)

    1997-01-01

    The Whitney project is integrating commodity off-the-shelf PC hardware and software technology to build a parallel supercomputer with hundreds to thousands of nodes. To build such a system, one must have a scalable software model, and the installation and maintenance of the system software must be completely automated. We describe the design of an architecture for booting, installing, and configuring nodes in such a system with particular consideration given to scalability and ease of maintenance. This system has been implemented on a 40-node prototype of Whitney and is to be used on the 500 processor Whitney system to be built in 1998.

  9. Architecture for a Generalized Emergency Management Software System

    SciTech Connect

    Hoza, Mark; Bower, John C.; Stoops, LaMar R.; Downing, Timothy R.; Carter, Richard J.; Millard, W. David

    2002-12-19

    The Federal Emergency Management Information System (FEMIS) was originally developed for the Chemical Stockpile Emergency Preparedness Program (CSEPP). It has evolved from a CSEPP-specific emergency management software system to a general-purpose system that supports multiple types of hazards. The latest step in the evolution is the adoption of a hazard analysis architecture that enables the incorporation of hazard models for each of the hazards such that the model is seamlessly incorporated into the FEMIS hazard analysis subsystem. This paper describes that new architecture.

  10. Measurements of the LHCb software stack on the ARM architecture

    NASA Astrophysics Data System (ADS)

    Vijay Kartik, S.; Couturier, Ben; Clemencic, Marco; Neufeld, Niko

    2014-06-01

    The ARM architecture is a power-efficient design used in most processors in mobile devices around the world today, since it provides reasonable compute performance per watt. The current LHCb software stack is designed (and thus expected) to build and run on machines with the x86/x86_64 architecture. This paper outlines the process of measuring the performance of the LHCb software stack on the ARM architecture - specifically, the ARMv7 architecture on Cortex-A9 processors from NVIDIA and on full-fledged ARM servers with chipsets from Calxeda - and makes comparisons with the performance on x86_64 architectures on the Intel Xeon L5520/X5650 and AMD Opteron 6272. The paper emphasises performance per core with respect to the power drawn by the compute nodes for the given performance - this ensures a fair real-world comparison with much more 'powerful' Intel/AMD processors. The comparisons of these real workloads in the context of LHCb are also complemented with the standard synthetic benchmarks HEPSPEC and Coremark. The pitfalls and solutions for the non-trivial task of porting the source code to build for the ARMv7 instruction set are presented. The specific changes in the build process needed for ARM-specific portions of the software stack are described, to serve as pointers for further attempts taken up by other groups in this direction. Cases where architecture-specific tweaks at the assembler level (both in ROOT and the LHCb software stack) were needed for a successful compile are detailed - these cases are good indicators of where and how the software stack as well as the build system can be made more portable and multi-arch friendly. The experience gained from the tasks described in this paper is intended to i) assist in making an informed choice about ARM-based server solutions as a feasible low-power alternative to the current compute nodes, and ii) help revisit the software design and build system for portability and generic improvements.

  11. DAQ: Software Architecture for Data Acquisition in Sounding Rockets

    NASA Technical Reports Server (NTRS)

    Ahmad, Mohammad; Tran, Thanh; Nichols, Heidi; Bowles-Martinez, Jessica N.

    2011-01-01

    A multithreaded software application was developed by the Jet Propulsion Laboratory (JPL) to collect a set of correlated imagery, Inertial Measurement Unit (IMU) and GPS data for a Wallops Flight Facility (WFF) sounding rocket flight. The data set will be used to advance Terrain Relative Navigation (TRN) technology algorithms being researched at JPL. This paper describes the software architecture and the tests used to meet the timing and data rate requirements for the software used to collect the dataset. Also discussed are the challenges of using commercial off-the-shelf (COTS) flight hardware and open source software, including multiple Camera Link (C-link) based cameras, a Pentium-M based computer, and the Linux Fedora 11 operating system. Additionally, the paper describes the history of the software architecture's usage in other JPL projects and its applicability to future missions, such as cubesats, UAVs, and research planes/balloons, as well as the human aspects of the project, especially JPL's Phaeton program, and the results of the launch.

  12. Key software architecture decisions for the automated planet finder

    NASA Astrophysics Data System (ADS)

    Lanclos, Kyle; Deich, William T. S.; Holden, Bradford P.; Allen, S. L.

    2016-08-01

    The Automated Planet Finder (APF) at Lick Observatory on Mount Hamilton is a modern 2.4 meter computer controlled telescope. At one Nasmyth focus is the Levy Spectrometer, at present the sole instrument used with the APF. The primary research mission of the APF and the Levy Spectrometer is high-precision Doppler spectroscopy. Observing at the APF is unattended; custom software written by diverse authors in diverse languages manage all aspects of a night's observing. This paper will cover some of the key software architecture decisions made in the development of autonomous observing at the APF. The relevance to future projects of these decisions will be emphasized throughout.

  13. Investigating the Acquisition of Software Systems that Rely on Open Architecture and Open Source Software

    DTIC Science & Technology

    2010-03-01

    Investigating the Acquisition of Software Systems that Rely on Open Architecture and Open Source Software, March 2010, by Dr. Walt Scacchi ... including project teams operating as virtual organizations [Scacchi 2002, 2007]. There is a basic need to understand how to identify an optimal mix of OSS ... ecosystem. However, the relationship among OA, OSS, requirements, and acquisition is poorly understood [cf. Scacchi 2002, Naegle and Petross 2007 ...

  14. Software architecture of the Magdalena Ridge Observatory Interferometer

    NASA Astrophysics Data System (ADS)

    Farris, Allen; Klinglesmith, Dan; Seamons, John; Torres, Nicolas; Buscher, David; Young, John

    2010-07-01

    Merging software from 36 independent work packages into a coherent, unified software system with a lifespan of twenty years is the challenge faced by the Magdalena Ridge Observatory Interferometer (MROI). We solve this problem by using standardized interface software automatically generated from simple high-level descriptions of these systems, relying only on Linux, GNU, and POSIX without complex software such as CORBA. This approach, based on gigabit Ethernet with a TCP/IP protocol, provides the flexibility to integrate and manage diverse, independent systems using a centralized supervisory system that provides a database manager, data collectors, fault handling, and an operator interface.
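
    A minimal sketch of generating a standardized client interface from a simple high-level description. The description format, message framing (JSON that would travel over a socket), and names are invented for illustration; the abstract does not specify the actual MROI tooling.

    ```python
    # Sketch: build a client proxy class from a simple high-level interface
    # description. Each generated method just formats the command as a JSON
    # message of the kind that could be sent over TCP/IP to the subsystem.
    import json

    DESCRIPTION = {                      # invented high-level description
        "system": "delay_line",
        "commands": {
            "move_to": ["position_mm"],
            "halt": [],
            "report_status": [],
        },
    }

    def make_proxy(description):
        """Create a class with one method per described command."""
        def make_method(command, params):
            def method(self, *args):
                if len(args) != len(params):
                    raise TypeError(f"{command} expects {params}")
                return json.dumps({"system": description["system"],
                                   "command": command,
                                   "args": dict(zip(params, args))})
            method.__name__ = command
            return method

        methods = {cmd: make_method(cmd, params)
                   for cmd, params in description["commands"].items()}
        class_name = "".join(part.title() for part in description["system"].split("_")) + "Proxy"
        return type(class_name, (object,), methods)

    if __name__ == "__main__":
        DelayLineProxy = make_proxy(DESCRIPTION)
        dl = DelayLineProxy()
        print(dl.move_to(123.4))   # the JSON that would be written to the socket
        print(dl.halt())
    ```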

  15. Software Architecture of Sensor Data Distribution In Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Lee, Charles; Alena, Richard; Stone, Thom; Ossenfort, John; Walker, Ed; Notario, Hugo

    2006-01-01

    Data from mobile and stationary sensors will be vital in planetary surface exploration. The distribution and collection of sensor data in an ad-hoc wireless network presents a challenge. Irregular terrain, mobile nodes, new associations with access points and repeaters with stronger signals as the network reconfigures to adapt to new conditions, signal fade, and hardware failures can cause: a) data errors; b) out-of-sequence packets; c) duplicate packets; and d) drop-out periods (when a node is not connected). To mitigate the effects of these impairments, a robust and reliable software architecture must be implemented. This architecture must also be tolerant of communications outages. This paper describes such a robust and reliable software infrastructure, which meets the challenges of a distributed ad hoc network in a difficult environment, and presents the results of actual field experiments testing the principles and actual code developed.
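
    A minimal sketch of the receive-side mitigation implied above: sequence-numbered packets are de-duplicated and re-ordered before delivery, and a gap left by a drop-out is skipped after a bounded wait. The class, policy, and parameters are illustrative assumptions, not the deployed infrastructure.

    ```python
    # Sketch: de-duplicate and re-order sequence-numbered sensor packets,
    # skipping gaps left by drop-out periods after a bounded wait.
    class ReorderBuffer:
        def __init__(self, max_gap_wait=5):
            self.expected = 0              # next sequence number to deliver
            self.pending = {}              # seq -> payload, held out-of-order packets
            self.max_gap_wait = max_gap_wait
            self.waited = 0

        def _drain(self):
            out = []
            while self.expected in self.pending:
                out.append(self.pending.pop(self.expected))
                self.expected += 1
            return out

        def push(self, seq, payload):
            """Accept one packet; return payloads now deliverable in order."""
            if seq < self.expected or seq in self.pending:
                return []                                  # duplicate: drop it
            self.pending[seq] = payload
            delivered = self._drain()
            if delivered:
                self.waited = 0
            else:                                          # blocked on a lost packet
                self.waited += 1
                if self.waited > self.max_gap_wait:
                    self.expected = min(self.pending)      # skip the gap
                    delivered = self._drain()
                    self.waited = 0
            return delivered

    if __name__ == "__main__":
        buf = ReorderBuffer(max_gap_wait=2)
        # packet 1 is lost; packet 2 arrives twice (duplicate)
        stream = [(0, "a"), (2, "c"), (2, "c"), (3, "d"), (4, "e")]
        for seq, payload in stream:
            print(f"received seq {seq} -> delivered {buf.push(seq, payload)}")
    ```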

  16. Conceptual Architecture of Building Energy Management Open Source Software (BEMOSS)

    SciTech Connect

    Khamphanchai, Warodom; Saha, Avijit; Rathinavel, Kruthika; Kuzlu, Murat; Pipattanasomporn, Manisa; Rahman, Saifur; Akyol, Bora A.; Haack, Jereme N.

    2014-12-01

    The objective of this paper is to present a conceptual architecture of a Building Energy Management Open Source Software (BEMOSS) platform. The proposed BEMOSS platform is expected to improve sensing and control of equipment in small- and medium-sized buildings, reduce energy consumption and help implement demand response (DR). It aims to offer: scalability, robustness, plug and play, open protocol, interoperability, cost-effectiveness, as well as local and remote monitoring. In this paper, four essential layers of BEMOSS software architecture -- namely User Interface, Application and Data Management, Operating System and Framework, and Connectivity layers -- are presented. A laboratory test bed to demonstrate the functionality of BEMOSS located at the Advanced Research Institute of Virginia Tech is also briefly described.

  17. Supporting Community Emergency Management Planning Through a Geocollaboration Software Architecture

    NASA Astrophysics Data System (ADS)

    Schafer, Wendy A.; Ganoe, Craig H.; Carroll, John M.

    Emergency management is more than just events occurring within an emergency situation. It encompasses a variety of persistent activities such as planning, training, assessment, and organizational change. We are studying emergency management planning practices in which geographic communities (towns and regions) prepare to respond efficiently to significant emergency events. Community emergency management planning is an extensive collaboration involving numerous stakeholders throughout the community and both reflecting and challenging the community’s structure and resources. Geocollaboration is one aspect of the effort. Emergency managers, public works directors, first responders, and local transportation managers need to exchange information relating to possible emergency event locations and their surrounding areas. They need to examine geospatial maps together and collaboratively develop emergency plans and procedures. Issues such as emergency vehicle traffic routes and staging areas for command posts, arriving media, and personal first responders’ vehicles must be agreed upon prior to an emergency event to ensure an efficient and effective response. This work presents a software architecture that facilitates the development of geocollaboration solutions. The architecture extends prior geocollaboration research and reuses existing geospatial information models. Emergency management planning is one application domain for the architecture. Geocollaboration tools can be developed that support community-wide emergency management planning and preparedness. This chapter describes how the software architecture can be used for the geospatial, emergency management planning activities of one community.

  18. Extensive Evaluation of Using a Game Project in a Software Architecture Course

    ERIC Educational Resources Information Center

    Wang, Alf Inge

    2011-01-01

    This article describes an extensive evaluation of introducing a game project to a software architecture course. In this project, university students have to construct and design a type of software architecture, evaluate the architecture, implement an application based on the architecture, and test this implementation. In previous years, the domain…

  20. An enterprise software architecture for the Green Bank Telescope (GBT)

    NASA Astrophysics Data System (ADS)

    Radziwill, Nicole M.; Mello, Melinda; Sessoms, Eric; Shelton, Amy

    2004-09-01

    The enterprise architecture presents a view of how software utilities and applications are related to one another under unifying rules and principles of development. By constructing an enterprise architecture, an organization will be able to manage the components of its systems within a solid conceptual framework. This largely prevents duplication of effort, focuses the organization on its core technical competencies, and ultimately makes software more maintainable. In the beginning of 2003, several prominent challenges faced software development at the GBT. The telescope was not easily configurable, and observing often presented a challenge, particularly to new users. High priority projects required new experimental developments on short time scales. Migration paths were required for applications which had proven difficult to maintain. In order to solve these challenges, an enterprise architecture was created, consisting of five layers: 1) the telescope control system, and the raw data produced during an observation, 2) Low-level Application Programming Interfaces (APIs) in C++, for managing interactions with the telescope control system and its data, 3) High-Level APIs in Python, which can be used by astronomers or software developers to create custom applications, 4) Application Components in Python, which can be either standalone applications or plug-in modules to applications, and 5) Application Management Systems in Python, which package application components for use by a particular user group (astronomers, engineers or operators) in terms of resource configurations. This presentation describes how these layers combine to make the GBT easier to use, while concurrently making the software easier to develop and maintain.

  1. Towards an Open, Distributed Software Architecture for UxS Operations

    NASA Technical Reports Server (NTRS)

    Cross, Charles D.; Motter, Mark A.; Neilan, James H.; Qualls, Garry D.; Rothhaar, Paul M.; Tran, Loc; Trujillo, Anna C.; Allen, B. Danette

    2015-01-01

    To address the growing need to evaluate, test, and certify an ever expanding ecosystem of UxS platforms in preparation of cultural integration, NASA Langley Research Center's Autonomy Incubator (AI) has taken on the challenge of developing a software framework in which UxS platforms developed by third parties can be integrated into a single system which provides evaluation and testing, mission planning and operation, and out-of-the-box autonomy and data fusion capabilities. This software framework, named AEON (Autonomous Entity Operations Network), has two main goals. The first goal is the development of a cross-platform, extensible, onboard software system that provides autonomy at the mission execution and course-planning level, a highly configurable data fusion framework sensitive to the platform's available sensor hardware, and plug-and-play compatibility with a wide array of computer systems, sensors, software, and controls hardware. The second goal is the development of a ground control system that acts as a test-bed for integration of the proposed heterogeneous fleet, and allows for complex mission planning, tracking, and debugging capabilities. The ground control system should also be highly extensible and allow plug-and-play interoperability with third party software systems. In order to achieve these goals, this paper proposes an open, distributed software architecture which utilizes at its core the Data Distribution Service (DDS) standards, established by the Object Management Group (OMG), for inter-process communication and data flow. The design decisions proposed herein leverage the advantages of existing robotics software architectures and the DDS standards to develop software that is scalable, high-performance, fault tolerant, modular, and readily interoperable with external platforms and software.
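
    AEON builds on the OMG DDS standard; the sketch below is not DDS, just a minimal in-process publish/subscribe bus that conveys the topic-based, loosely coupled data flow described. Topic names and message contents are invented.

    ```python
    # Minimal in-process publish/subscribe bus illustrating topic-based data flow
    # between loosely coupled components. This is a stand-in for the DDS-based
    # communication described above, not an implementation of the DDS standard.
    from collections import defaultdict
    from typing import Any, Callable

    class Bus:
        def __init__(self):
            self._subscribers = defaultdict(list)   # topic -> list of callbacks

        def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
            self._subscribers[topic].append(callback)

        def publish(self, topic: str, message: Any) -> None:
            for callback in self._subscribers[topic]:
                callback(message)

    if __name__ == "__main__":
        bus = Bus()

        # A mission planner and a ground-station display both consume vehicle state,
        # without knowing anything about the component that produces it.
        bus.subscribe("vehicle/state", lambda m: print("planner sees   ", m))
        bus.subscribe("vehicle/state", lambda m: print("ground UI sees ", m))
        bus.subscribe("vehicle/alerts", lambda m: print("ALERT:", m))

        bus.publish("vehicle/state", {"id": "uav1", "alt_m": 120.0, "battery": 0.83})
        bus.publish("vehicle/alerts", {"id": "uav1", "type": "low_battery"})
    ```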

  2. Application of parallelized software architecture to an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Shakya, Rahul; Wright, Adam; Shin, Young Ho; Momin, Orko; Petkovsek, Steven; Wortman, Paul; Gautam, Prasanna; Norton, Adam

    2011-01-01

    This paper presents improvements made to Q, an autonomous ground vehicle designed to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2010 IGVC, Q was upgraded with a new parallelized software architecture and a new vision processor. Improvements were made to the power system, reducing the number of batteries required for operation from six to one. In previous years, a single state machine was used to execute the bulk of processing activities, including sensor interfacing, data processing, path planning, navigation algorithms and motor control. This inefficient approach led to poor software performance and made the software difficult to maintain or modify. For IGVC 2010, the team implemented a modular parallel architecture using the National Instruments (NI) LabVIEW programming language. The new architecture divides all the necessary tasks - motor control, navigation, sensor data collection, etc. - into well-organized components that execute in parallel, providing considerable flexibility and facilitating efficient use of processing power. Computer vision is used to detect white lines on the ground and determine their location relative to the robot. With the new vision processor and some optimization of the image processing algorithm used the previous year, two frames can be acquired and processed in 70 ms. With all these improvements, Q placed 2nd in the autonomous challenge.
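
    The architecture above is implemented in LabVIEW; the Python sketch below only illustrates its structure: independent tasks (sensor acquisition, navigation, motor control) run in parallel and exchange data through queues. Task names, rates, and the toy control rule are assumptions for illustration.

    ```python
    # Sketch of a modular parallel architecture: independent tasks exchange data
    # through queues instead of sharing one monolithic state machine.
    import queue
    import threading
    import time

    sensor_q = queue.Queue()      # sensor task -> navigation task
    command_q = queue.Queue()     # navigation task -> motor task

    def sensor_task(stop):
        reading = 0.0
        while not stop.is_set():
            reading += 0.1                        # pretend vision/laser data
            sensor_q.put({"obstacle_dist": 5.0 - reading})
            time.sleep(0.05)

    def navigation_task(stop):
        while not stop.is_set():
            try:
                data = sensor_q.get(timeout=0.1)
            except queue.Empty:
                continue
            speed = 1.0 if data["obstacle_dist"] > 2.0 else 0.2   # toy rule
            command_q.put({"speed": speed})

    def motor_task(stop):
        while not stop.is_set():
            try:
                cmd = command_q.get(timeout=0.1)
            except queue.Empty:
                continue
            print("motor speed set to", cmd["speed"])

    if __name__ == "__main__":
        stop = threading.Event()
        tasks = [threading.Thread(target=f, args=(stop,))
                 for f in (sensor_task, navigation_task, motor_task)]
        for t in tasks:
            t.start()
        time.sleep(1.0)
        stop.set()
        for t in tasks:
            t.join()
    ```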

  3. Next-generation digital camera integration and software development issues

    NASA Astrophysics Data System (ADS)

    Venkataraman, Shyam; Peters, Ken; Hecht, Richard

    1998-04-01

    This paper investigates the complexities associated with the development of next generation digital cameras due to requirements in connectivity and interoperability. Each successive generation of digital camera improves drastically in cost, performance, resolution, image quality and interoperability features. This is being accomplished by advancements in a number of areas: research, silicon, standards, etc. As the capabilities of these cameras increase, so do the requirements for both hardware and software. Today, there are two single chip camera solutions in the market including the Motorola MPC 823 and LSI DCAM- 101. Real time constraints for a digital camera may be defined by the maximum time allowable between capture of images. Constraints in the design of an embedded digital camera include processor architecture, memory, processing speed and the real-time operating systems. This paper will present the LSI DCAM-101, a single-chip digital camera solution. It will present an overview of the architecture and the challenges in hardware and software for supporting streaming video in such a complex device. Issues presented include the development of the data flow software architecture, testing and integration on this complex silicon device. The strategy for optimizing performance on the architecture will also be presented.

  4. Space Telecommunications Radio System Software Architecture Concepts and Analysis

    NASA Technical Reports Server (NTRS)

    Handler, Louis M.; Hall, Charles S.; Briones, Janette C.; Blaser, Tammy M.

    2008-01-01

    The Space Telecommunications Radio System (STRS) project investigated various Software Defined Radio (SDR) architectures for space. An STRS architecture has been selected that separates the STRS operating environment from its various waveforms and also abstracts any specialized hardware to limit its effect on the operating environment. The design supports software evolution where new functionality is incorporated into the radio. Radio hardware functionality has been moving from hardware-based ASICs into firmware- and software-based processors such as FPGAs, DSPs and General Purpose Processors (GPPs). Use cases capture the requirements of a system by describing how the system should interact with the users or other systems (the actors) to achieve a specific goal. The Unified Modeling Language (UML) is used to illustrate the Use Cases in a variety of ways. The Top Level Use Case diagram shows groupings of the use cases and how the actors are involved. The state diagrams depict the various states that a system or object may be in and the transitions between those states. The sequence diagrams show the main flow of activity as described in the use cases.
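
    The separation of the operating environment from its waveforms can be sketched as an abstract interface that the operating environment programs against. The Python classes below (Waveform, OperatingEnvironment, QpskWaveform) are illustrative assumptions only and are not the actual STRS application programming interfaces.

      from abc import ABC, abstractmethod
      from typing import Dict

      class Waveform(ABC):
          """Hypothetical waveform interface: the operating environment sees only these calls."""
          @abstractmethod
          def configure(self, params: dict) -> None: ...
          @abstractmethod
          def start(self) -> None: ...
          @abstractmethod
          def stop(self) -> None: ...

      class OperatingEnvironment:
          """Loads and controls waveforms without knowing their internals or target hardware."""
          def __init__(self) -> None:
              self._waveforms: Dict[str, Waveform] = {}

          def install(self, name: str, wf: Waveform) -> None:
              self._waveforms[name] = wf

          def run(self, name: str, params: dict) -> None:
              wf = self._waveforms[name]
              wf.configure(params)
              wf.start()

      class QpskWaveform(Waveform):                  # illustrative waveform implementation
          def configure(self, params: dict) -> None:
              print("configuring QPSK:", params)
          def start(self) -> None:
              print("QPSK transmitting")
          def stop(self) -> None:
              print("QPSK stopped")

      oe = OperatingEnvironment()
      oe.install("qpsk", QpskWaveform())
      oe.run("qpsk", {"symbol_rate_hz": 1_000_000})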

  5. Software architecture for time-constrained machine vision applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2013-01-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented toward particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse, and inefficient execution on multicore processors. We present a novel software architecture for time-constrained machine vision applications that addresses these issues. The architecture is divided into three layers. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message-passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for machine vision applications. These modules, which include acquisition, visualization, communication, user interface, and data processing, take advantage of the power of well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, the proposed architecture is applied to a real machine vision application: a jam detector for steel pickling lines.
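
    A minimal sketch of the messaging layer's topic-based filtering, with invented topic names ("camera/frame", "detector/jam") and a toy detector standing in for real application modules: publishers and subscribers reference only topics, never each other.

      from dataclasses import dataclass
      from typing import Callable, List, Tuple

      @dataclass
      class Message:
          topic: str            # e.g. "camera/frame" or "detector/jam"
          payload: object

      class Messaging:
          """Messaging layer: routes published messages to subscribers of matching topics."""
          def __init__(self) -> None:
              self._subs: List[Tuple[str, Callable[[Message], None]]] = []

          def subscribe(self, topic_prefix: str, handler: Callable[[Message], None]) -> None:
              self._subs.append((topic_prefix, handler))

          def publish(self, msg: Message) -> None:
              for prefix, handler in self._subs:
                  if msg.topic.startswith(prefix):
                      handler(msg)

      # Application-layer modules communicate only through topics, never directly.
      bus = Messaging()
      bus.subscribe("camera/", lambda m: bus.publish(
          Message("detector/jam", sum(m.payload) / len(m.payload) > 0.5)))   # toy jam detector
      bus.subscribe("detector/", lambda m: print("jam detected:", m.payload))
      bus.publish(Message("camera/frame", [0.1, 0.9, 0.8, 0.7]))             # toy 'frame'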

  6. A Software Architecture for Adaptive Modular Sensing Systems

    PubMed Central

    Lyle, Andrew C.; Naish, Michael D.

    2010-01-01

    By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration. PMID:22163614
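
    A minimal sketch of the collective-identity idea, with invented module types and configuration names: as modules are attached or removed, the composite looks up the current set of modules and assumes the matching identity.

      # Hypothetical sketch: each module reports its type; the composite derives a collective
      # identity from the set of modules currently attached (names are illustrative only).
      KNOWN_CONFIGURATIONS = {
          frozenset({"camera", "rangefinder", "pan_tilt"}): "inspection head",
          frozenset({"imu", "gps", "sonar"}): "mobile-robot navigation pod",
      }

      class CompositeSensor:
          def __init__(self) -> None:
              self.modules = set()                    # types of currently attached modules

          def attach(self, module_type: str) -> None:
              self.modules.add(module_type)

          def detach(self, module_type: str) -> None:
              self.modules.discard(module_type)

          @property
          def identity(self) -> str:
              return KNOWN_CONFIGURATIONS.get(frozenset(self.modules), "unrecognized assembly")

      s = CompositeSensor()
      for m in ("camera", "rangefinder", "pan_tilt"):
          s.attach(m)
      print(s.identity)          # -> inspection head
      s.detach("rangefinder")
      print(s.identity)          # -> unrecognized assembly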

  7. A software architecture for adaptive modular sensing systems.

    PubMed

    Lyle, Andrew C; Naish, Michael D

    2010-01-01

    By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration.

  8. Requirements for an Integrated UAS CNS Architecture

    NASA Technical Reports Server (NTRS)

    Templin, Fred L.; Jain, Raj; Sheffield, Greg; Taboso-Ballesteros, Pedro; Ponchak, Denise

    2017-01-01

    Communications, Navigation and Surveillance (CNS) requirements must be developed in order to establish a CNS architecture supporting Unmanned Air Systems integration in the National Air Space (UAS in the NAS). These requirements must address cybersecurity, future communications, satellite-based navigation and APNT, and scalable surveillance and situational awareness. CNS integration, consolidation and miniaturization requirements are also important to support the explosive growth in small UAS deployment. Air Traffic Management (ATM) must also be accommodated to support critical Command and Control (C2) for Air Traffic Controllers (ATC). This document therefore presents UAS CNS requirements that will guide the architecture.

  9. Crew Launch Vehicle (CLV) Avionics and Software Integration Overview

    NASA Technical Reports Server (NTRS)

    Monell, Donald W.; Flynn, Kevin C.; Maroney, Johnny

    2006-01-01

    On January 14, 2004, the President of the United States announced a new plan to explore space and extend a human presence across our solar system. The National Aeronautics and Space Administration (NASA) established the Exploration Systems Mission Directorate (ESMD) to develop and field a Constellation Architecture that will bring the Space Exploration vision to fruition. The Constellation Architecture includes a human-rated Crew Launch Vehicle (CLV) segment, managed by the Marshall Space Flight Center (MSFC), comprised of the First Stage (FS), Upper Stage (US), and Upper Stage Engine (USE) elements. The CLV's purpose is to provide safe and reliable crew and cargo transportation into Low Earth Orbit (LEO), as well as insertion into trans-lunar trajectories. The architecture's Spacecraft segment includes, among other elements, the Crew Exploration Vehicle (CEV), managed by the Johnson Space Center (JSC), which is launched atop the CLV. MSFC is also responsible for CLV and CEV stack integration. This paper provides an overview of the Avionics and Software integration approach (which includes the Integrated System Health Management (ISHM) functions), both within the CLV, and across the CEV interface; it addresses the requirements to be met, logistics of meeting those requirements, and the roles of the various groups. The Avionics Integration and Vehicle Systems Test (AIVST) Office was established at the MSFC with system engineering responsibilities for defining and developing the integrated CLV Avionics and Software system. The AIVST Office has defined two Groups, the Avionics and Software Integration Group (AVSIG), and the Integrated System Simulation and Test Integration Group (ISSTIG), and four Panels which will direct trade studies and analyses to ensure the CLV avionics and software meet CLV system and CEV interface requirements. The four panels are: 1) Avionics Integration Panel (AIP), 2) Software Integration Panel, 3) EEE Panel, and 4) Systems Simulation

  10. Demonstration of integrated optimization software

    SciTech Connect

    2008-01-01

    NeuCO has designed and demonstrated the integration of five system control modules using its proprietary ProcessLink® technology of neural networks, advanced algorithms and fuzzy logic to maximize performance of coal-fired plants. The separate modules control cyclone combustion, sootblowing, SCR operations, performance and equipment maintenance. ProcessLink® provides overall plant-level integration of controls responsive to plant operator and corporate criteria. Benefits of an integrated approach include NOx reduction; improvements in heat rate, availability, efficiency and reliability; extension of SCR catalyst life; and reduced consumption of ammonia. All translate into cost savings. As plant complexity increases through retrofit, repowering or other plant modifications, this integrated process optimization approach will be an important tool for plant operators. 1 fig., 1 photo.

  11. Extreme Scaling of Production Visualization Software on Diverse Architectures

    SciTech Connect

    Childs, Henry; Pugmire, David; Ahern, Sean; Whitlock, Brad; Howison, Mark; Weber, Gunther; Bethel, E. Wes

    2009-12-22

    We present the results of a series of experiments studying how visualization software scales to massive data sets. Although several paradigms exist for processing large data, we focus on pure parallelism, the dominant approach for production software. These experiments utilized multiple visualization algorithms and were run on multiple architectures. Two types of experiments were performed. For the first, we examined performance at massive scale: 16,000 or more cores and one trillion or more cells. For the second, we studied weak scaling performance. These experiments were performed on the largest data set sizes published to date in visualization literature, and the findings on scaling characteristics and bottlenecks contribute to understanding of how pure parallelism will perform at high levels of concurrency and with very large data sets.

  12. REAGERE: a reaction-based architecture for integration and control

    NASA Astrophysics Data System (ADS)

    Berry, Nina M.; Kumara, Soundar R. T.

    1997-01-01

    This research is concerned with the design, development and implementation of a unique reaction-based multi-agent architecture (REAGERE) to integrate and control a manufacturing domain, by combining concepts from distributed problem solving and multi-agent systems. This architecture represents an emerging concept of reifying the parts, equipment, and software packages of the domain as individual agent entities. This research also improves on earlier top-down automated manufacturing systems, which suffered from a lack of flexibility, upgradability, overhead difficulties, and performance problems when presented with the uncertainty and dynamics of modern competitive environments. The versatility of the domain is enhanced with the independent development of the agents and the object-oriented events that permit the agents to communicate through the underlying blackboard architecture BB1. This bottom-up concept permits the architecture's integration to rely on the agents' interactions and their perceptions of the current environmental problem(s). Hence the control and coordination of the architecture are adaptable to the agents' reactions to dynamic situations. REAGERE was applied to a simulated predefined automated manufacturing domain for the purpose of controlling and coordinating the internal processes of this domain.

  13. Enhancing User Customization through Novel Software Architecture for Utility Scale Solar Siting Software

    SciTech Connect

    Brant Peery; Sam Alessi; Randy Lee; Leng Vang; Scott Brown; David Solan; Dan Ames

    2014-06-01

    There is a need for a spatial decision support application that allows users to create customized metrics for comparing proposed locations of a new solar installation. This document discusses how PVMapper was designed to overcome the customization problem through the development of loosely coupled spatial and decision components in a JavaScript plugin architecture. This allows the user to easily add functionality and data to the system. The paper also explains how PVMapper provides the user with a dynamic and customizable decision tool that enables them to visually modify the formulas that are used in the decision algorithms that convert data to comparable metrics. The technologies that make up the presentation and calculation software stack are outlined. This document also explains the architecture that allows the tool to grow through custom plugins created by the software users. Some discussion is given on the difficulties encountered while designing the system.
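
    The user-extensible metric idea can be sketched as a plugin registry in which each plugin contributes a site-scoring function and the user combines the scores with adjustable weights. The sketch below is in Python rather than JavaScript, and the metric names, normalization constants, and site fields are invented for illustration; it is not PVMapper's actual plugin interface.

      from typing import Callable, Dict

      METRIC_PLUGINS: Dict[str, Callable[[dict], float]] = {}

      def register_metric(name: str):
          """Decorator that adds a user-supplied metric function to the registry."""
          def wrap(fn: Callable[[dict], float]):
              METRIC_PLUGINS[name] = fn
              return fn
          return wrap

      @register_metric("solar_resource")
      def solar_resource(site: dict) -> float:
          return site["ghi_kwh_m2_day"] / 8.0                     # normalize to 0..1 (assumed scale)

      @register_metric("grid_distance")
      def grid_distance(site: dict) -> float:
          return max(0.0, 1.0 - site["km_to_substation"] / 50.0)  # closer substations score higher

      def score(site: dict, weights: Dict[str, float]) -> float:
          # Weighted combination of whatever metrics the user has registered and selected.
          return sum(w * METRIC_PLUGINS[name](site) for name, w in weights.items())

      site = {"ghi_kwh_m2_day": 6.2, "km_to_substation": 12.0}
      print(round(score(site, {"solar_resource": 0.7, "grid_distance": 0.3}), 3))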

  14. Control software and electronics architecture design in the framework of the E-ELT instrumentation

    NASA Astrophysics Data System (ADS)

    Di Marcantonio, P.; Coretti, I.; Cirami, R.; Comari, M.; Santin, P.; Pucillo, M.

    2010-07-01

    During the last years the European Southern Observatory (ESO), in collaboration with other European astronomical institutes, has started several feasibility studies for the E-ELT (European-Extremely Large Telescope) instrumentation and post-focal adaptive optics. The goal is to create a flexible suite of instruments to deal with the wide variety of scientific questions astronomers would like to see solved in the coming decades. In this framework the INAF-Astronomical Observatory of Trieste (INAF-AOTs) is currently responsible for carrying out the analysis and the preliminary study of the architecture of the electronics and control software of three instruments: CODEX (control software and electronics) and OPTIMOS-EVE/OPTIMOS-DIORAMAS (control software). To cope with the increased complexity and new requirements for stability, precision, real-time latency and communications among sub-systems imposed by these instruments, new solutions have been investigated by our group. In this paper we present the proposed software and electronics architecture based on a distributed common framework centered on the Component/Container model that uses OPC Unified Architecture as a standard layer to communicate with COTS components of three different vendors. We describe three working prototypes that have been set up in our laboratory and discuss their performances, integration complexity and ease of deployment.

  15. Flexible software architecture for user-interface and machine control in laboratory automation.

    PubMed

    Arutunian, E B; Meldrum, D R; Friedman, N A; Moody, S E

    1998-10-01

    We describe a modular, layered software architecture for automated laboratory instruments. The design consists of a sophisticated user interface, a machine controller and multiple individual hardware subsystems, each interacting through a client-server architecture built entirely on top of open Internet standards. In our implementation, the user-interface components are built as Java applets that are downloaded from a server integrated into the machine controller. The user-interface client can thereby provide laboratory personnel with a familiar environment for experiment design through a standard World Wide Web browser. Data management and security are seamlessly integrated at the machine-controller layer using QNX, a real-time operating system. This layer also controls hardware subsystems through a second client-server interface. This architecture has proven flexible and relatively easy to implement and allows users to operate laboratory automation instruments remotely through an Internet connection. The software architecture was implemented and demonstrated on the Acapella, an automated fluid-sample-processing system that is under development at the University of Washington.

  16. The Effective Use of System and Software Architecture Standards for Software Technology Readiness Assessments

    DTIC Science & Technology

    2011-05-01

    Tutorial slide excerpt (topics only): Technology Readiness Assessments overview; tutorial scope; risks of software Critical Technology Element (CTE) identification; missing TRA definitions; algorithms; Department of Defense Architecture Framework Version 2.0; why the Work Breakdown Structure is inadequate for CTE identification.

  17. Integrated computer control system architectural overview

    SciTech Connect

    Van Arsdall, P.

    1997-06-18

    This overview introduces the NIF Integrated Control System (ICCS) architecture. The design is abstract to allow the construction of many similar applications from a common framework. This summary lays the essential foundation for understanding the model-based engineering approach used to execute the design.

  18. Overview and Software Architecture of the Copernicus Trajectory Design and Optimization System

    NASA Technical Reports Server (NTRS)

    Williams, Jacob; Senent, Juan S.; Ocampo, Cesar; Mathur, Ravi; Davis, Elizabeth C.

    2010-01-01

    The Copernicus Trajectory Design and Optimization System represents an innovative and comprehensive approach to on-orbit mission design, trajectory analysis and optimization. Copernicus integrates state of the art algorithms in optimization, interactive visualization, spacecraft state propagation, and data input-output interfaces, allowing the analyst to design spacecraft missions to all possible Solar System destinations. All of these features are incorporated within a single architecture that can be used interactively via a comprehensive GUI interface, or passively via external interfaces that execute batch processes. This paper describes the Copernicus software architecture together with the challenges associated with its implementation. Additionally, future development and planned new capabilities are discussed. Key words: Copernicus, Spacecraft Trajectory Optimization Software.

  19. Intelligent Platform Management Controller Software Architecture in ATCA Modules for Fast Control Systems

    NASA Astrophysics Data System (ADS)

    Rodrigues, A. P.; Correia, M.; Batista, A.; Carvalho, P. F.; Santos, B.; Carvalho, B. B.; Sousa, J.; Gonçalves, B.; Correia, C. M. B.; Varandas, C. A. F.

    2014-08-01

    The Intelligent Platform Management Controller (IPMC) is one of the main elements for hardware management in an Advanced Telecommunications Computing Architecture (ATCA) crate. Hardware failure detection, redundancy procedures, and hot insertion/removal of the boards are tasks that ensure high availability of a control and data acquisition system. Therefore, the Instituto de Plasmas e Fusão Nuclear of the Instituto Superior Técnico de Lisboa (IPFN/IST) decided to develop software for an IPMC SO-DIMM module from CoreIPM and integrate it into the ATCA boards that are being developed. The objective is to include those boards in the instrumentation catalogue of the tokamak International Thermonuclear Experimental Reactor (ITER). This paper describes the architecture and the main tasks performed by the IPMC software module.

  20. Component-based integration of chemistry and optimization software.

    PubMed

    Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L

    2004-11-15

    Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.
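
    The value of such abstract interfaces is that the optimizer depends only on an energy-and-gradient contract, not on a particular chemistry package. The sketch below illustrates that idea with a toy quadratic model and a steepest-descent loop; it is not the Common Component Architecture interface itself, and all class and function names are assumptions.

      from abc import ABC, abstractmethod
      from typing import List, Tuple

      class EnergyModel(ABC):
          """Any chemistry package exposing energy plus gradient can plug into the optimizer."""
          @abstractmethod
          def energy_and_gradient(self, coords: List[float]) -> Tuple[float, List[float]]: ...

      class QuadraticToyModel(EnergyModel):           # stand-in for a real electronic-structure code
          def energy_and_gradient(self, coords):
              energy = sum(x * x for x in coords)
              return energy, [2.0 * x for x in coords]

      def steepest_descent(model: EnergyModel, coords: List[float], step=0.1, iters=50):
          # The optimization component never sees which package produced the gradients.
          for _ in range(iters):
              _, grad = model.energy_and_gradient(coords)
              coords = [x - step * g for x, g in zip(coords, grad)]
          return coords

      print(steepest_descent(QuadraticToyModel(), [1.0, -2.0, 0.5]))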

  1. Integrated Support Software System (ISSS).

    DTIC Science & Technology

    1985-04-01

    Report excerpt (table-of-contents and lessons-learned fragments): a key lesson in the ISSS design philosophy was the discovery of the subtle distinction between "integrating" and "interlacing" tools. Applications of ISSS included the Electrical Harness Data Systems (EHDS), the Defense Mapping Agency (DMA) Modern Programming Environment (MPE) contract, and the Advanced Technology Group.

  2. Open Architecture Standard for NASA's Software-Defined Space Telecommunications Radio Systems

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Johnson, Sandra K.; Kacpura, Thomas J.; Hall, Charles S.; Smith, Carl R.; Liebetreu, John

    2008-01-01

    NASA is developing an architecture standard for software-defined radios used in space- and ground-based platforms to enable commonality among radio developments to enhance capability and services while reducing mission and programmatic risk. Transceivers (or transponders) with functionality primarily defined in software (e.g., firmware) have the ability to change their functional behavior through software alone. This radio architecture standard offers value by employing common waveform software interfaces, method of instantiation, operation, and testing among different compliant hardware and software products. These common interfaces within the architecture abstract application software from the underlying hardware to enable technology insertion independently at either the software or hardware layer. This paper presents the initial Space Telecommunications Radio System (STRS) Architecture for NASA missions to provide the desired software abstraction and flexibility while minimizing the resources necessary to support the architecture.

  3. The Diamond Beamline Controls and Data Acquisition Software Architecture

    NASA Astrophysics Data System (ADS)

    Rees, N.

    2010-06-01

    The software for the Diamond Light Source beamlines[1] is based on two complementary software frameworks: low level control is provided by the Experimental Physics and Industrial Control System (EPICS) framework[2][3] and the high level user interface is provided by the Java based Generic Data Acquisition or GDA[4][5]. EPICS provides a widely used, robust, generic interface across a wide range of hardware where the user interfaces are focused on serving the needs of engineers and beamline scientists to obtain detailed low level views of all aspects of the beamline control systems. The GDA system provides a high-level system that combines an understanding of scientific concepts, such as reciprocal lattice coordinates, a flexible python syntax scripting interface for the scientific user to control their data acquisition, and graphical user interfaces where necessary. This paper describes the beamline software architecture in more detail, highlighting how these complementary frameworks provide a flexible system that can accommodate a wide range of requirements.

  4. Towards multi-platform software architecture for Collaborative Teleoperation

    SciTech Connect

    Domingues, Christophe; Otmane, Samir; Davesne, Frederic; Mallem, Malik

    2009-03-05

    Augmented Reality (AR) can provide a Human Operator (HO) with real help in achieving complex tasks, such as remote control of robots and cooperative teleassistance. Using appropriate augmentations, the HO can interact faster, more safely and more easily with the remote real world. In this paper, we present an extension of an existing distributed software and network architecture for collaborative teleoperation based on networked human-scaled mixed reality and a mobile platform. The first teleoperation system was composed of a VR application and a Web application; however, the two systems could not be used together, and it was impossible to control a distant robot from both simultaneously. Our goal is to update the teleoperation system to permit heterogeneous collaborative teleoperation between the two platforms. An important feature of this interface is based on the use of different Virtual Reality platforms and different Mobile platforms to control one or many robots.

  5. Project Integration Architecture: Formulation of Semantic Parameters

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    One of several key elements of the Project Integration Architecture (PIA) is the intention to formulate parameter objects which convey meaningful semantic information. In so doing, it is expected that a level of automation can be achieved in the consumption of information content by PIA-consuming clients outside the programmatic boundary of a presenting PIA-wrapped application. This paper discusses the steps that have been recently taken in formulating such semantically-meaningful parameters.

  6. The use of software agents and distributed objects to integrate enterprises: Compatible or competing technologies?

    SciTech Connect

    Pancerella, C.M.

    1998-04-01

    Distributed object and software agent technologies are two integration methods for connecting enterprises. The two technologies have overlapping goals--interoperability and architectural support for integrating software components--though to date little or no integration of the two technologies has been made at the enterprise level. The primary difference between these two technologies is that distributed object technologies focus on the problems inherent in connecting distributed heterogeneous systems whereas software agent technologies focus on the problems involved with coordination and knowledge exchange across domain boundaries. This paper addresses the integration of these technologies in support of enterprise integration across organizational and geographic boundaries. The authors discuss enterprise integration issues, review their experiences with both technologies, and make recommendations for future work. Neither technology is a panacea. Good software engineering techniques must be applied to integrate an enterprise because scalability and a distributed software development team are realities.

  7. A Distributed, Cross-Agency Software Architecture for Sharing Climate Models and Observational Data Sets (Invited)

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Mattmann, C. A.; Braverman, A. J.; Cinquini, L.

    2010-12-01

    The Jet Propulsion Laboratory (JPL) has been developing a distributed infrastructure to support access and sharing of Earth Science observational data sets with climate models to support model-to-data intercomparison for climate research. The Climate Data Exchange (CDX), a framework for linking distributed repositories coupled with tailored distributed services to support the intercomparison, provides mechanisms to discover, access, transform and share observational and model output data [2]. These services are critical to allowing data to remain distributed, but be pulled together to support analysis. The architecture itself provides a services-based approach allowing for integrating and working with other computing infrastructures through well-defined software interfaces. Specifically, JPL has worked very closely with the Earth System Grid (ESG) and the Program for Climate Model Diagnostics and Intercomparisons (PCMDI) at Lawrence Livermore National Laboratory (LLNL) to integrate NASA science data systems with the Earth System Grid to support federation across organizational and agency boundaries [1]. Of particular interest near-term is enabling access to NASA observational data alongside climate models for the Coupled Model Intercomparison Project known as CMIP5. CMIP5 is the protocol that will be used for the next Intergovernmental Panel on Climate Change (IPCC) Assessment Report (AR5) on climate change. JPL and NASA are currently engaged in a project to ensure that observational data are available to the climate research community through the Earth System Grid. By both developing a software architecture and working with the key architects for the ESG, JPL has been successful at building a prototype for AR5. This presentation will review the software architecture including core principles, models and interfaces, the Climate Data Exchange project and specific goals to support access to both observational data and models for AR5. It will highlight the progress

  8. Proceedings of the Second Software Architecture Technology User Network (SATURN) Workshop

    DTIC Science & Technology

    2006-08-01

    Proceedings excerpt (front matter and acronym-list fragments): topics include The Open Group Architecture Framework (TOGAF) Version 8.1, ATAM evaluation (SEI), quality attribute UML extensions, the DoDAF, the Object Management Group (OMG), architectural tactics, service-oriented architecture (SOA), and systems of systems (SoS).

  9. Architecture for Payload Planning System (PPS) Software Distribution

    NASA Technical Reports Server (NTRS)

    Howell, Eric; Hagopian, Jeff

    1995-01-01

    The complex and diverse nature of the payload operations to be performed on the Space Station requires a robust and flexible planning approach, and the proper software tools to support that approach. To date, the planning software for most manned operations in space has been utilized in a centralized planning environment. Centralized planning is characterized by the following: performed by a small team of people, performed at a single location, and performed using single-user planning systems. This approach, while valid for short duration flights, is not conducive to the long duration and highly distributed payload operations environment of the Space Station. The Payload Planning System (PPS) is being designed specifically to support the planning needs of the large number of geographically distributed users of the Space Station. This paper provides a general description of the distributed planning architecture that PPS must support and describes the concepts proposed for making PPS available to the Space Station payload user community.

  10. Adaptive software architecture based on confident HCI for the deployment of sensitive services in Smart Homes.

    PubMed

    Vega-Barbas, Mario; Pau, Iván; Martín-Ruiz, María Luisa; Seoane, Fernando

    2015-03-25

    Smart spaces foster the development of natural and appropriate forms of human-computer interaction by taking advantage of home customization. The interaction potential of the Smart Home, which is a special type of smart space, is of particular interest in fields in which the acceptance of new technologies is limited and restrictive. The integration of smart home design patterns with sensitive solutions can increase user acceptance. In this paper, we present the main challenges that have been identified in the literature for the successful deployment of sensitive services (e.g., telemedicine and assistive services) in smart spaces and a software architecture that models the functionalities of a Smart Home platform that are required to maintain and support such sensitive services. This architecture emphasizes user interaction as a key concept to facilitate the acceptance of sensitive services by end-users and utilizes activity theory to support its innovative design. The application of activity theory to the architecture eases the handling of novel concepts, such as understanding of the system by patients at home or the affordability of assistive services. Finally, we provide a proof-of-concept implementation of the architecture and compare the results with other architectures from the literature.

  11. Adaptive Software Architecture Based on Confident HCI for the Deployment of Sensitive Services in Smart Homes

    PubMed Central

    Vega-Barbas, Mario; Pau, Iván; Martín-Ruiz, María Luisa; Seoane, Fernando

    2015-01-01

    Smart spaces foster the development of natural and appropriate forms of human-computer interaction by taking advantage of home customization. The interaction potential of the Smart Home, which is a special type of smart space, is of particular interest in fields in which the acceptance of new technologies is limited and restrictive. The integration of smart home design patterns with sensitive solutions can increase user acceptance. In this paper, we present the main challenges that have been identified in the literature for the successful deployment of sensitive services (e.g., telemedicine and assistive services) in smart spaces and a software architecture that models the functionalities of a Smart Home platform that are required to maintain and support such sensitive services. This architecture emphasizes user interaction as a key concept to facilitate the acceptance of sensitive services by end-users and utilizes activity theory to support its innovative design. The application of activity theory to the architecture eases the handling of novel concepts, such as understanding of the system by patients at home or the affordability of assistive services. Finally, we provide a proof-of-concept implementation of the architecture and compare the results with other architectures from the literature. PMID:25815449

  12. Human-Centered Software Engineering: Software Engineering Architectures, Patterns, and Models for Human Computer Interaction

    NASA Astrophysics Data System (ADS)

    Seffah, Ahmed; Vanderdonckt, Jean; Desmarais, Michel C.

    The Computer-Human Interaction and Software Engineering (CHISE) series of edited volumes originated from a number of workshops and discussions over the latest research and developments in the field of Human Computer Interaction (HCI) and Software Engineering (SE) integration, convergence and cross-pollination. A first volume in this series (CHISE Volume I - Human-Centered Software Engineering: Integrating Usability in the Development Lifecycle) aims at bridging the gap between the field of SE and HCI, and addresses specifically the concerns of integrating usability and user-centered systems design methods and tools into the software development lifecycle and practices. This has been done by defining techniques, tools and practices that can fit into the entire software engineering lifecycle as well as by defining ways of addressing the knowledge and skills needed, and the attitudes and basic values that a user-centered development methodology requires. The first volume has been edited as Vol. 8 in the Springer HCI Series (Seffah, Gulliksen and Desmarais, 2005).

  13. A Novel Software Architecture for the Provision of Context-Aware Semantic Transport Information

    PubMed Central

    Moreno, Asier; Perallos, Asier; López-de-Ipiña, Diego; Onieva, Enrique; Salaberria, Itziar; Masegosa, Antonio D.

    2015-01-01

    The effectiveness of Intelligent Transportation Systems depends largely on the ability to integrate information from diverse sources and the suitability of this information for the specific user. This paper describes a new approach for the management and exchange of this information, related to multimodal transportation. A novel software architecture is presented, with particular emphasis on the design of the data model and the enablement of services for information retrieval, thereby obtaining a semantic model for the representation of transport information. The publication of transport data as semantic information is established through the development of a Multimodal Transport Ontology (MTO) and the design of a distributed architecture allowing dynamic integration of transport data. The advantages afforded by the proposed system due to the use of Linked Open Data and a distributed architecture are stated, comparing it with other existing solutions. The adequacy of the information generated in regard to the specific user’s context is also addressed. Finally, a working solution of a semantic trip planner using actual transport data and running on the proposed architecture is presented, as a demonstration and validation of the system. PMID:26016915

  14. A novel software architecture for the provision of context-aware semantic transport information.

    PubMed

    Moreno, Asier; Perallos, Asier; López-de-Ipiña, Diego; Onieva, Enrique; Salaberria, Itziar; Masegosa, Antonio D

    2015-05-26

    The effectiveness of Intelligent Transportation Systems depends largely on the ability to integrate information from diverse sources and the suitability of this information for the specific user. This paper describes a new approach for the management and exchange of this information, related to multimodal transportation. A novel software architecture is presented, with particular emphasis on the design of the data model and the enablement of services for information retrieval, thereby obtaining a semantic model for the representation of transport information. The publication of transport data as semantic information is established through the development of a Multimodal Transport Ontology (MTO) and the design of a distributed architecture allowing dynamic integration of transport data. The advantages afforded by the proposed system due to the use of Linked Open Data and a distributed architecture are stated, comparing it with other existing solutions. The adequacy of the information generated in regard to the specific user's context is also addressed. Finally, a working solution of a semantic trip planner using actual transport data and running on the proposed architecture is presented, as a demonstration and validation of the system.

  15. Integrated Network Architecture for NASA's Orion Missions

    NASA Technical Reports Server (NTRS)

    Bhasin, Kul B.; Hayden, Jeffrey L.; Sartwell, Thomas; Miller, Ronald A.; Hudiburg, John J.

    2008-01-01

    NASA is planning a series of short and long duration human and robotic missions to explore the Moon and then Mars. The series of missions will begin with a new crew exploration vehicle (called Orion) that will initially provide crew exchange and cargo supply support to the International Space Station (ISS) and then become a human conveyance for travel to the Moon. The Orion vehicle will be mounted atop the Ares I launch vehicle for a series of pre-launch tests and then launched and inserted into low Earth orbit (LEO) for crew exchange missions to the ISS. The Orion and Ares I comprise the initial vehicles in the Constellation system of systems that later includes Ares V, Earth departure stage, lunar lander, and other lunar surface systems for the lunar exploration missions. These key systems will enable the lunar surface exploration missions to be initiated in 2018. The complexity of the Constellation system of systems and missions will require a communication and navigation infrastructure to provide low and high rate forward and return communication services, tracking services, and ground network services. The infrastructure must provide robust, reliable, safe, sustainable, and autonomous operations at minimum cost while maximizing the exploration capabilities and science return. The infrastructure will be based on a network of networks architecture that will integrate NASA legacy communication, modified elements, and navigation systems. New networks will be added to extend communication, navigation, and timing services for the Moon missions. Internet protocol (IP) and network management systems within the networks will enable interoperability throughout the Constellation system of systems. An integrated network architecture has been developed based on the emerging Constellation requirements for Orion missions. The architecture, as presented in this paper, addresses the early Orion missions to the ISS with communication, navigation, and network services over five

  16. Architecture for Integrated Medical Model Dynamic Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Jaworske, D. A.; Myers, J. G.; Goodenow, D.; Young, M.; Arellano, J. D.

    2016-01-01

    Probabilistic Risk Assessment (PRA) is a modeling tool used to predict potential outcomes of a complex system based on a statistical understanding of many initiating events. Utilizing a Monte Carlo method, thousands of instances of the model are considered and outcomes are collected. PRA is considered static, utilizing probabilities alone to calculate outcomes. Dynamic Probabilistic Risk Assessment (dPRA) is an advanced concept where modeling predicts the outcomes of a complex system based not only on the probabilities of many initiating events, but also on a progression of dependencies brought about by progressing down a time line. Events are placed in a single time line, adding each event to a queue, as managed by a planner. Progression down the time line is guided by rules, as managed by a scheduler. The recently developed Integrated Medical Model (IMM) summarizes astronaut health as governed by the probabilities of medical events and mitigation strategies. Managing the software architecture process provides a systematic means of creating, documenting, and communicating a software design early in the development process. The software architecture process begins with establishing requirements and the design is then derived from the requirements.
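
    The planner, scheduler, and time-line mechanics can be illustrated with a toy Monte Carlo sketch. The event name, daily event probability, and treatment success probability below are invented for illustration and are not IMM parameters; the point is only that events are queued on a time line, processed in time order, and aggregated over many runs into an outcome probability.

      import heapq
      import random

      def one_mission(duration_days=180, daily_event_p=0.01, treat_success_p=0.9):
          timeline = []                           # priority queue ordered by mission day
          for day in range(duration_days):        # "planner": place candidate events on the time line
              if random.random() < daily_event_p:
                  heapq.heappush(timeline, (day, "medical_event"))
          unresolved = 0
          while timeline:                         # "scheduler": process events in time order
              _, event = heapq.heappop(timeline)
              if random.random() > treat_success_p:
                  unresolved += 1                 # mitigation failed for this event
          return unresolved

      random.seed(0)
      runs = 10_000
      p_any_unresolved = sum(one_mission() > 0 for _ in range(runs)) / runs
      print(f"estimated probability of at least one unresolved event: {p_any_unresolved:.3f}")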

  17. Universal discrete Fourier optics RF photonic integrated circuit architecture.

    PubMed

    Hall, Trevor J; Hasan, Mehedi

    2016-04-04

    This paper describes a coherent electro-optic circuit architecture that generates a frequency comb consisting of N spatially separated orders using a generalised Mach-Zehnder interferometer (MZI) with its N × 1 combiner replaced by an optical N × N Discrete Fourier Transform (DFT). Advantage may be taken of the tight optical path-length control, component and circuit symmetries and emerging trimming algorithms offered by photonic integration in any platform that offers linear electro-optic phase modulation such as LiNbO3, silicon, III-V or hybrid technology. The circuit architecture subsumes all MZI-based RF photonic circuit architectures in the prior art given an appropriate choice of output port(s) and dimension N, although the principal application envisaged is phase-correlated subcarrier generation for all-optical orthogonal frequency division multiplexing. A transfer matrix approach is used to model the operation of the architecture. The predictions of the model are validated by simulations performed using an industry standard software tool. Implementation is found to be practical.
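
    Assuming NumPy, the transfer-matrix behaviour of the DFT combiner can be checked in a few lines: when the per-arm phase increment equals 2πk/N, the unitary N × N DFT steers essentially all of the optical power to output port k, which is how the comb orders become spatially separated.

      import numpy as np

      # Transfer-matrix sketch of the DFT-combiner idea (illustrative, not the paper's own code):
      # light split over N arms picks up a per-arm phase n*phi; the N x N DFT then routes the
      # field to output port k whenever phi = 2*pi*k/N.
      N = 8
      n = np.arange(N)
      dft = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)   # unitary DFT matrix

      for k in (1, 3):
          phi = 2 * np.pi * k / N                    # drive phase corresponding to order k
          arms = np.exp(1j * n * phi) / np.sqrt(N)   # field in the N interferometer arms
          out = dft @ arms
          print(f"order {k}: power per port =", np.round(np.abs(out) ** 2, 3))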

  18. Beyond the Renderer: Software Architecture for Parallel Graphics and Visualization

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1996-01-01

    As numerous implementations have demonstrated, software-based parallel rendering is an effective way to obtain the needed computational power for a variety of challenging applications in computer graphics and scientific visualization. To fully realize their potential, however, parallel renderers need to be integrated into a complete environment for generating, manipulating, and delivering visual data. We examine the structure and components of such an environment, including the programming and user interfaces, rendering engines, and image delivery systems. We consider some of the constraints imposed by real-world applications and discuss the problems and issues involved in bringing parallel rendering out of the lab and into production.

  19. Software Architecture of the NASA Shuttle Ground Operations Simulator - SGOS

    NASA Technical Reports Server (NTRS)

    Cook, Robert P.; Lostroscio, Charles T.

    2005-01-01

    The SGOS executive and its subsystems have been an integral component of the Shuttle Launch Safety Program for almost thirty years. It is usable (via the LAN) by over 2000 NASA employees at the Kennedy Space Center and 11,000 contractors. SGOS supports over 800 models comprised of several hundred thousand lines of code and over 1,000 MCP procedures. Yet neither language has a for loop!! The simulation software described in this paper is used to train ground controllers and to certify launch countdown readiness.

  20. Integrated Operations Architecture Technology Assessment Study

    NASA Technical Reports Server (NTRS)

    2001-01-01

    As part of NASA's Integrated Operations Architecture (IOA) Baseline, NASA will consolidate all communications operations, including ground-based, near-earth, and deep-space communications, into a single integrated network. This network will make maximum use of commercial equipment, services and standards. It will be an Internet Protocol (IP) based network. This study supports technology development planning for the IOA. The technical problems that may arise when LEO mission spacecraft interoperate with commercial satellite services were investigated. Commercial technology and services that could support the IOA were surveyed, and gaps in the capability of existing technology and techniques were identified. Recommendations were made on which gaps should be closed by means of NASA research and development funding. Several findings emerged from the interoperability assessment: in the NASA mission set, there is a preponderance of small, inexpensive, low data rate science missions; proposed commercial satellite communications services could potentially provide TDRSS-like data relay functions; and IP and related protocols, such as TCP, require augmentation to operate in the mobile networking environment required by the space-to-ground portion of the IOA. Five case studies were performed in the technology assessment. Each case represented a realistic implementation of the near-earth portion of the IOA. The cases included the use of frequencies at L-band, Ka-band and the optical spectrum. The cases also represented both space relay architectures and direct-to-ground architectures. Some of the main recommendations resulting from the case studies are: select an architecture for the LEO/MEO communications network; pursue the development of a Ka-band space-qualified transmitter (and possibly a receiver), and a low-cost Ka-band ground terminal for a direct-to-ground network; and pursue the development of an Inmarsat (L-band) space-qualified transceiver to implement a global, low

  1. Achieving Better Buying Power through Acquisition of Open Architecture Software Systems for Web and Mobile Devices

    DTIC Science & Technology

    2016-02-22

    Report front matter excerpt (Acquisition Research Program Sponsored Report Series, Naval Postgraduate School). Executive summary fragment: "Many people within large enterprises rely on up to four Web-based or mobile devices for their ..."

  2. Software architecture standard for simulation virtual machine, version 2.0

    NASA Technical Reports Server (NTRS)

    Sturtevant, Robert; Wessale, William

    1994-01-01

    The Simulation Virtual Machine (SVM) is an Ada architecture which eases the effort involved in real-time software maintenance and sustaining engineering. The Software Architecture Standard defines the infrastructure from which all the simulation models are built. SVM was developed for and used in the Space Station Verification and Training Facility.

  3. Software Defined Radio Standard Architecture and its Application to NASA Space Missions

    NASA Technical Reports Server (NTRS)

    Andro, Monty; Reinhart, Richard C.

    2006-01-01

    A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide Standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture and other aspects is considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing and software maintenance. While software defined radios offer greater flexibility, they also pose challenges for radio development in the space environment in terms of size, mass, power consumption and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, the industry, academia, and standards bodies is key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk will describe NASA's current effort to investigate SDR applications to space missions and a brief overview of a candidate architecture under consideration for space based platforms.

  4. On the Role of Connectors in Modeling and Implementing Software Architectures

    DTIC Science & Technology

    1998-02-15

    Oreizy, Peyman; Rosenblum, David S.; Taylor, Richard N. (report documentation page only; no abstract text recovered)

  5. Integrated architectures for a horticultural application

    NASA Astrophysics Data System (ADS)

    Spooner, Natalie R.; Rodrigo, T. Surangi

    1998-10-01

    For many applications, which involve the processing and handling of highly variable natural products, conventional automation techniques are inadequate. Field applications involving the processing and handling of these products have the additional complication of dealing with a dynamically changing environment. Automated systems for these applications must be capable of sensing the variability of each product item and adjusting the way each product item is processed to accommodate that variability. For automation to be feasible, both fast processing of sensor information and fast determination of how product items are handled, is vital. The combination of sensor equipped mobile robotic systems with artificial intelligence techniques is a potential solution for the automation of many of these applications. The aim of this research is to develop a software architecture which incorporates robotic task planning and control for a variety of applications involving the processing of naturally varying products. In this paper we discuss the results from the initial laboratory trials for an asparagus harvesting application.

  6. Integrating interface slicing into software engineering processes

    NASA Technical Reports Server (NTRS)

    Beck, Jon

    1993-01-01

    Interface slicing is a tool which was developed to facilitate software engineering. As previously presented, it was described in terms of its techniques and mechanisms. The integration of interface slicing into specific software engineering activities is considered by discussing a number of potential applications of interface slicing. The applications discussed specifically address the problems, issues, or concerns raised in a previous project. Because a complete interface slicer is still under development, these applications must be phrased in future tenses. Nonetheless, the interface slicing techniques which were presented can be implemented using current compiler and static analysis technology. Whether implemented as a standalone tool or as a module in an integrated development or reverse engineering environment, they require analysis no more complex than that required for current system development environments. By contrast, conventional slicing is a methodology which, while showing much promise and intuitive appeal, has yet to be fully implemented in a production language environment despite 12 years of development.

  7. An Audio Architecture Integrating Sound and Live Voice for Virtual Environments

    NASA Astrophysics Data System (ADS)

    Krebs, Eric M.

    2002-09-01

    The purpose behind this thesis was to design and implement audio system architecture, both in hardware and in software, for use in virtual environments. The hardware and software design requirements were aimed at implementing acoustical models, such as reverberation and occlusion, and live audio streaming to any simulation employing this architecture. Several free or open-source sound APIs were evaluated, and DirectSound3D was selected as the core component of the audio architecture. Creative Technology Ltd. Environmental Audio Extensions (EAX 3.0) were integrated into the architecture to provide environmental effects such as reverberation, occlusion, obstruction, and exclusion. Voice over IP (VoIP) technology was evaluated to provide live, streaming voice to any virtual environment. DirectVoice was selected as the voice component of the VoIP architecture due to its integration with DirectSound3D. However, extremely high latency considerations with DirectVoice, and any other VoIP application or software, required further research into alternative live voice architectures for inclusion in virtual environments. Ausim3D's GoldServe Audio System was evaluated and integrated into the hardware component of the audio architecture to provide an extremely low-latency, live, streaming voice capability.

  8. Experimenting with an Evolving Ground/Space-based Software Architecture to Enable Sensor Webs

    NASA Technical Reports Server (NTRS)

    mandl, Daniel; Frye, Stuart

    2005-01-01

    A series of ongoing experiments are being conducted at the NASA Goddard Space Flight Center to explore integrated ground and space-based software architectures enabling sensor webs. A sensor web, as defined by Steve Talabac at NASA Goddard Space Flight Center (GSFC), is a coherent set of distributed nodes interconnected by a communications fabric, that collectively behave as a single, dynamically adaptive, observing system. The nodes can be comprised of satellites, ground instruments, computing nodes, etc. Sensor web capability requires autonomous management of constellation resources. This becomes progressively more important as more and more satellites share resources, such as communication channels and ground stations, while automatically coordinating their activities. There have been five ongoing activities, which include an effort to standardize a set of middleware. This paper will describe one set of activities using the Earth Observing 1 satellite, which used a variety of ground and flight software along with other satellites and ground sensors to prototype a sensor web. This activity allowed us to explore the difficulties that occur in the assembly of sensor webs given today's technology. We will present an overview of the software system architecture, some key experiments and lessons learned to facilitate better sensor webs in the future.

  9. Integration and Testing of LCS Software

    NASA Technical Reports Server (NTRS)

    Wang, John

    2014-01-01

    Kennedy Space Center is in the midst of developing a command and control system for the launch of the next generation manned space vehicle. The Space Launch System (SLS) will launch using the new Spaceport Command and Control System (SCCS). As a member of the Software Integration and Test (SWIT) Team, the author wrote command scripts and bash scripts to assist in integration and testing of the Launch Control System (LCS), which is a component of SCCS. The short-term and midterm tasks are for the most part completed. The long-term tasks, if time permits, will require a presentation and demonstration.

  11. Software architecture for intelligent image processing using Prolog

    NASA Astrophysics Data System (ADS)

    Jones, Andrew C.; Batchelor, Bruce G.

    1994-10-01

    We describe a prototype system for interactive image processing using Prolog, implemented by the first author on an Apple Macintosh computer. This system is inspired by Prolog+, but differs from it in two particularly important respects. The first is that whereas Prolog+ assumes the availability of dedicated image processing hardware, with which the Prolog system communicates, our present system implements image processing functions in software using the C programming language. The second difference is that although our present system supports Prolog+ commands, these are implemented in terms of lower-level Prolog predicates which provide a more flexible approach to image manipulation. We discuss the impact of the Apple Macintosh operating system upon the implementation of the image-processing functions, and the interface between these functions and the Prolog system. We also explain how the Prolog+ commands have been implemented. The system described in this paper is a fairly early prototype, and we outline how we intend to develop the system, a task which is expedited by the extensible architecture we have implemented.
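
    As a loose analogy only (the system above is written in Prolog and C, and all names here are hypothetical), the Python sketch below shows the layering idea: a high-level command implemented purely in terms of lower-level image-manipulation primitives.

        # Illustrative layering, analogous to implementing Prolog+ commands in
        # terms of lower-level Prolog predicates: the high-level command calls
        # only the low-level primitives.
        def threshold(image, level):
            # Low-level primitive: binarize a 2-D list of grey levels.
            return [[1 if px >= level else 0 for px in row] for row in image]

        def count_set_pixels(binary):
            # Low-level primitive: count pixels that are "on".
            return sum(sum(row) for row in binary)

        def bright_area(image, level=128):
            # High-level command built from the primitives above.
            return count_set_pixels(threshold(image, level))

        print(bright_area([[0, 200], [130, 90]]))  # 2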

  12. Intelligent Software Agents: Sensor Integration and Response

    SciTech Connect

    Kulesz, James J; Lee, Ronald W

    2013-01-01

    In a post-Macondo world the buzzwords are Integrity Management and Incident Response Management. The twin processes are not new, but the opportunity to link the two is novel. Intelligent software agents can be used with sensor networks in distributed and centralized computing systems to enhance real-time monitoring of system integrity as well as manage the follow-on incident response to changing, and potentially hazardous, environmental conditions. The software components are embedded at the sensor network nodes in surveillance systems used for monitoring unusual events. When an event occurs, the software agents establish a new concept of operation at the sensing node, post the event status to a blackboard for software agents at other nodes to see, and then react quickly and efficiently to monitor the scale of the event. The technology addresses a current challenge in sensor networks that prevents a rapid and efficient response when a sensor measurement indicates that an event has occurred. By using intelligent software agents - which can be stationary or mobile, interact socially, and adapt to changing situations - the technology offers features that are particularly important when systems need to adapt to active circumstances. For example, when a release is detected, the local software agent collaborates with other agents at the node to exercise the appropriate operation, such as: targeted detection, increased detection frequency, decreased detection frequency for other non-alarming sensors, and determination of environmental conditions so that adjacent nodes can be informed that an event is occurring and when it will arrive. The software agents at the nodes can also post the data in a targeted manner, so that agents at other nodes and the command center can exercise appropriate operations to recalibrate the overall sensor network and associated intelligence systems. The paper describes the concepts and provides examples of real-world implementations.
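
    A minimal sketch of the pattern described above, with all class and parameter names assumed for illustration: a local agent posts an alarm to a shared blackboard, and a neighbouring node's agent reacts by raising its own sampling rate.

        # Hypothetical sketch: when a reading exceeds a threshold, the local
        # agent posts an event to a shared blackboard; agents at other nodes
        # poll the blackboard and increase their detection frequency.
        class Blackboard:
            def __init__(self):
                self.events = []
            def post(self, event):
                self.events.append(event)
            def latest(self):
                return self.events[-1] if self.events else None

        class NodeAgent:
            def __init__(self, name, blackboard, period_s=60):
                self.name, self.bb, self.period_s = name, blackboard, period_s
            def sense(self, reading, alarm_threshold=10.0):
                if reading > alarm_threshold:
                    self.bb.post({"node": self.name, "reading": reading})
            def react(self):
                # Neighbours see the posted event and sample more often.
                if self.bb.latest() is not None:
                    self.period_s = max(5, self.period_s // 4)

        bb = Blackboard()
        north, south = NodeAgent("north", bb), NodeAgent("south", bb)
        north.sense(42.0)   # north detects a release and posts the event
        south.react()       # south adapts its own concept of operation
        print(south.period_s)  # 15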

  13. A Proactive Means for Incorporating a Software Architecture Evaluation in a DoD System Acquisition

    DTIC Science & Technology

    2009-07-01

    A Proactive Means for Incorporating a Software Architecture Evaluation in a DoD System Acquisition John K. Bergey July 2009 TECHNICAL...www.sei.cmu.edu/publications/documents/06.reports/06tr012.html. [ Bergey 2001] Bergey , J. & Fisher, M. Use of the ATAM in the Acquisition of Software...publications/documents/01.reports/01tn009.html. [ Bergey 2005] Bergey , John K. & Clements, Paul C. Software Architecture in DoD Acquisition: An Approach and

  14. Requirements for an Integrated UAS CNS Architecture

    NASA Technical Reports Server (NTRS)

    Templin, Fred; Jain, Raj; Sheffield, Greg; Taboso, Pedro; Ponchak, Denise

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) is investigating revolutionary and advanced universal, reliable, always available, cyber secure and affordable Communication, Navigation, Surveillance (CNS) options for all altitudes of UAS operations. In Spring 2015, NASA issued a Call for Proposals under NASA Research Announcements (NRA) NNH15ZEA001N, Amendment 7 Subtopic 2.4. Boeing was selected to conduct a study with the objective to determine the most promising candidate technologies for Unmanned Air Systems (UAS) air-to-air and air-to-ground data exchange and analyze their suitability in a post-NextGen NAS environment. The overall objectives are to develop UAS CNS requirements and then develop architectures that satisfy the requirements for UAS in both controlled and uncontrolled air space. This contract is funded under NASA's Aeronautics Research Mission Directorate's (ARMD) Aviation Operations and Safety Program (AOSP) Safe Autonomous Systems Operations (SASO) project and proposes technologies for the Unmanned Air Systems Traffic Management (UTM) service. Communications, Navigation and Surveillance (CNS) requirements must be developed in order to establish a CNS architecture supporting Unmanned Air Systems integration in the National Air Space (UAS in the NAS). These requirements must address cybersecurity, future communications, satellite-based navigation APNT, and scalable surveillance and situational awareness. CNS integration, consolidation and miniaturization requirements are also important to support the explosive growth in small UAS deployment. Air Traffic Management (ATM) must also be accommodated to support critical Command and Control (C2) for Air Traffic Controllers (ATC). This document therefore presents UAS CNS requirements that will guide the architecture.

  15. Agile Development & Software Architecture - Crossing the Great Divide

    DTIC Science & Technology

    2010-04-22

    University What is Architecture? Structure A Thematic Analysis System Qualities Decisions / Governance Multi-Dimensional SEI IEEE TOGAF Rozanski & Woods 12...TWITTER Hashtag #seiwebinar Crossing the Great Divide Brown , 4/22/2010 © 2010 Carnegie Mellon University Architectural Themes SEI IEEE TOGAF Rozanski...2010 © 2010 Carnegie Mellon University Structure System Qualities Decisions / Governance Multi-Dimensional Architectural Themes SEI IEEE TOGAF Rozanski

  16. Semi-automated software service integration in virtual organisations

    NASA Astrophysics Data System (ADS)

    Afsarmanesh, Hamideh; Sargolzaei, Mahdi; Shadi, Mahdieh

    2015-08-01

    To enhance their business opportunities, organisations involved in many service industries are increasingly active in pursuit of both online provision of their business services (BSs) and collaborating with others. Collaborative Networks (CNs) in service industry sector, however, face many challenges related to sharing and integration of their collection of provided BSs and their corresponding software services. Therefore, the topic of service interoperability for which this article introduces a framework is gaining momentum in research for supporting CNs. It contributes to generation of formal machine readable specification for business processes, aimed at providing their unambiguous definitions, as needed for developing their equivalent software services. The framework provides a model and implementation architecture for discovery and composition of shared services, to support the semi-automated development of integrated value-added services. In support of service discovery, a main contribution of this research is the formal representation of services' behaviour and applying desired service behaviour specified by users for automated matchmaking with other existing services. Furthermore, to support service integration, mechanisms are developed for automated selection of the most suitable service(s) according to a number of service quality aspects. Two scenario cases are presented, which exemplify several specific features related to service discovery and service integration aspects.
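
    The two steps described above, behavioural matchmaking followed by quality-based selection, can be pictured with the simplified Python sketch below; the matching criterion, quality aspects, and weights are illustrative assumptions, not the article's formal model.

        # Hypothetical sketch: (1) discover services whose declared behaviour
        # covers the operations the user requires, then (2) rank the matches by
        # weighted quality-of-service aspects and pick the best one.
        def matches(desired_ops, service):
            # Behavioural match reduced to "offers every required operation".
            return desired_ops <= set(service["operations"])

        def rank(candidates, weights):
            def score(svc):
                return sum(weights[q] * svc["quality"][q] for q in weights)
            return sorted(candidates, key=score, reverse=True)

        services = [
            {"name": "ShipFast", "operations": {"quote", "book"},
             "quality": {"availability": 0.99, "cost": 0.4}},
            {"name": "ShipCheap", "operations": {"quote", "book", "track"},
             "quality": {"availability": 0.95, "cost": 0.9}},
        ]
        wanted = {"quote", "book"}
        hits = [s for s in services if matches(wanted, s)]
        best = rank(hits, {"availability": 0.7, "cost": 0.3})[0]
        print(best["name"])  # ShipCheap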

  17. Using UML Modeling to Facilitate Three-Tier Architecture Projects in Software Engineering Courses

    ERIC Educational Resources Information Center

    Mitra, Sandeep

    2014-01-01

    This article presents the use of a model-centric approach to facilitate software development projects conforming to the three-tier architecture in undergraduate software engineering courses. Many instructors intend that such projects create software applications for use by real-world customers. While it is important that the first version of these…

  19. Video signals integrator (VSI) system architecture

    NASA Astrophysics Data System (ADS)

    Kasprowicz, Grzegorz; Pastuszak, Grzegorz; Poźniak, Krzysztof; Trochimiuk, Maciej; Abramowski, Andrzej; Gaska, Michal; Bukowiecka, Danuta; Tyburska, Agata; Struniawski, Jarosław; Jastrzebski, Pawel; Jewartowski, Blazej; Frasunek, Przemysław; Nalbach-Moszynska, Małgorzata; Brawata, Sebastian; Bubak, Iwona; Gloza, Małgorzata

    2016-09-01

    The purpose of the project is the development of a platform which integrates video signals from many sources. The signals can be sourced by existing analogue CCTV surveillance installations, recent internet-protocol (IP) cameras, or single cameras of any type. The system will consist of portable devices that provide conversion, encoding, transmission, and archiving. The sharing subsystem will use a distributed file system and a user console which provides simultaneous access to any of the video streams in real time. The system is fully modular, so it can be extended on both the hardware and software sides. Due to the standard modular technology used, partial technology modernization is also possible during a long exploitation period.

  20. Method for critical software event execution reliability in high integrity software

    SciTech Connect

    Kidd, M.E.

    1997-11-01

    This report contains viewgraphs on a method called SEER, which provides a high level of confidence that critical software-driven event execution sequences faithfully execute in the face of transient computer architecture failures in both normal and abnormal operating environments.

  1. An Integrated Architecture for Onboard Spacecraft

    NASA Technical Reports Server (NTRS)

    Figueiredo, Marco A.; Stakem, Patrick H.; Flatley, Thomas P.; Hines, Tonjua M.

    1999-01-01

    As increasingly complex scientific and environmental observation spacecraft are deployed, the burden on the downlink assets, and on ground-based systems' complexity and cost, is becoming a major problem. Already, the limitations of communications bandwidth and processing throughput limit the science data gathering, both in volume and in rate. This poses a dilemma to the scientist experimenter, forcing choices between data collection and bandwidth/processing/archiving. Advances in ground-based processing and space-to-Earth links have fallen behind the requirements for observation data, at increasing rates, over the last few decades. As NASA reaches its 40th anniversary, the ability to observe and capture phenomena of theoretical and practical interest to life on Earth far outstrips the ability to transfer, process, or store these data. NASA recognizes the need to invest in technological advancements that will enable both the space and ground systems to address the limitations. Spacecraft onboard computing power is a clear one. The capability of creating data products onboard the spacecraft adds a new level of flexibility to address the more demanding observation needs. Current spacecraft computing power is limited and incapable of addressing the needs of the new generation of observation satellites because extensive onboard data processing is required. Traditional spacecraft architectures only collect, package, and transmit to Earth the data acquired by multiple instruments. Conversely, the experience of developing ground data systems shows the need for high-performance computing systems to process and create information from the instrumentation data. The expectation is that supercomputing technology is required to enable spacecraft to create information onboard. Moving supercomputing capability onboard spacecraft requires an approach that considers an integrated data architecture. Otherwise, it may simply convert a compute-bound problem into a communications-bound one.

  3. Reference Architecture Test-Bed for Avionics (RASTA): A Software Building Blocks Overview

    NASA Astrophysics Data System (ADS)

    Viana Sanchez, Aitor; Taylor, Chris

    2010-08-01

    This paper presents an overview of the Reference Architecture System Test-bed for Avionics (RASTA) being developed within the ESA ESTEC Data Systems Division. This activity aims to benefit from interface standardization to provide a hardware/software reference infrastructure into which incoming R&D activities can be integrated, thus providing a generic but standardized test and development environment rather than dedicated facilities for each activity. RASTA is composed of both HW and SW building blocks constituting the main elements of a typical Data Handling System. This includes a core processor (LEON2), Telemetry and Telecommand links, digital interfaces, and mass memory. The range of digital serial interfaces includes CAN bus, MIL-STD-1553 and SpaceWire. The paper will focus on the software aspects of RASTA and in particular the software building blocks provided to ease development activities and allow hardware independence. To support the take-up of RASTA by European industry, all RASTA software developed internally by ESA is provided free under license. Significant outputs are already available and include: basic SW and SW drivers (CAN/1553/SpW, TT&C), an OS abstraction layer, a CFDP flight implementation, a highly portable and independent file system for space, and a ground segment telecommand/telemetry router. In the future, additional SW building blocks are planned (e.g. an ECSS CAN library). The present focus of RASTA is related to a prototype implementation of the SOIS services and protocols under development by the CCSDS (Consultative Committee for Space Data Systems).

  4. Formal Foundations for the Specification of Software Architecture.

    DTIC Science & Technology

    1995-03-01

    Architectures For- mally: A Case-Study Using KWIC." Kestrel Institute, Palo Alto, CA 94304, April 1994. 58. Kang, Kyo C. Feature-Oriented Domain Analysis ( FODA ...6.3.5 Constraint-Based Architectures ................. 6-60 6.4 Summary ......... ............................. 6-63 VII. Analysis of Process-Based...between these architec- ture theories were investigated. A feasibility analysis on an image processing application demonstrated that architecture theories

  5. Integrated software package STAMP for minor planets

    NASA Technical Reports Server (NTRS)

    Kochetova, O. M.; Shor, Viktor A.

    1992-01-01

    The integrated software package STAMP allowed for rapid and exact reproduction of the tables of the year-book 'Ephemerides of Minor Planets.' Additionally, STAMP solved the typical problems connected with the use of the year-book. STAMP is described. The year-book 'Ephemerides of Minor Planets' (EMP) is a publication used in many astronomical institutions around the world. It contains all the necessary information on the orbits of the numbered minor planets. Also, the astronomical coordinates are provided for each planet during its suitable observation period.

  6. Design of an integrated airframe/propulsion control system architecture

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C.; Lee, C. William; Strickland, Michael J.; Torkelson, Thomas C.

    1990-01-01

    The design of an integrated airframe/propulsion control system architecture is described. The design is based on a prevalidation methodology that uses both reliability and performance. A detailed account is given for the testing associated with a subset of the architecture and concludes with general observations of applying the methodology to the architecture.

  7. An architecture for robotic system integration

    NASA Astrophysics Data System (ADS)

    Butler, P. L.; Reister, D. B.; Gourley, C. S.; Thayer, S. M.

    An architecture was developed to provide an object-oriented framework for the integration of multiple robotic subsystems into a single integrated system. By using an object-oriented approach, all subsystems can interface with each other, and still be able to be customized for specific subsystem interface needs. The object-oriented framework allows the communications between subsystems to be hidden from the interface specification itself. Thus, system designers can concentrate on what the subsystems are to do, not how to communicate. This system was developed for the Environmental Restoration and Waste Management Decontamination and Decommissioning Project at Oak Ridge National Laboratory. In this system, multiple subsystems are defined to separate the functional units of the integrated system. For example, a Human-Machine Interface (HMI) subsystem handles the high-level machine coordination and subsystem status display. The HMI also provides status-logging facilities and safety facilities for use by the remaining subsystems. Other subsystems have been developed to provide specific functionality, and many of these can be reused by other projects.
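
    A minimal Python sketch of the integration pattern described above, with hypothetical subsystem and bus names: every subsystem implements one common interface, and a simple bus hides the communication details so designers specify what a subsystem does rather than how it talks to the others.

        # Hypothetical sketch of object-oriented subsystem integration: each
        # subsystem registers with a bus and reacts to published messages
        # without knowing how the other subsystems communicate.
        from abc import ABC, abstractmethod

        class Bus:
            def __init__(self):
                self.subsystems = []
            def register(self, subsystem):
                self.subsystems.append(subsystem)
            def publish(self, message):
                for s in self.subsystems:
                    s.handle(message)

        class Subsystem(ABC):
            def __init__(self, bus):
                self.bus = bus
                bus.register(self)
            @abstractmethod
            def handle(self, message): ...

        class HMI(Subsystem):
            def handle(self, message):
                print(f"HMI status display: {message}")   # status logging / display

        class Manipulator(Subsystem):
            def handle(self, message):
                if message == "stop":
                    print("Manipulator halting")          # safety facility

        bus = Bus()
        HMI(bus)
        Manipulator(bus)
        bus.publish("stop")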

  8. Integrated Software Framework for Geophysical Data Processing

    NASA Astrophysics Data System (ADS)

    Chubak, G. D.; Morozov, I. B.

    2005-12-01

    An integrated software framework for geophysical data processing was designed by extending a seismic processing system developed previously. Unlike other systems, the new processing monitor is essentially content-agnostic, supports structured multicomponent seismic data streams and multidimensional data objects, and employs a unique backpropagation execution logic. This results in an unusual flexibility of processing, allowing the system to handle nearly any geophysical data. The core package includes nearly 190 tools for seismic, travel-time, and potential-field processing, and interfaces to popular graphics and other packages (such as Seismic Unix and GMT). The system also offers an extensive processing environment, including: 1) a modern and feature-rich Graphical User Interface allowing submission of processing jobs and interaction with them during run time; 2) parallel processing capabilities, including load distribution on Beowulf clusters or local area networks; 3) web service operation allowing submission of complex processing jobs to shared remote servers; 4) an automated software update service for code distribution to multiple systems; 5) automated online documentation; and 6) software development utilities. The core package was used in several areas of seismology (shallow, reflection, crustal wide-angle, and teleseismic) and in 3D potential-field processing. As a first example of its application, the new web service component (http://seisweb.usask.ca/SIA/ws.php) was used to build a library of processing examples, ranging from simple (UTM coordinate transformations or calculation of great-arc distances) to more complex (such as synthetic seismic modeling).

  9. Achieving Better Buying Power through Acquisition of Open Architecture Software Systems: Volume 1

    DTIC Science & Technology

    2016-01-06

    SPONSORED REPORT SERIES Achieving Better Buying Power through Acquisition of Open Architecture Software Systems Volume I 6 January 2016 Dr. Walt...software component costs and cost reduction opportunities within the acquisition life cycle of open architecture (OA) systems for Web-based and mobile...he served as general co-chair of the 8th IFIP International Conference on Open Source Systems. Dr. Walt Scacchi, Senior Research Scientist

  10. Autonomous Underwater Vehicle Control Coordination Using a Tri-Level Hybrid Software Architecture

    DTIC Science & Technology

    1996-01-01

    Tri-Level Hybrid Software Architecture A. J. Healey1, D. B. Marco, R. B. McGhee Autonomous Underwater Vehicle Laboratory Naval Postgraduate School...the software architecture to implement them is often composed of three levels for ease of segregation and development of functionality. An...the development of coastal environmental understanding, new technology is being aimed at using (semi)autonomous vehicles, requiring acoustic

  11. Coordinating space telescope operations in an integrated planning and scheduling architecture

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Smith, Stephen F.; Cesta, Amedeo; D'Aloisi, Daniela

    1992-01-01

    The Heuristic Scheduling Testbed System (HSTS), a software architecture for integrated planning and scheduling, is discussed. The architecture has been applied to the problem of generating observation schedules for the Hubble Space Telescope. This problem is representative of the class of problems that can be addressed: their complexity lies in the interaction of resource allocation and auxiliary task expansion. The architecture deals with this interaction by viewing planning and scheduling as two complementary aspects of the more general process of constructing behaviors of a dynamical system. The principal components of the software architecture are described, indicating how to model the structure and dynamics of a system, how to represent schedules at multiple levels of abstraction in the temporal database, and how the problem solving machinery operates. A scheduler for the detailed management of Hubble Space Telescope operations that has been developed within HSTS is described. Experimental performance results are given that indicate the utility and practicality of the approach.

  12. Information management architecture for an integrated computing environment for the Environmental Restoration Program. Environmental Restoration Program, Volume 3, Interim technical architecture

    SciTech Connect

    Not Available

    1994-09-01

    This third volume of the Information Management Architecture for an Integrated Computing Environment for the Environmental Restoration Program--the Interim Technical Architecture (TA) (referred to throughout the remainder of this document as the ER TA)--represents a key milestone in establishing a coordinated information management environment in which information initiatives can be pursued with the confidence that redundancy and inconsistencies will be held to a minimum. This architecture is intended to be used as a reference by anyone whose responsibilities include the acquisition or development of information technology for use by the ER Program. The interim ER TA provides technical guidance at three levels. At the highest level, the technical architecture provides an overall computing philosophy or direction. At this level, the guidance does not address specific technologies or products but addresses more general concepts, such as the use of open systems, modular architectures, graphical user interfaces, and architecture-based development. At the next level, the technical architecture provides specific information technology recommendations regarding a wide variety of specific technologies. These technologies include computing hardware, operating systems, communications software, database management software, application development software, and personal productivity software, among others. These recommendations range from the adoption of specific industry or Martin Marietta Energy Systems, Inc. (Energy Systems) standards to the specification of individual products. At the third level, the architecture provides guidance regarding implementation strategies for the recommended technologies that can be applied to individual projects and to the ER Program as a whole.

  13. A Generic Software Architecture for Deception-Based Intrusion Detection and Response Systems

    DTIC Science & Technology

    2003-03-01

    known as Aikido [37]. Michael et al. [2] proposed a high-level architecture for software decoys, shown in Figure II.3. The architecture is based on...14, no.3, pp. 54-62, 1999. [37] Westbrook, A., Ratti, O., Aikido and the Dynamic Sphere, Charles E. Tuttle Co., September 1994. [38] Ellison, R.J

  14. Agile Development and Software Architecture: Understanding Scale and Risk

    DTIC Science & Technology

    2012-04-26

    In a Scrum project environment, the architectural runway may be established during Sprint 0. Sprint 0 might have a longer duration than the rest of...architecture In its simplest instantiation, a Scrum development environment consists of: • a single co-located, cross-functional team • with skills...cause analysis: Typical problem 1 Symptom • Scrum teams spend almost all of their time fixing defects, and new feature development is continuously

  15. Achieving Better Buying Power through Acquisition of Open Architecture Software Systems. Volume 2 Understanding Open Architecture Software Systems: Licensing and Security Research and Recommendations

    DTIC Science & Technology

    analyzing software component costs and cost reduction opportunities within the acquisition life cycle of open architecture (OA) systems for Web -based and...development and evolution of OA systems at design-time, build-time, and release and run-time. Developing formal foundations for establishing

  16. A reference architecture for integrated EHR in Colombia.

    PubMed

    de la Cruz, Edgar; Lopez, Diego M; Uribe, Gustavo; Gonzalez, Carolina; Blobel, Bernd

    2011-01-01

    The implementation of national EHR infrastructures has to start by a detailed definition of the overall structure and behavior of the EHR system (system architecture). Architectures have to be open, scalable, flexible, user accepted and user friendly, trustworthy, and based on standards including terminologies and ontologies. The GCM provides an architectural framework created with the purpose of analyzing any kind of system, including EHR system architectures. The objective of this paper is to propose a reference architecture for the implementation of an integrated EHR in Colombia, based on the current state of system architectural models and EHR standards. The proposed EHR architecture defines a set of services (elements) and their interfaces to support the exchange of clinical documents, offering an open, scalable, flexible and semantically interoperable infrastructure. The architecture was tested in a pilot tele-consultation project in Colombia, where dental EHRs are exchanged.

  17. THE EPA MULTIMEDIA INTEGRATED MODELING SYSTEM SOFTWARE SUITE

    EPA Science Inventory

    The U.S. EPA is developing a Multimedia Integrated Modeling System (MIMS) framework that will provide a software infrastructure or environment to support constructing, composing, executing, and evaluating complex modeling studies. The framework will include (1) common software ...

  19. Hybridization of Architectural Styles for Integrated Enterprise Information Systems

    NASA Astrophysics Data System (ADS)

    Bagusyte, Lina; Lupeikiene, Audrone

    Current enterprise systems engineering theory does not provide adequate support for the development of information systems on demand; more precisely, such support is still taking shape. This chapter proposes the main architectural decisions that underlie the design of integrated enterprise information systems. It argues for extending service-oriented architecture by merging it with the component-based paradigm at the design stage and using connectors of different architectural styles. The suitability of the general-purpose language SysML for modeling integrated enterprise information system architectures is described and supporting arguments are presented.

  20. Modeling of a 3DTV service in the software-defined networking architecture

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2014-11-01

    In this article a newly developed concept for modeling a multimedia service offering stereoscopic motion imagery is presented. The proposed model is based on utilization of the Software-Defined Networking (SDN) architecture. A definition of a 3D television service spanning the SDN concept is identified, exposing the basic characteristics of a 3DTV service in a modern networking layout. Furthermore, exemplary functionalities of the proposed 3DTV model are depicted. It is indicated that modeling a 3DTV service in the Software-Defined Networking architecture leads to numerous improvements, especially in the flexibility of a service supporting heterogeneous end-user devices.

  1. Continuous Software Integration and Quality Control during Software Development

    NASA Astrophysics Data System (ADS)

    Ettl, M.; Neidhardt, A.; Brisken, W.; Dassing, R.

    2012-12-01

    Modern software has to be stable, portable, fast, and reliable. This requires a sophisticated infrastructure supporting and providing the developers with additional information about the state and the quality of the project. That is why we have created a centralized software repository, where the whole code-base is managed and version controlled on a centralized server. Based on this, a hierarchical build system has been developed where each project and its sub-projects can be compiled by simply calling the top-level Makefile. On top of this, a nightly build system has been created where the top-level Makefiles of each project are called every night. The results of the build, including the compiler warnings, are reported to the developers using generated HTML pages. In addition, all the source code is automatically checked using a static code analysis tool called "cppcheck". This tool produces warnings, similar to those of a compiler, but more pedantic. The reports of this analysis are translated to HTML and reported to the developers similar to the nightly builds. Armed with this information, the developers can discover issues in their projects at an early development stage. In combination, these measures reduce the number of possible issues in our software and ensure the quality of our projects at different development stages. These checks are also offered to the community. They are currently used within the DiFX software correlator project.
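
    As a hedged sketch of one such nightly step (the authors' actual scripts and report layout are not described in detail, so the paths, flags, and HTML format below are assumptions), a build script might run cppcheck over the source tree and turn the findings into a simple HTML page for the developers.

        # Hypothetical nightly-build step: run cppcheck on the source tree and
        # write its findings into a small HTML report.
        import html
        import subprocess

        def run_cppcheck(src_dir):
            # cppcheck prints its findings to stderr, one per line by default.
            result = subprocess.run(
                ["cppcheck", "--enable=all", "--quiet", src_dir],
                capture_output=True, text=True)
            return [line for line in result.stderr.splitlines() if line.strip()]

        def write_report(findings, path="cppcheck_report.html"):
            rows = "\n".join(f"<li>{html.escape(f)}</li>" for f in findings)
            with open(path, "w") as out:
                out.write("<html><body><h1>Nightly static analysis</h1>"
                          f"<ul>{rows}</ul></body></html>")

        if __name__ == "__main__":
            write_report(run_cppcheck("./src"))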

  2. On-Board Software Reference Architecture for Payloads

    NASA Astrophysics Data System (ADS)

    Bos, Victor; Trcka, Adam

    2015-09-01

    This abstract summarizes the On-board Reference Architecture for Payloads activity carried out by Space Systems Finland (SSF) and Evolving Systems Consulting (ESC) under ESA contract. At the time of writing, the activity is ongoing. This abstract discusses study objectives, related activities, study approach, achieved and anticipated results, and directions for future work.

  3. Study of a unified hardware and software fault-tolerant architecture

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan; Alger, Linda; Friend, Steven; Greeley, Gregory; Sacco, Stephen; Adams, Stuart

    1989-01-01

    A unified architectural concept, called the Fault Tolerant Processor Attached Processor (FTP-AP), that can tolerate hardware as well as software faults is proposed for applications requiring ultrareliable computation capability. An emulation of the FTP-AP architecture, consisting of a breadboard Motorola 68010-based quadruply redundant Fault Tolerant Processor, four VAX 750s as attached processors, and four versions of a transport aircraft yaw damper control law, is used as a testbed in the AIRLAB to examine a number of critical issues. Solutions of several basic problems associated with N-Version software are proposed and implemented on the testbed. This includes a confidence voter to resolve coincident errors in N-Version software. A reliability model of N-Version software that is based upon the recent understanding of software failure mechanisms is also developed. The basic FTP-AP architectural concept appears suitable for hosting N-Version application software while at the same time tolerating hardware failures. Architectural enhancements for greater efficiency, software reliability modeling, and N-Version issues that merit further research are identified.
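
    The following Python sketch illustrates N-Version voting in the spirit described above; it is a plain tolerance-based majority voter, not the confidence voter developed in the study, and all values are illustrative.

        # Hypothetical N-Version voter: each version computes the control value
        # independently; the voter returns the largest cluster of outputs that
        # agree within a tolerance and flags the lack of a majority.
        def vote(outputs, tolerance=1e-3):
            """Return (value, agreeing_count) for the largest agreeing cluster."""
            best_value, best_count = None, 0
            for candidate in outputs:
                agreeing = [o for o in outputs if abs(o - candidate) <= tolerance]
                if len(agreeing) > best_count:
                    best_value = sum(agreeing) / len(agreeing)
                    best_count = len(agreeing)
            return best_value, best_count

        versions = [0.512, 0.511, 0.900, 0.512]   # one version has diverged
        value, agreeing = vote(versions)
        if agreeing <= len(versions) // 2:
            raise RuntimeError("no majority: coincident errors suspected")
        print(round(value, 3), agreeing)          # 0.512 3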

  4. CompAction: Integrated compliance management software

    SciTech Connect

    Zipfel, J.M.

    1995-12-31

    CompAction is an integrated compliance management software tool for the solid waste disposal industry. The majority of environmental compliance software packages on the market allow users to access Federal and state regulations without increasing the usability of the information. By contrast, CompAction bridges the gap between regulatory requirements and the actions facilities must complete to ensure continued compliance. CompAction allows environmental compliance management personnel and consultants to schedule compliance assessment activities and to verify and track the related compliance status of the facility. CompAction modules allow facility managers to customize the system for specific Federal, state, local, and permit requirements and assign completion responsibilities to site personnel. The system tracks completion of the assignment, the compliance status of the requirement, and also an assigned plan of action for the requirements which are found to be deficient. CompAction may also assist facilities in demonstrating compliance with state audit privilege guidelines and is designed to adhere to compliance program requirements outlined by the USEPA and the Department of Justice. CompAction can schedule facility inspections and audits to ensure that the facility maintains an on-going compliance prevention and assessment program. Federal, State, local, and permit Environmental, Health and Safety regulations can all be maintained by the system and modified as the requirements change. CompAction is an innovative compliance assessment and monitoring system designed for both public and private facilities. Use of CompAction will facilitate the maintenance of an efficient and effective environmental compliance management program for solid waste disposal facilities.

  5. Software Reliability, Measurement, and Testing Software Reliability and Test Integration

    DTIC Science & Technology

    1992-04-01

    very well suited. Descriptive analyses include preparing histograms and plots and other data presentations for use in interpreting results into...2.1.3 Selected Software Projects Four of these candidate projects were selected, two for use in the experiment and two others available as backup...Complementary characteristics (an advantage gained by using a test technique with one or more others ) and available tools and/or methods are also

  6. A preliminary architecture for building communication software from traffic captures

    NASA Astrophysics Data System (ADS)

    Acosta, Jaime C.; Estrada, Pedro

    2017-05-01

    Security analysts are tasked with identifying and mitigating network service vulnerabilities. A common problem associated with in-depth testing of network protocols is the availability of software that communicates across disparate protocols. Many times, the software required to communicate with these services is not publicly available. Developing this software is a time-consuming undertaking that requires expertise and understanding of the protocol specification. The work described in this paper aims at developing a software package that is capable of automatically creating communication clients by using packet capture (pcap) and TShark dissectors. Currently, our focus is on simple protocols with fixed fields. The methodologies developed as part of this work will extend to other complex protocols such as the Gateway Load Balancing Protocol (GLBP), Port Aggregation Protocol (PAgP), and Open Shortest Path First (OSPF). Thus far, we have architected a modular pipeline for an automatic traffic-based software generator. We start the transformation of captured network traffic by employing TShark to convert packets into a Packet Details Markup Language (PDML) file. The PDML file contains a parsed, textual, representation of the packet data. Then, we extract field data, types, along with inter and intra-packet dependencies. This information is then utilized to construct an XML file that encompasses the protocol state machine and field vocabulary. Finally, this XML is converted into executable code. Using our methodology, and as a starting point, we have succeeded in automatically generating software that communicates with other hosts using an automatically generated Internet Control Message Protocol (ICMP) client program.
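
    A hedged sketch of the first pipeline stage (TShark's -T pdml output format is standard; the file name, protocol choice, and field selection are illustrative): convert a capture to PDML with TShark, then collect field names and values for later vocabulary and state-machine extraction.

        # Hypothetical sketch: pcap -> PDML via TShark, then pull out the field
        # name/value pairs of a chosen protocol from the parsed XML.
        import subprocess
        import xml.etree.ElementTree as ET

        def pcap_to_pdml(pcap_path):
            # Equivalent to: tshark -r capture.pcap -T pdml
            out = subprocess.run(["tshark", "-r", pcap_path, "-T", "pdml"],
                                 capture_output=True, text=True, check=True)
            return out.stdout

        def extract_fields(pdml_text, protocol="icmp"):
            fields = []
            root = ET.fromstring(pdml_text)
            for packet in root.iter("packet"):
                for proto in packet.iter("proto"):
                    if proto.get("name") == protocol:
                        fields.append({f.get("name"): f.get("show")
                                       for f in proto.iter("field")})
            return fields

        if __name__ == "__main__":
            print(extract_fields(pcap_to_pdml("capture.pcap"))[:1])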

  7. A Method for Aligning Acquisition Strategies and Software Architectures

    DTIC Science & Technology

    2014-09-01

    vii Abstract ix 1 Introduction 1 1.1 Premise of This Work 1 1.2 Definitions 1 1.3 What Is Unique About This Approach ? 2 1.3.1 Programmatic Issue...a method that explicitly addresses key entities that are critical to alignment or misalignment, we provide a useful approach for organizations and...Architecture Framework, or DoDAF) • Acquisition Strategy: a business and technical management approach designed to achieve program objectives within the

  8. Principles for Evaluating the Quality Attributes of a Software Architecture.

    DTIC Science & Technology

    1997-03-01

    and effects analysis ( FMEA ), and failure modes effects and criticality analysis (FMECA). All of these techniques are standard practices in other...bandwidth of any secondary data channel that is identified in the system). In performance, analysis methods have grown out of two separate schools of...necessary to verify that the system can fulfill each of its obligations. Realizing that an architectural design is still a relatively high -level design

  9. RT 24 - Architecture, Modeling & Simulation, and Software Design

    DTIC Science & Technology

    2010-11-01

    focus on tool extensions (UPDM, SysML, SoaML, BPMN ) Leverage “best of breed” architecture methodologies Provide tooling to support the methodology DoDAF...Capability 10 Example: BPMN 11 DoDAF 2.0 MetaModel BPMN MetaModel Mapping SysML to DoDAF 2.0 12 DoDAF V2.0 Models OV-2 SysML Diagrams Requirement

  10. Agile Development and Software Architecture: Understanding Scale and Risk

    DTIC Science & Technology

    2011-10-24

    SEIVirtualForum Symptoms of failure  Teams (e.g., Scrum teams, product development teams, component teams, feature teams) spend almost all of...stability to support the next n iterations of development. In a Scrum project environment, the architectural runway may be established during...infrastructure Presentation Layer Common Service Common Service Common Service API APIData Access Layer Domain Layer Scrum Team A Scrum Team B Scrum Team C

  11. On the Prospects and Concerns of Integrating Open Source Software Environment in Software Engineering Education

    ERIC Educational Resources Information Center

    Kamthan, Pankaj

    2007-01-01

    Open Source Software (OSS) has introduced a new dimension in software community. As the development and use of OSS becomes prominent, the question of its integration in education arises. In this paper, the following practices fundamental to projects and processes in software engineering are examined from an OSS perspective: project management;…

  13. The Exploration of Green Architecture Design Integration Teaching Mode

    ERIC Educational Resources Information Center

    Shuang, Liang; Yibin, Han

    2016-01-01

    With the deepening of the concept of green building design, university courses have gradually exposed many problems in the teaching of architectural design theory; based on the existing mode of teaching, and combined with the needs of architectural design practice, an "integrated" method of green building design teaching is proposed. It…

  14. Case Studies of Software Development Tools for Parallel Architectures

    DTIC Science & Technology

    1993-06-01

    67 PIE ...surveyed (descriptions of these, and all other tools mentioned in this report are provided in appendix B): GARDEN FIELD PIE Prometheus Faust CODE...PARALLEL SOFTWARE ENGINEERING PROBLEMS Tool Spec Design Co A1g. Par Dam Part Load Comp Cam Debug Reuse Nu Test Se Eval Dist Bal RefI /Test Procs PIE X X

  15. Peeling the Onion: Okapi System Architecture and Software Design Issues.

    ERIC Educational Resources Information Center

    Jones, S.; And Others

    1997-01-01

    Discusses software design issues for Okapi, an information retrieval system that incorporates both search engine and user interface and supports weighted searching, relevance feedback, and query expansion. The basic search system, adjacency searching, and moving toward a distributed system are discussed. (Author/LRW)

  17. Streamlining the Process of Acquiring Secure Open Architecture Software Systems

    DTIC Science & Technology

    2013-10-08

    Android apps and the Apple App Store also offer software (widget) components for their respective computing platforms ( Android and iPhone smartphones, or...Centralized vs . Decentralized ......................................................................... 106  Is OSSD Efficient...Enhanced (SE) Android . 2012 Android Builder’s Summit. Retrieved from https://events.linuxfoundation.org/images/stories/pdf/lf_abs12_smalley.pdf Smith

  18. Software architecture for large scale, distributed, data-intensive systems

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Medvidovic, Nenad; Ramirez, Paul M.

    2004-01-01

    This paper presents our experience with OODT, a novel software architectural style and middleware-based implementation for data-intensive systems. To date, OODT has been successfully evaluated in several different science domains including Cancer Research with the National Cancer Institute (NCI), and Planetary Science with NASA's Planetary Data System (PDS).

  19. Software architecture of the light weight kernel, catamount.

    SciTech Connect

    Kelly, Suzanne Marie

    2005-05-01

    Catamount is designed to be a low overhead operating system for a parallel computing environment. Functionality is limited to the minimum set needed to run a scientific computation. The design choices and implementations will be presented. A massively parallel processor (MPP), high performance computing (HPC) system is particularly sensitive to operating system overhead. Traditional, multi-purpose, operating systems are designed to support a wide range of usage models and requirements. To support the range of needs, a large number of system processes are provided and are often interdependent on each other. The overhead of these processes leads to an unpredictable amount of processor time available to a parallel application. Except in the case of the most embarrassingly parallel of applications, an MPP application must share interim results with its peers before it can make further progress. These synchronization events are made at specific points in the application code. If one processor takes longer to reach that point than all the other processors, everyone must wait. The overall finish time is increased. Sandia National Laboratories began addressing this problem more than a decade ago with an architecture based on node specialization. Sets of nodes in an MPP are designated to perform specific tasks, each running an operating system best suited to the specialized function. Sandia chose to not use a multi-purpose operating system for the computational nodes and instead began developing its first light weight operating system, SUNMOS, which ran on the compute nodes on the Intel Paragon system. Based on its viability, the architecture evolved into the PUMA operating system. Intel ported PUMA to the ASCI Red TFLOPS system, thus creating the Cougar operating system. Most recently, Cougar has been ported to Cray's XT3 system and renamed to Catamount. As the references indicate, there are a number of descriptions of the predecessor operating systems. While the majority

  20. Developing an Evaluation Method for Middleware-Based Software Architectures of Airborne Mission Systems

    DTIC Science & Technology

    2007-07-01

    documented using an architecture knowledge management tool also developed at NICTA. 31 DSTO-TR-2204 9. References [Ali- Babar & Gorton 2004] [Ali... Babar et al. 2005] [Allen et al. 2002] [Bachmann et al. 2003] [Barbacci et al. 1995] [Bass et al. 2003] [Basse/al. 2001] [Bengstsson et al. 2004...Boehm&In 1996] [CORBA 2006] [Clements et al. 2001] Ali- Babar , M. & Gorton, I. (2004) Comparison of Scenario-Based Software Architecture

  1. Integrated software system for improving medical equipment management.

    PubMed

    Bliznakov, Z; Pappous, G; Bliznakova, K; Pallikarakis, N

    2003-01-01

    The evolution of biomedical technology has led to an extraordinary use of medical devices in health care delivery. During the last decade, clinical engineering departments (CEDs) turned toward computerization and application of specific software systems for medical equipment management in order to improve their services and monitor outcomes. Recently, much emphasis has been given to patient safety. Through its Medical Device Directives, the European Union has required all member nations to use a vigilance system to prevent the reoccurrence of adverse events that could lead to injuries or death of patients or personnel as a result of equipment malfunction or improper use. The World Health Organization also has made this issue a high priority and has prepared a number of actions and recommendations. In the present work, a new integrated, Windows-oriented system is proposed, addressing all tasks of CEDs but also offering a global approach to their management needs, including vigilance. The system architecture is based on a star model, consisting of a central core module and peripheral units. Its development has been based on the integration of 3 software modules, each one addressing specific predefined tasks. The main features of this system include equipment acquisition and replacement management, inventory archiving and monitoring, follow up on scheduled maintenance, corrective maintenance, user training, data analysis, and reports. It also incorporates vigilance monitoring and information exchange for adverse events, together with a specific application for quality-control procedures. The system offers clinical engineers the ability to monitor and evaluate the quality and cost-effectiveness of the service provided by means of quality and cost indicators. Particular emphasis has been placed on the use of harmonized standards with regard to medical device nomenclature and classification. The system's practical applications have been demonstrated through a pilot
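
    A minimal sketch of the star model described above, with hypothetical module and method names: a central core routes requests to attached peripheral task modules such as inventory management.

        # Hypothetical sketch of a star architecture: a core module dispatches
        # requests to peripheral units registered under a name.
        class Core:
            def __init__(self):
                self.modules = {}
            def attach(self, name, module):
                self.modules[name] = module
            def request(self, name, *args, **kwargs):
                return self.modules[name].handle(*args, **kwargs)

        class InventoryModule:
            def __init__(self):
                self.devices = {}
            def handle(self, action, device_id, record=None):
                if action == "add":
                    self.devices[device_id] = record
                return self.devices.get(device_id)

        core = Core()
        core.attach("inventory", InventoryModule())
        core.request("inventory", "add", "INF-042", {"type": "infusion pump"})
        print(core.request("inventory", "get", "INF-042"))  # {'type': 'infusion pump'}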

  2. A Quantitative Model for Assessing Visual Simulation Software Architecture

    DTIC Science & Technology

    2011-09-01

    complexity measure. The Computer Journal, 27, 340–347. Qingqing, Z., & Xinke, L. (2009). Complexity metrics for service-oriented systems. In 2009...Rudy Darken Professor of Computer Science Dissertation Supervisor Ted Lewis Professor of Computer Science Richard Riehle Professor of Practice...Software Engineering Arnold Buss Research Associate Professor of MOVES LtCol Jeff Boleng, PhD Associate Professor of Computer Science U.S. Air Force Academy

  3. Combining Architecture-Centric Engineering with the Team Software Process

    DTIC Science & Technology

    2010-12-01

    Bursatec, the IT arm of La Bolsa Mexicana de Valores (the Mexican Stock Exchange), to replace its main online stock trading engine with one that...a project at Bursatec. 4.1 Project Summary (to Date) In early 2009 Bursatec, the IT development organization of the Bolsa Mexicana de Valores (BMV...Scale: An Experience Report from Pilot Projects in Mexico (CMU/SEI-2009-TR-011). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon

  4. Relating Business Goals to Architecturally Significant Requirements for Software Systems

    DTIC Science & Technology

    2010-05-01

    Partington 2000] 21 Figure 7: Fifteen Important Business Goals [ Hofstede 2002] 30 Figure 8: List of Business Goals Discriminated by Self-Interest...of the business goals behind the system being developed. Business goals drive the conception , creation, and evolution of software-reliant systems...business literature and uses that survey to produce a classification of business goals. It introduces the concept of goal-subject (the person or

  5. Streamlining the Process of Acquiring Secure Open Architecture Software Systems

    DTIC Science & Technology

    2013-04-01

    streamline the acquisition process for secure OA software systems through a focus on doing more with limited resources. Along the way, we pay particular...SUBJECT TERMS 16. SECURITY CLASSIFICATION OF: 17. LIMITATION OF ABSTRACT Same as Report (SAR) 18. NUMBER OF PAGES 22 19a. NAME OF RESPONSIBLE...be missing this year is the active participation and networking that has been the hallmark of previous symposia. By purposely limiting attendance to

  6. Integrated testing and verification system for research flight software

    NASA Technical Reports Server (NTRS)

    Taylor, R. N.

    1979-01-01

    The MUST (Multipurpose User-oriented Software Technology) program is being developed to cut the cost of producing research flight software through a system of software support tools. An integrated verification and testing capability was designed as part of MUST. Documentation, verification and test options are provided with special attention on real-time, multiprocessing issues. The needs of the entire software production cycle were considered, with effective management and reduced lifecycle costs as foremost goals.

  7. LOFAR Self-Calibration Using a Blackboard Software Architecture

    NASA Astrophysics Data System (ADS)

    Loose, G. M.

    2008-08-01

    One of the major challenges for the self-calibration of the new generation of radio telescopes is to handle the sheer amount of observational data. For LOFAR, an average observation consists of several tens of terabytes of data. Fortunately, many operations can be done in parallel on only part of the data. So, one way to take up this challenge is to employ a large cluster of computers and to distribute both data and computing power. This paper focuses on the architectural design of the LOFAR self-calibration system, which is loosely based on the Blackboard architectural pattern. A key design consideration was to provide maximum scalability by complete separation of the global controller---issuing sequences of commands---on the one side, and the local controllers---controlling the so-called `kernels' that execute the commands---on the other side. In between resides a database system that acts as a shared memory for the global and local controllers by storing the commands and the results.
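
    A minimal Python sketch of the separation described above (the real system stores commands and results in a shared database across a cluster; here a simple in-process queue stands in, and all names are illustrative): the global controller only issues commands, and a local controller executes them with its kernel on its own part of the data.

        # Hypothetical sketch of the blackboard pattern: commands flow from the
        # global controller through shared storage to local controllers, which
        # run their kernels and post the results back.
        import queue

        class Blackboard:
            def __init__(self):
                self.commands = queue.Queue()
                self.results = []

        def global_controller(bb, steps):
            for step in steps:                 # e.g. ["solve", "correct"]
                bb.commands.put(step)

        def local_controller(bb, kernel, data_chunk):
            while not bb.commands.empty():
                step = bb.commands.get()
                bb.results.append(kernel(step, data_chunk))

        def kernel(step, chunk):
            return f"{step} done on {chunk}"

        bb = Blackboard()
        global_controller(bb, ["solve", "correct"])
        local_controller(bb, kernel, "subband-17")
        print(bb.results)  # ['solve done on subband-17', 'correct done on subband-17']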

  8. An Integrated Hybrid Transportation Architecture for Human Mars Expeditions

    NASA Technical Reports Server (NTRS)

    Merrill, Raymond G.; Chai, Patrick R.; Qu, Min

    2015-01-01

    NASA's Human Spaceflight Architecture Team is developing a reusable hybrid transportation architecture that uses both chemical and electric propulsion systems on the same vehicle to send crew and cargo to Mars destinations such as Phobos, Deimos, the surface of Mars, and other orbits around Mars. By applying chemical and electrical propulsion where each is most effective, the hybrid architecture enables a series of Mars trajectories that are more fuel-efficient than an all chemical architecture without significant increases in flight times. This paper presents an integrated Hybrid in-space transportation architecture for piloted missions and delivery of cargo. A concept for a Mars campaign including orbital and Mars surface missions is described in detail including a system concept of operations and conceptual design. Specific constraints, margin, and pinch points are identified for the architecture and opportunities for critical path commercial and international collaboration are discussed.

  9. NASA's Advanced Multimission Operations System: A Case Study in Formalizing Software Architecture Evolution

    NASA Technical Reports Server (NTRS)

    Barnes, Jeffrey M.

    2011-01-01

    All software systems of significant size and longevity eventually undergo changes to their basic architectural structure. Such changes may be prompted by evolving requirements, changing technology, or other reasons. Whatever the cause, software architecture evolution is commonplace in real world software projects. Recently, software architecture researchers have begun to study this phenomenon in depth. However, this work has suffered from problems of validation; research in this area has tended to make heavy use of toy examples and hypothetical scenarios and has not been well supported by real world examples. To help address this problem, I describe an ongoing effort at the Jet Propulsion Laboratory to re-architect the Advanced Multimission Operations System (AMMOS), which is used to operate NASA's deep-space and astrophysics missions. Based on examination of project documents and interviews with project personnel, I describe the goals and approach of this evolution effort and then present models that capture some of the key architectural changes. Finally, I demonstrate how approaches and formal methods from my previous research in architecture evolution may be applied to this evolution, while using languages and tools already in place at the Jet Propulsion Laboratory.

  11. Integrating deliberative planning in a robot architecture

    NASA Technical Reports Server (NTRS)

    Elsaesser, Chris; Slack, Marc G.

    1994-01-01

    The role of planning and reactive control in an architecture for autonomous agents is discussed. The postulated architecture separates the general robot intelligence problem into three interacting pieces: (1) robot reactive skills, i.e., grasping, object tracking, etc.; (2) a sequencing capability to differentially activate the reactive skills; and (3) a deliberative planning capability to reason in depth about goals, preconditions, resources, and timing constraints. Within the sequencing module, caching techniques are used for handling routine activities. The planning system then builds on these cached solutions to routine tasks to build larger grain-sized primitives. This eliminates large numbers of essentially linear planning problems. The architecture will be used in the future to incorporate into robots cognitive capabilities normally associated with intelligent behavior.

  12. Requirements Engineering for Software Integrity and Safety

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy G.

    2002-01-01

    Requirements flaws are the most common cause of errors and software-related accidents in operational software. Most aerospace firms list requirements as one of their most important outstanding software development problems, and all of the recent NASA spacecraft losses related to software (including the highly publicized Mars Program failures) can be traced to requirements flaws. In light of these facts, it is surprising that relatively little research is devoted to requirements in contrast with other software engineering topics. The research proposed built on our previous work, including both criteria for determining whether a requirements specification is acceptably complete and a new approach to structuring system specifications called Intent Specifications. This grant was to fund basic research on how these ideas could be extended to leverage innovative approaches to the problems of (1) reducing the impact of changing requirements, (2) finding requirements specification flaws early through formal and informal analysis, and (3) avoiding common flaws entirely through appropriate requirements specification language design.

  13. Using an Integrated Distributed Test Architecture to Develop an Architecture for Mars

    NASA Technical Reports Server (NTRS)

    Othon, William L.

    2016-01-01

    The creation of a crew-rated spacecraft architecture capable of sending humans to Mars requires the development and integration of multiple vehicle systems and subsystems. Important new technologies will be identified and matured within each technical discipline to support the mission. Architecture maturity also requires coordination with mission operations elements and ground infrastructure. During early architecture formulation, many of these assets will not be co-located and will require integrated, distributed testing to show that the technologies and systems are being developed in a coordinated way. When complete, technologies must be shown to function together to achieve mission goals. In this presentation, an architecture will be described that promotes and advances integration of disparate systems within JSC and across NASA centers.

  14. GASP-PL/I Simulation of Integrated Avionic System Processor Architectures. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Brent, G. A.

    1978-01-01

    A development study sponsored by NASA was completed in July 1977 which proposed a complete integration of all aircraft instrumentation into a single modular system. Instead of using the current single-function aircraft instruments, computers compiled and displayed inflight information for the pilot. A processor architecture called the Team Architecture was proposed. This is a hardware/software approach to high-reliability computer systems. A follow-up study of the proposed Team Architecture is reported. GASP-PL/I simulation models are used to evaluate the operating characteristics of the Team Architecture. The problem, model development, simulation programs, and results are presented at length. Also included are program input formats, outputs, and listings.

  15. Experimenting Maintenance of Flight Software in an Integrated Modular Avionics for Space

    NASA Astrophysics Data System (ADS)

    Hardy, Johan; Laroche, Thomas; Creten, Philippe; Parisis, Paul; Hiller, Martin

    2014-08-01

    This paper presents an experiment of Flight Software partitioning in an Integrated Modular Avionics for Space (IMA-SP) system. This experiment also tackles the maintenance aspects of IMA-SP systems. The presented case study is the PROBA-2 Flight Software. The paper addresses and discusses the following subjects: On-Board Software Maintenance in IMA-SP, boot strategy for Time and Space Partitioning, considerations about the ground segment related to On-Board Software Maintenance in IMA-SP, and architectural impacts of Time and Space Partitioning on the PROBA software. Finally, this paper presents the results and the achievements of the study and points to further perspectives for IMA-SP and Time and Space Partitioning.

  16. New Software Architecture Options for the TCL Data Acquisition System

    SciTech Connect

    Valenton, Emmanuel

    2014-09-01

    The Turbulent Combustion Laboratory (TCL) conducts research on combustion in turbulent flow environments. To conduct this research, the TCL utilizes several pulse lasers, a traversable wind tunnel, flow controllers, scientific grade CCD cameras, and numerous other components. Responsible for managing these different data-acquiring instruments and data processing components is the Data Acquisition (DAQ) software. However, the current system is constrained to running through VXI hardware (an instrument-computer interface) that is several years old, requiring the use of an outdated version of the visual programming language, LabVIEW. A new Acquisition System is being programmed which will borrow heavily from either a programming model known as the Current Value Table (CVT) System or another model known as the Server-Client System. The CVT System model is, in essence, a giant spreadsheet to which data or commands may be written and from which they may be retrieved, and the Server-Client System is based on network connections between a server and a client, very much like the server-client model of the Internet. Currently, the bare elements of a CVT DAQ Software have been implemented, consisting of client programs in addition to a server program that the CVT will run on. This system is being rigorously tested to evaluate the merits of pursuing the CVT System model and to uncover any potential flaws that could affect further implementation. If the CVT System is chosen, which is likely, then future work will consist of building up the system until enough client programs have been created to run the individual components of the lab. The advantages of such a system will be flexibility, portability, and polymorphism. Additionally, the new DAQ software will allow the Lab to replace the VXI with a newer instrument interface, the PXI, and take advantage of the capabilities of current and future versions of LabVIEW.
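
    As a rough illustration of the Current Value Table idea, a shared table that instruments and client programs write values and commands into and read them back from, here is a minimal thread-safe Python sketch; it is not the TCL/LabVIEW implementation, and the key names are invented.

        import threading

        class CurrentValueTable:
            """A shared table of named current values and pending commands (sketch)."""
            def __init__(self):
                self._values = {}
                self._lock = threading.Lock()

            def write(self, name, value):
                with self._lock:
                    self._values[name] = value

            def read(self, name, default=None):
                with self._lock:
                    return self._values.get(name, default)

        cvt = CurrentValueTable()
        cvt.write("wind_tunnel/speed_setpoint", 12.5)   # a client posts a command value
        cvt.write("camera1/last_frame_id", 1042)        # an instrument posts a reading
        print(cvt.read("wind_tunnel/speed_setpoint"))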

  17. Software architecture for improving accessibility to medical text-based information.

    PubMed

    Topac, Vasile; Stoicu-Tivadar, Vasile

    2009-01-01

    The paper presents a software architecture aiming to improve accessibility to information in specialized texts, with focus on medical texts. This software also addresses other problems related to text accessibility such as vision problems and language problems. It allows text input in any media format (text, image, sound) and outputs the text as digital text or sound, permitting the user to scan the medical papers and listen to the translated and adapted text.

  18. Fall 2014 SEI Research Review: Aligning Acquisition Strategy and Software Architecture

    DTIC Science & Technology

    2014-10-01

    CMU/SEI-2010-TN-018: "Relating Business Goals to Architecturally Significant Requirements for Software Systems" ...occur for a given AQA. Created acquisition strategy tactics associated with AQAs; results published in SEI TN CMU/SEI-2013-TN-026: "Results in... Results published in SEI TN CMU/SEI-2014-TN-019: "A Method for Aligning Acquisition Strategies and Software Architectures"

  19. Understanding the Role of Licenses and Evolution in Open Architecture Software Ecosystems

    DTIC Science & Technology

    2010-11-29

    Understanding the Role of Licenses and Evolution in Open Architecture Software Ecosystems. Walt Scacchi, Institute for Software Research, University of...; Thomas A. Alspaugh (alspaugh@cs.georgetown.edu). Preprint submitted to Elsevier, November 29, 2010.

  20. Service-oriented architecture for the ARGOS instrument control software

    NASA Astrophysics Data System (ADS)

    Borelli, J.; Barl, L.; Gässler, W.; Kulas, M.; Rabien, Sebastian

    2012-09-01

    The Advanced Rayleigh Guided ground layer Adaptive optic System, ARGOS, equips the Large Binocular Telescope (LBT) with a constellation of six Rayleigh laser guide stars. By correcting atmospheric turbulence near the ground, the system is designed to increase the image quality of the multi-object spectrograph LUCIFER by approximately a factor of 3 over a field of 4 arcminutes in diameter. The control software has the critical task of orchestrating several devices, instruments, and high-level services, including the already existing adaptive optics system and the telescope control software. All these components are widely distributed over the telescope, adding more complexity to the system design. The approach used by the ARGOS engineers is to write loosely coupled and distributed services under the control of different ownership systems, providing a uniform mechanism to offer, discover, interact with, and use these distributed capabilities. The control system comprises several finite state machines, vibration and flexure compensation loops, and safety mechanisms such as interlocks and aircraft and satellite avoidance systems.

  1. Initial SVS Integrated Technology Evaluation Flight Test Requirements and Hardware Architecture

    NASA Technical Reports Server (NTRS)

    Harrison, Stella V.; Kramer, Lynda J.; Bailey, Randall E.; Jones, Denise R.; Young, Steven D.; Harrah, Steven D.; Arthur, Jarvis J.; Parrish, Russell V.

    2003-01-01

    This document presents the flight test requirements for the Initial Synthetic Vision Systems Integrated Technology Evaluation Flight Test to be flown aboard NASA Langley's ARIES aircraft and the final hardware architecture implemented to meet these requirements. Part I of this document contains the hardware, software, simulator, and flight operations requirements for this flight test as they were defined in August 2002. The contents of this section are the actual requirements document that was signed for this flight test. Part II of this document contains information pertaining to the hardware architecture that was realized to meet these requirements as presented to and approved by a Critical Design Review Panel prior to installation on the B-757 Airborne Research Integrated Experiments Systems (ARIES) airplane. This information includes a description of the equipment, block diagrams of the architecture, layouts of the workstations, and pictures of the actual installations.

  2. Architecture-Centric Virtual Integration Workshop

    DTIC Science & Technology

    2014-09-29

    Fragments of the workshop program: analysis concerns including latency, resource consumption (bandwidth, CPU time, power consumption), data precision/accuracy, temporal correctness, and confidence; a presentation by Silvia Abrahao, Emilio Insfran and Bruce Lewis; "Towards an Architecture-Centric Approach dedicated to Model-Based..."; "...Multiprocessor Systems with AADL" by Stéphane Rubini, Pierre Dissaux and Frank Singhoff; and a BLESS tutorial by Brian...

  3. Adoption of new software and hardware solutions at the VLT: the ESPRESSO control architecture case

    NASA Astrophysics Data System (ADS)

    Cirami, R.; Di Marcantonio, P.; Coretti, I.; Santin, P.; Mannetta, M.; Baldini, V.; Cristiani, S.; Abreu, M.; Cabral, A.; Monteiro, M.; Mégevand, D.; Zerbi, F.

    2012-09-01

    ESPRESSO is a fiber-fed cross-dispersed echelle spectrograph which can be operated with one or up to four Unit Telescopes (UTs) of ESO's Very Large Telescope (VLT). It will be located in the Combined-Coudé Laboratory (CCL) of the VLT and it will be the first permanent instrument using a 16-m equivalent telescope. The ESPRESSO control software and electronics are in charge of the control of all instrument subsystems: the four Coudé Trains (one for each UT), the front-end, and the fiber-fed spectrograph itself, contained within a vacuum vessel. The spectrograph is installed inside a series of thermal enclosures following an onion-shell principle with increasing temperature stability from outside to inside. The proposed electronics architecture will use the OPC Unified Architecture (OPC UA) as a standard layer to communicate with PLCs (Programmable Logic Controllers), replacing the old Instrument Local Control Units (LCUs) for ESO instruments based on VME technology. The instrument control software will be based on the VLT Control Software package and will use the IC0 Field Bus extension for the control of the instrument hardware. In this paper we present the ESPRESSO software architectural design proposed at the Preliminary Design Review as well as the control electronics architecture.

  4. System Architecture Virtual Integration: An Industrial Case Study

    DTIC Science & Technology

    2009-11-01

    System Architecture Virtual Integration: An Industrial Case Study. Peter H. Feiler, Jorgen Hansson, Dionisio de Niz, Lutz Wrage. Contract FA8721-05-C-0003.

  5. A near-miss management system architecture for the forensic investigation of software failures.

    PubMed

    Bella, M A Bihina; Eloff, J H P

    2016-02-01

    Digital forensics has been proposed as a methodology for doing root-cause analysis of major software failures for quite a while. Despite this, similar software failures still occur repeatedly. A reason for this is the difficulty of obtaining detailed evidence of software failures. Acquiring such evidence can be challenging, as the relevant data may be lost or corrupt following a software system's crash. This paper proposes the use of near-miss analysis to improve on the collection of evidence for software failures. Near-miss analysis is an incident investigation technique that detects and subsequently analyses indicators of failures. The results of a near-miss analysis investigation are then used to detect an upcoming failure before the failure unfolds. The detection of these indicators - known as near misses - therefore provides an opportunity to proactively collect relevant data that can be used as digital evidence, pertaining to software failures. A Near Miss Management System (NMS) architecture for the forensic investigation of software failures is proposed. The viability of the proposed architecture is demonstrated through a prototype. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
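
    The general near-miss idea, keep a rolling buffer of recent observations and preserve it as candidate evidence the moment an indicator fires, can be sketched in a few lines of Python; this is an illustration only, not the NMS architecture of the paper, and the metric name, threshold, and output file are invented.

        import collections, json, time

        class NearMissMonitor:
            """Watches a failure indicator and preserves recent evidence when it fires (sketch)."""
            def __init__(self, threshold, history=100):
                self.threshold = threshold
                self.buffer = collections.deque(maxlen=history)   # rolling evidence buffer

            def observe(self, metric_name, value):
                self.buffer.append({"t": time.time(), "metric": metric_name, "value": value})
                if value > self.threshold:
                    self.snapshot("near miss: {}={}".format(metric_name, value))

            def snapshot(self, reason):
                # Persist the buffered observations before a possible crash loses them.
                with open("near_miss_evidence.json", "w") as fh:
                    json.dump({"reason": reason, "events": list(self.buffer)}, fh, indent=2)

        monitor = NearMissMonitor(threshold=0.9)
        for load in (0.42, 0.55, 0.93):      # 0.93 crosses the indicator threshold
            monitor.observe("heap_utilisation", load)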

  6. NEWFIRM Software--System Integration Using OPC

    NASA Astrophysics Data System (ADS)

    Daly, P. N.

    2004-07-01

    The NOAO Extremely Wide-Field Infra-Red Mosaic (NEWFIRM) camera is being built to satisfy the survey science requirements on the KPNO Mayall and CTIO Blanco 4m telescopes in an era of 8m+ aperture telescopes. Rather than re-invent the wheel, the software system to control the instrument has taken existing software packages and re-used what is appropriate. The result is an end-to-end observation control system using technology components from DRAMA, ORAC, observing tools, GWC, existing in-house motor controllers and new developments like the MONSOON pixel server.

  7. Scientific Data Management Integrated Software Infrastructure Center

    SciTech Connect

    Choudhary, A.; Liao, W.K.

    2008-10-29

    This work provides software that enables scientific applications to more efficiently access available storage resources at different levels of interfaces. We developed scalable techniques and optimizations for PVFS parallel file systems, MPI I/O, and parallel netCDF I/O library. These implementations were evaluated using production application I/O kernels as well as popular I/O benchmarks and demonstrated promising results. The software developed under this work has been made available to the public via MCS, ANL web sites.
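
    For readers unfamiliar with the kind of parallel I/O this infrastructure targets, the sketch below shows independent MPI ranks writing disjoint blocks of one shared file, here using the mpi4py binding rather than the software developed under this work; the file name and block size are arbitrary.

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Each rank contributes one contiguous block of the shared file.
        block = np.full(1024, rank, dtype=np.float64)
        offset = rank * block.nbytes

        fh = MPI.File.Open(comm, "blocks.dat", MPI.MODE_CREATE | MPI.MODE_WRONLY)
        fh.Write_at_all(offset, block)   # collective write at a per-rank offset
        fh.Close()

    Run under an MPI launcher, e.g. mpiexec -n 4 python write_blocks.py, so that the four ranks each write their own block.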

  8. Software Testbed for Developing and Evaluating Integrated Autonomous Subsystems

    NASA Technical Reports Server (NTRS)

    Ong, James; Remolina, Emilio; Prompt, Axel; Robinson, Peter; Sweet, Adam; Nishikawa, David

    2015-01-01

    To implement fault tolerant autonomy in future space systems, it will be necessary to integrate planning, adaptive control, and state estimation subsystems. However, integrating these subsystems is difficult, time-consuming, and error-prone. This paper describes Intelliface/ADAPT, a software testbed that helps researchers develop and test alternative strategies for integrating planning, execution, and diagnosis subsystems more quickly and easily. The testbed's architecture, graphical data displays, and implementations of the integrated subsystems support easy plug and play of alternate components to support research and development in fault-tolerant control of autonomous vehicles and operations support systems. Intelliface/ADAPT controls NASA's Advanced Diagnostics and Prognostics Testbed (ADAPT), which comprises batteries, electrical loads (fans, pumps, and lights), relays, circuit breakers, inverters, and sensors. During plan execution, an experimenter can inject faults into the ADAPT testbed by tripping circuit breakers, changing fan speed settings, and closing valves to restrict fluid flow. The diagnostic subsystem, based on NASA's Hybrid Diagnosis Engine (HyDE), detects and isolates these faults to determine the new state of the plant, ADAPT. Intelliface/ADAPT then updates its model of the ADAPT system's resources and determines whether the current plan can be executed using the reduced resources. If not, the planning subsystem generates a new plan that reschedules tasks, reconfigures ADAPT, and reassigns the use of ADAPT resources as needed to work around the fault. The resource model, planning domain model, and planning goals are expressed using NASA's Action Notation Modeling Language (ANML). Parts of the ANML model are generated automatically, and other parts are constructed by hand using the Planning Model Integrated Development Environment, a visual Eclipse-based IDE that accelerates ANML model development. Because native ANML planners are currently
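
    The execute-diagnose-replan loop described above can be caricatured in a few lines of Python; this toy stand-in is not the real system (which uses HyDE for diagnosis and ANML-based planning), and all resource and task names are invented.

        # Invented resources; both fans are healthy at the start.
        resources = {"fan_a": True, "fan_b": True}

        def diagnose(sensor_readings):
            """Return the names of resources whose sensors report a failure."""
            return [name for name, ok in sensor_readings.items() if not ok]

        def plan(goal_loads, resources):
            """Assign each load to any healthy resource; None if no plan exists."""
            healthy = [r for r, ok in resources.items() if ok]
            return {load: healthy[0] for load in goal_loads} if healthy else None

        plan_v1 = plan(["cooling"], resources)
        faults = diagnose({"fan_a": False, "fan_b": True})   # injected fault trips fan_a
        for name in faults:
            resources[name] = False                          # update the resource model
        plan_v2 = plan(["cooling"], resources)               # replan around the fault
        print(plan_v1, plan_v2)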

  9. The South African Astronomical Observatory instrumentation software architecture and the SHOC instruments

    NASA Astrophysics Data System (ADS)

    van Gend, Carel; Lombaard, Briehan; Sickafoose, Amanda; Whittal, Hamish

    2016-07-01

    Until recently, software for instruments on the smaller telescopes at the South African Astronomical Observatory (SAAO) has not been designed for remote accessibility and frequently has not been developed using modern software best practice. We describe a software architecture we have implemented for use with new and upgraded instruments at the SAAO. The architecture was designed to allow for multiple components and to be fast, reliable, and remotely operable, to support different user interfaces, to employ as much non-proprietary software as possible, and to take future-proofing into consideration. Individual component drivers exist as standalone processes, communicating over a network. A controller layer coordinates the various components and allows a variety of user interfaces to be used. The Sutherland High-speed Optical Cameras (SHOC) instruments incorporate an Andor electron-multiplying CCD camera, a GPS unit for accurate timing, and a pair of filter wheels. We have applied the new architecture to the SHOC instruments, with the camera driver developed using Andor's software development kit. We have used this to develop an innovative web-based user interface to the instrument.
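
    A minimal Python sketch of the pattern described above, standalone component drivers exposing a small network protocol to a coordinating controller, is given below; the JSON command format, port number, and camera fields are invented and do not reflect the SAAO code.

        import json, socket, socketserver, threading

        class CameraDriverHandler(socketserver.StreamRequestHandler):
            state = {"exposure_s": 1.0, "temperature_c": -60.0}   # invented fields
            def handle(self):
                request = json.loads(self.rfile.readline())
                if request["cmd"] == "get":
                    reply = {"ok": True, "value": self.state.get(request["key"])}
                elif request["cmd"] == "set":
                    self.state[request["key"]] = request["value"]
                    reply = {"ok": True}
                else:
                    reply = {"ok": False, "error": "unknown command"}
                self.wfile.write((json.dumps(reply) + "\n").encode())

        def call_driver(port, message):
            """Controller side: send one JSON command and return the JSON reply."""
            with socket.create_connection(("127.0.0.1", port)) as s:
                s.sendall((json.dumps(message) + "\n").encode())
                return json.loads(s.makefile().readline())

        # In the architecture described above each driver is a separate process;
        # a thread is used here only so the example runs in one file.
        server = socketserver.TCPServer(("127.0.0.1", 5001), CameraDriverHandler)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        print(call_driver(5001, {"cmd": "set", "key": "exposure_s", "value": 0.05}))
        print(call_driver(5001, {"cmd": "get", "key": "exposure_s"}))
        server.shutdown()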

  10. Integrating hospital information systems in healthcare institutions: a mediation architecture.

    PubMed

    El Azami, Ikram; Cherkaoui Malki, Mohammed Ouçamah; Tahon, Christian

    2012-10-01

    Many studies have examined the integration of information systems into healthcare institutions, leading to several standards in the healthcare domain (CORBAmed: Common Object Request Broker Architecture in Medicine; HL7: Health Level Seven International; DICOM: Digital Imaging and Communications in Medicine; and IHE: Integrating the Healthcare Enterprise). Due to the existence of a wide diversity of heterogeneous systems, three essential factors are necessary to fully integrate a system: data, functions, and workflow. However, most of the previous studies have dealt with only one or two of these factors, and this makes the system integration unsatisfactory. In this paper, we propose a flexible, scalable architecture for Hospital Information Systems (HIS). Our main purpose is to provide a practical solution to ensure HIS interoperability so that healthcare institutions can communicate without being obliged to change their local information systems and without altering the tasks of the healthcare professionals. Our architecture is a mediation architecture with 3 levels: 1) a database level, 2) a middleware level and 3) a user interface level. The mediation is based on two central components: the Mediator and the Adapter. Using the XML format allows us to establish a structured, secure exchange of healthcare data. The notion of medical ontology is introduced to solve semantic conflicts and to unify the language used for the exchange. Our mediation architecture provides an effective, promising model that promotes the integration of hospital information systems that are autonomous, heterogeneous, semantically interoperable and platform-independent.
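
    To make the Mediator/Adapter split concrete, the Python sketch below shows an adapter mapping one system's local field names onto a shared XML patient element and a mediator that routes records through registered adapters; the field and system names are invented, and the real architecture adds middleware, security, and an ontology layer.

        import xml.etree.ElementTree as ET

        class Adapter:
            """Translates one hospital system's local record into the shared XML format (sketch)."""
            def __init__(self, field_map):
                self.field_map = field_map          # local field name -> canonical name

            def to_canonical(self, record):
                patient = ET.Element("patient")
                for local_name, canonical_name in self.field_map.items():
                    ET.SubElement(patient, canonical_name).text = str(record[local_name])
                return patient

        class Mediator:
            """Routes canonical XML messages between registered systems (sketch)."""
            def __init__(self):
                self.adapters = {}
            def register(self, system_name, adapter):
                self.adapters[system_name] = adapter
            def exchange(self, source, record):
                return ET.tostring(self.adapters[source].to_canonical(record), encoding="unicode")

        mediator = Mediator()
        mediator.register("lab_system", Adapter({"pid": "id", "fam_name": "surname"}))
        print(mediator.exchange("lab_system", {"pid": 123, "fam_name": "Dupont"}))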

  11. GiPSi: a framework for open source/open architecture software development for organ-level surgical simulation.

    PubMed

    Cavuşoğlu, M Cenk; Göktekin, Tolga G; Tendick, Frank

    2006-04-01

    This paper presents the architectural details of an evolving open source/open architecture software framework for developing organ-level surgical simulations. Our goal is to facilitate shared development of reusable models, to accommodate heterogeneous models of computation, and to provide a framework for interfacing multiple heterogeneous models. The framework provides an application programming interface for interfacing dynamic models defined over spatial domains. It is specifically designed to be independent of the specifics of the modeling methods used, and therefore facilitates seamless integration of heterogeneous models and processes. Furthermore, each model has separate geometries for visualization, simulation, and interfacing, allowing the model developer to choose the most natural geometric representation for each case. Input/output interfaces for visualization and haptics for real-time interactive applications have also been provided.
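
    The kind of application programming interface described above, dynamic models exposing a time-stepping method plus separate geometries for simulation and visualization, can be suggested by the following Python sketch; it is not the GiPSi API, and the class and method names are invented.

        from abc import ABC, abstractmethod

        class DynamicModel(ABC):
            """Interface for a dynamic model defined over a spatial domain (sketch)."""
            @abstractmethod
            def step(self, dt): ...
            @abstractmethod
            def simulation_geometry(self): ...      # nodes used by the solver
            @abstractmethod
            def visualization_geometry(self): ...   # representation handed to the renderer

        class MassSpring1D(DynamicModel):
            def __init__(self, x=1.0, v=0.0, k=4.0, m=1.0):
                self.x, self.v, self.k, self.m = x, v, k, m
            def step(self, dt):
                self.v += -(self.k / self.m) * self.x * dt   # semi-implicit Euler
                self.x += self.v * dt
            def simulation_geometry(self):
                return [(0.0,), (self.x,)]
            def visualization_geometry(self):
                return [self.x]

        model = MassSpring1D()
        for _ in range(10):
            model.step(0.01)
        print(model.visualization_geometry())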

  12. Software Architecture for a Virtual Environment for Nano Scale Assembly (VENSA).

    PubMed

    Lee, Yong-Gu; Lyons, Kevin W; Feng, Shaw C

    2004-01-01

    A Virtual Environment (VE) uses multiple computer-generated media to let a user experience situations that are temporally and spatially prohibiting. The information flow between the user and the VE is bidirectional and the user can influence the environment. The software development of a VE requires orchestrating multiple peripherals and computers in a synchronized way in real time. Although a multitude of useful software components for VEs exists, many of these are packaged within a complex framework and can not be used separately. In this paper, an architecture is presented which is designed to let multiple frameworks work together while being shielded from the application program. This architecture, which is called the Virtual Environment for Nano Scale Assembly (VENSA), has been constructed for interfacing with an optical tweezers instrument for nanotechnology development. However, this approach can be generalized for most virtual environments. Through the use of VENSA, the programmer can rely on existing solutions and concentrate more on the application software design.

  13. An Integrated System for Creating Educational Software.

    ERIC Educational Resources Information Center

    Horowitz, Ellis

    1988-01-01

    Describes the development of ScriptWriter, a computer program designed at the University of Southern California to help create software for computer assisted instruction. Topics discussed include the graphics editor; text editor; font editor; a programming language called IQ; its use with interactive video and speech; and current applications.…

  14. Mapping Network Centric Operational Architectures to C2 and Software Architectures

    DTIC Science & Technology

    2007-06-01

    ...reached in this paper are thus those of the SMEs and do not represent exhaustive experimentation of these combined architectures. Again, these are... "Each worker need make only fairly simple decisions." For example, in far northern Australia, "magnetic termites" build large termite mounds which are... oriented north-south and contain a complex ventilation system which controls temperature, humidity, and oxygen levels. But termite brains are too...

  15. Integrated device architectures for electrochromic devices

    DOEpatents

    Frey, Jonathan Mack; Berland, Brian Spencer

    2015-04-21

    This disclosure describes systems and methods for creating monolithically integrated electrochromic devices which may be a flexible electrochromic device. Monolithic integration of thin film electrochromic devices may involve the electrical interconnection of multiple individual electrochromic devices through the creation of specific structures such as conductive pathway or insulating isolation trenches.

  16. Reconfigurable Transceiver and Software-Defined Radio Architecture and Technology Evaluated for NASA Space Communications

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Kacpura, Thomas J.

    2004-01-01

    The NASA Glenn Research Center is investigating the development and suitability of a software-based open architecture for space-based reconfigurable transceivers (RTs) and software-defined radios (SDRs). The main objectives of this project are to enable advanced operations and reduce mission costs. SDRs are becoming more common because of the capabilities of reconfigurable digital signal processing technologies such as field programmable gate arrays and digital signal processors, which place radio functions in firmware and software that were traditionally performed with analog hardware components. Features of interest of this communications architecture include nonproprietary open standards and application programming interfaces to enable software reuse and portability, independent hardware and software development, and hardware and software functional separation. The goals for RT and SDR technologies for NASA space missions include prelaunch and on-orbit frequency and waveform reconfigurability and programmability, high data rate capability, and overall communications and processing flexibility. These operational advances over current state-of-the-art transceivers will be provided to reduce the power, mass, and cost of RTs and SDRs for space communications. The open architecture for NASA communications will support existing (legacy) communications needs and capabilities while providing a path to more capable, advanced waveform development and mission concepts (e.g., ad hoc constellations with self-healing networks and high-rate science data return). A study was completed to assess the state of the art in RT architectures, implementations, and technologies. In-house researchers conducted literature searches and analysis, interviewed Government and industry contacts, and solicited information and white papers from industry on space-qualifiable RTs and SDRs and their associated technologies for space-based NASA applications. The white papers were evaluated, compiled, and

  17. The Integrated Airframe/Propulsion Control System Architecture program (IAPSA)

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Cohen, Gerald C.; Meissner, Charles W.

    1990-01-01

    The Integrated Airframe/Propulsion Control System Architecture program (IAPSA) is a two-phase program which was initiated by NASA in the early 80s. The first phase, IAPSA 1, studied different architectural approaches to the problem of integrating engine control systems with airframe control systems in an advanced tactical fighter. One of the conclusions of IAPSA 1 was that the technology to construct a suitable system was available, yet the ability to create these complex computer architectures has outpaced the ability to analyze the resulting system's performance. With this in mind, the second phase of IAPSA approached the same problem with the added constraint that the system be designed for validation. The intent of the design for validation requirement is that validation requirements should be shown to be achievable early in the design process. IAPSA 2 has demonstrated that despite diligent efforts, integrated systems can retain characteristics which are difficult to model and, therefore, difficult to validate.

  18. Integrated Sensor Architecture (ISA) for Live Virtual Constructive (LVC) environments

    NASA Astrophysics Data System (ADS)

    Moulton, Christine L.; Harkrider, Susan; Harrell, John; Hepp, Jared

    2014-06-01

    The Integrated Sensor Architecture (ISA) is an interoperability solution that allows for the sharing of information between sensors and systems in a dynamic tactical environment. The ISA created a Service Oriented Architecture (SOA) that identifies common standards and protocols which support a net-centric system of systems integration. Utilizing a common language, these systems are able to connect, publish their needs and capabilities, and interact with other systems even on disadvantaged networks. Within the ISA project, three levels of interoperability were defined and implemented and these levels were tested at many events. Extensible data models and capabilities that are scalable across multi-echelons are supported, as well as dynamic discovery of capabilities and sensor management. The ISA has been tested and integrated with multiple sensors, platforms, and over a variety of hardware architectures in operational environments.
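
    A toy Python sketch of the publish-and-discover behaviour described above follows; the sensor names and capability strings are invented, and the real ISA uses standardized service-oriented protocols rather than an in-memory registry.

        class CapabilityRegistry:
            """Sensors publish their capabilities; consumers discover them at run time (sketch)."""
            def __init__(self):
                self.providers = {}

            def publish(self, name, capabilities):
                self.providers[name] = set(capabilities)

            def discover(self, needed):
                return [n for n, caps in self.providers.items() if needed in caps]

        registry = CapabilityRegistry()
        registry.publish("eo_ir_camera_3", {"video/eo", "video/ir", "geo/pointing"})
        registry.publish("acoustic_array_1", {"audio/detection"})
        print(registry.discover("video/ir"))       # -> ['eo_ir_camera_3']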

  20. Integrating medical devices in the operating room using service-oriented architectures.

    PubMed

    Ibach, Bastian; Benzko, Julia; Schlichting, Stefan; Zimolong, Andreas; Radermacher, Klaus

    2012-08-01

    With the increasing documentation requirements and communication capabilities of medical devices in the operating room, the integration and modular networking of these devices have become more and more important. Commercial integrated operating room systems are mainly proprietary developments, usually using proprietary communication standards and interfaces, which reduces the possibility of integrating devices from different vendors. To overcome these limitations, there is a need for an open standardized architecture that is based on standard protocols and interfaces enabling the integration of devices from different vendors based on heterogeneous software and hardware components. Starting with an analysis of the requirements for device integration in the operating room and the techniques used for integrating devices in other industrial domains, a new concept for an integration architecture for the operating room based on the paradigm of a service-oriented architecture is developed. Standardized communication protocols and interface descriptions are used. As risk management is an important factor in the field of medical engineering, a risk analysis of the developed concept has been carried out and the first prototypes have been implemented.

  1. An Integrated Facet-Based Library for Arbitrary Software Components

    NASA Astrophysics Data System (ADS)

    Schmidt, Matthias; Polowinski, Jan; Johannes, Jendrik; Fernández, Miguel A.

    Reuse is an important means of reducing costs and effort during the development of complex software systems. A major challenge is to find suitable components in a large library with reasonable effort. This becomes even harder in today's development practice where a variety of artefacts such as models and documents play an equally important role as source code. Thus, different types of heterogeneous components exist and require consideration in a component search process. One flexible approach to structure (software component) libraries is faceted classification. Faceted classifications and in particular faceted browsing are nowadays widely used in online systems. This paper takes a fresh approach towards using faceted classification in heterogeneous software component libraries by transferring faceted browsing concepts from the web to software component libraries. It presents an architecture and implementation of such a library. This implementation is used to evaluate the applicability of facets in the context of an industry-driven case study.

  2. Software for integrated manufacturing systems, part 2

    NASA Technical Reports Server (NTRS)

    Volz, R. A.; Naylor, A. W.

    1987-01-01

    Part 1 presented an overview of the unified approach to manufacturing software. The specific characteristics of the approach that allow it to realize the goals of reduced cost, increased reliability and increased flexibility are considered. Why the blending of a components view, distributed languages, generics and formal models is important, why each individual part of this approach is essential, and why each component will typically have each of these parts are examined. An example of a specification for a real material handling system is presented using the approach and compared with the standard interface specification given by the manufacturer. Use of the component in a distributed manufacturing system is then compared with use of the traditional specification with a more traditional approach to designing the system. An overview is also provided of the underlying mechanisms used for implementing distributed manufacturing systems using the unified software/hardware component approach.

  3. Integrated software packages in the physical laboratory

    NASA Astrophysics Data System (ADS)

    Bok, J.; Barvík, I.; Praus, P.; Heřman, P.; Čermáková, D.

    1990-11-01

    The automation of a UV-VIS spectrometer and a single-photon counting apparatus by an IBM-AT is described. The software needed for computer control, data acquisition, and processing was developed in the ASYST environment. This enabled us to use its very good graphics, its support of I/O cards, and its other excellent properties. We also show ways to overcome some minor shortcomings using multilanguage programming.

  4. An Integrated Software Package for Flood Damage Analysis

    DTIC Science & Technology

    1989-02-01

    Fragments include Table 4, Effect of Flood Plain Management Measures, relating each measure to the impacted stage-flow, stage-damage, flow-damage, and frequency relationships, and the report cover page: US Army Corps of Engineers, The Hydrologic Engineering Center, AD-A206 232, An Integrated Software Package for Flood Damage Analysis.

  5. An Integrated Software Package to Enable Predictive Simulation Capabilities

    SciTech Connect

    Chen, Yousu; Fitzhenry, Erin B.; Jin, Shuangshuang; Palmer, Bruce J.; Sharma, Poorva; Huang, Zhenyu

    2016-08-11

    The power grid is increasing in complexity due to the deployment of smart grid technologies. Such technologies vastly increase the size and complexity of power grid systems for simulation and modeling. This increasing complexity necessitates not only the use of high-performance computing (HPC) techniques, but also a smooth, well-integrated interplay between HPC applications. This paper presents a new integrated software package that integrates HPC applications and a web-based visualization tool based on a middleware framework. This framework can support the data communication between different applications. Case studies with a large power system demonstrate the predictive capability brought by the integrated software package, as well as the better situational awareness provided by the web-based visualization tool in a live mode. Test results validate the effectiveness and usability of the integrated software package.

  6. Software For Integration Of EVA And Telerobotics

    NASA Technical Reports Server (NTRS)

    Drews, Michael L.; Smith, Jeffrey H.; Estus, Jay M.; Heneghan, Cate; Zimmerman, Wayne; Fiorini, Paolo; Schenker, Paul S.; Mcaffee, Douglas A.

    1991-01-01

    Telerobotics/EVA Joint Analysis Systems (TEJAS) computer program is hypermedia information software system using object-oriented programming to bridge gap between crew-EVA and telerobotics activities. TEJAS Version 1.0 contains 20 HyperCard stacks using visual, customizable interface of icon buttons, pop-up menus, and relational commands to store, link, and standardize related information about primitives, technologies, tasks, assumptions, and open issues involved in space-telerobot or crew-EVA tasks. Runs on any Apple Macintosh personal computer.

  7. T-SDN architecture for space and ground integrated optical transport network

    NASA Astrophysics Data System (ADS)

    Nie, Kunkun; Hu, Wenjing; Gao, Shenghua; Chang, Chengwu

    2015-11-01

    An integrated optical transport network is the development trend for the future space information backbone network. The space and ground integrated optical transport network (SGIOTN) may contain a variety of equipment and systems, so changing the network or adding innovative missions to it can be expensive to implement. Software Defined Networking (SDN) provides a good solution: it allows processing logic to be added flexibly, network states and resources to be controlled in a timely manner, and the differences between heterogeneous equipment to be shielded. According to the characteristics of the SGIOTN, we propose a transport SDN architecture for it, with a hierarchical control plane and a data plane composed of packet networks and optical transport networks.

  8. SWIFT: A solar system integration software package

    NASA Astrophysics Data System (ADS)

    Levison, Harold F.; Duncan, Martin J.

    2013-03-01

    SWIFT follows the long-term dynamical evolution of a swarm of test particles in the solar system. The code efficiently and accurately handles close approaches between test particles and planets while retaining the powerful features of recently developed mixed variable symplectic integrators. Four integration techniques are included: the Wisdom-Holman Mapping; the Regularized Mixed Variable Symplectic (RMVS) method; the fourth order T+U Symplectic (TU4) method; and the Bulirsch-Stoer method. The package is designed so that the calls to each of these look identical, making it trivial to replace one with another. Complex data manipulations and results can be analyzed with the graphics package SwiftVis.
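
    The design point that all four integrators are called identically can be illustrated with the Python sketch below, which swaps a simple Euler step for a leapfrog step through one common signature; these toy integrators are stand-ins, not SWIFT's Wisdom-Holman or RMVS routines.

        # Two integrators with identical signatures so that one can replace the other.
        def euler_step(x, v, accel, dt):
            a = accel(x)
            return x + v * dt, v + a * dt

        def leapfrog_step(x, v, accel, dt):
            v_half = v + 0.5 * accel(x) * dt           # kick
            x_new = x + v_half * dt                    # drift
            v_new = v_half + 0.5 * accel(x_new) * dt   # kick
            return x_new, v_new

        def integrate(step, x, v, accel, dt, n_steps):
            """Drive any step function that follows the common call signature."""
            for _ in range(n_steps):
                x, v = step(x, v, accel, dt)
            return x, v

        accel = lambda x: -x                           # harmonic oscillator stand-in
        print(integrate(euler_step, 1.0, 0.0, accel, 0.01, 1000))
        print(integrate(leapfrog_step, 1.0, 0.0, accel, 0.01, 1000))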

  9. New system for the real-time infrared scene simulator: system architecture and software design

    NASA Astrophysics Data System (ADS)

    Li, Haimin; Shen, Peiyi; Wu, Chengke

    1999-07-01

    This paper presents a new real-time infrared scene simulator. The hardware architecture of the simulator, designed on the basis of a timing analysis of the scene simulation process, contains two Pentium 300 MHz CPUs, a hardware Z-buffer controller implemented with an EPLD, and a data transmission controller based on the PCI bus. The software performs the geometry transformation, clipping, and infrared simulation of a 3D target model to create data for the hardware Z-buffer controller. The experimental results indicate that the software cooperates successfully with the hardware to meet the demands of practical applications.

  10. Integrated software tool automates MOV diagnosis

    SciTech Connect

    Joshi, B.D.; Upadhyaya, B.R.

    1996-04-01

    This article reports that researchers at the University of Tennessee have developed digital signal processing software that takes the guesswork out of motor current signature analysis (MCSA). The federal testing regulations for motor-operated valves (MOV) used in nuclear power plants have recently come under critical scrutiny by the Nuclear Regulatory Commission (NRC) and the American Society of Mechanical Engineers (ASME). New ASME testing specifications mandate that all valves performing a safety function are to be tested -- not just ASME Code 1, 2 and 3 valves. The NRC will likely endorse the ASME regulations in the near future. Because of these changes, several utility companies have voluntarily expanded the scope of their in-service testing programs for MOVs, in spite of the additional expense.

  11. On the design of a generic and scalable multilayer software architecture for data flow management in the intensive care unit.

    PubMed

    Decruyenaere, J; De Turck, F; Vanhastel, S; Vandermeulen, F; Demeester, P; de Moor, G

    2003-01-01

    The current Intensive Care Information Systems (IC-ISs) collect and store monitoring data in an automated way and can replace all paper forms by an electronic equivalent, resulting in a paperless ICU. Future development of IC-ISs will now have to focus on bedside clinical decision support. The current IC-ISs are data-driven systems, with a two-layer software architecture. This software architecture is hardly maintainable and probably not the optimal architecture for making the transition towards future systems with decision support. The aim of this research was to address the design of an alternative software architecture based on new paradigms. State-of-the-art component, middleware, and agent technologies were deployed to design and implement a software architecture for ICU data flow management. An advanced multi-layer architecture for efficient data flow management in the ICU has been designed. The architecture is both generic and scalable, which means that it depends neither on a particular ICU nor on the deployed monitoring devices. Automatic device detection and Graphical User Interface generation are taken into account. Furthermore, a demonstrator has been developed as a proof that the proposed conceptual software architecture is feasible in practice. The core of the new architecture consists of Bed Decision Agents (BDAs). The introduction of BDAs, which perform specific dedicated tasks, improves the adaptability and maintainability of the future very complex IC-ISs. A software architecture based on component, middleware, and agent technology is feasible and offers important advantages over the currently used two-layer software architecture.

  12. Architecture. Intermediate ThemeWorks. An Integrated Activity Bank.

    ERIC Educational Resources Information Center

    Stewart, Kelly

    This resource book offers an activity bank of learning experiences related to the theme of architecture. The activities, which are designed for use with students in grades 4-6, require active engagement of the students and integrate language arts, mathematics, science, social studies, and art experiences. Activities exploring the architectural…

  14. A software architecture for hard real-time execution of automatically synthesized plans or control laws

    NASA Technical Reports Server (NTRS)

    Schoppers, Marcel

    1994-01-01

    The design of a flexible, real-time software architecture for trajectory planning and automatic control of redundant manipulators is described. Emphasis is placed on a technique of designing control systems that are both flexible and robust yet have good real-time performance. The solution presented involves an artificial intelligence algorithm that dynamically reprograms the real-time control system while planning system behavior.

  15. Algorithms and software for solving finite element equations on serial and parallel architectures

    NASA Technical Reports Server (NTRS)

    George, Alan

    1989-01-01

    Over the past 15 years numerous new techniques have been developed for solving systems of equations and eigenvalue problems arising in finite element computations. A package called SPARSPAK has been developed by the author and his co-workers which exploits these new methods. The broad objective of this research project is to incorporate some of this software in the Computational Structural Mechanics (CSM) testbed, and to extend the techniques for use on multiprocessor architectures.

  16. A Federated Design for a Neurobiological Simulation Engine: The CBI Federated Software Architecture

    PubMed Central

    Cornelis, Hugo; Coop, Allan D.; Bower, James M.

    2012-01-01

    Simulator interoperability and extensibility has become a growing requirement in computational biology. To address this, we have developed a federated software architecture. It is federated by its union of independent disparate systems under a single cohesive view, provides interoperability through its capability to communicate, execute programs, or transfer data among different independent applications, and supports extensibility by enabling simulator expansion or enhancement without the need for major changes to system infrastructure. Historically, simulator interoperability has relied on development of declarative markup languages such as the neuron modeling language NeuroML, while simulator extension typically occurred through modification of existing functionality. The software architecture we describe here allows for both these approaches. However, it is designed to support alternative paradigms of interoperability and extensibility through the provision of logical relationships and defined application programming interfaces. They allow any appropriately configured component or software application to be incorporated into a simulator. The architecture defines independent functional modules that run stand-alone. They are arranged in logical layers that naturally correspond to the occurrence of high-level data (biological concepts) versus low-level data (numerical values) and distinguish data from control functions. The modular nature of the architecture and its independence from a given technology facilitates communication about similar concepts and functions for both users and developers. It provides several advantages for multiple independent contributions to software development. Importantly, these include: (1) Reduction in complexity of individual simulator components when compared to the complexity of a complete simulator, (2) Documentation of individual components in terms of their inputs and outputs, (3) Easy removal or replacement of unnecessary or

  17. A federated design for a neurobiological simulation engine: the CBI federated software architecture.

    PubMed

    Cornelis, Hugo; Coop, Allan D; Bower, James M

    2012-01-01

    Simulator interoperability and extensibility has become a growing requirement in computational biology. To address this, we have developed a federated software architecture. It is federated by its union of independent disparate systems under a single cohesive view, provides interoperability through its capability to communicate, execute programs, or transfer data among different independent applications, and supports extensibility by enabling simulator expansion or enhancement without the need for major changes to system infrastructure. Historically, simulator interoperability has relied on development of declarative markup languages such as the neuron modeling language NeuroML, while simulator extension typically occurred through modification of existing functionality. The software architecture we describe here allows for both these approaches. However, it is designed to support alternative paradigms of interoperability and extensibility through the provision of logical relationships and defined application programming interfaces. They allow any appropriately configured component or software application to be incorporated into a simulator. The architecture defines independent functional modules that run stand-alone. They are arranged in logical layers that naturally correspond to the occurrence of high-level data (biological concepts) versus low-level data (numerical values) and distinguish data from control functions. The modular nature of the architecture and its independence from a given technology facilitates communication about similar concepts and functions for both users and developers. It provides several advantages for multiple independent contributions to software development. Importantly, these include: (1) Reduction in complexity of individual simulator components when compared to the complexity of a complete simulator, (2) Documentation of individual components in terms of their inputs and outputs, (3) Easy removal or replacement of unnecessary or

  18. A Software Defined Radio Based Architecture for the Reagan Test Site Telemetry Modernization (RTM) Program

    DTIC Science & Technology

    2015-10-26

    spectrum reallocation. A system design that is frequency agile and agnostic could adapt to these changes while being minimally impacted by a future...detectors, etc.) are instead implemented by means of software on a personal computer or embedded system [2]. The SDR architecture of the Modernized...and allowing a flexible design that can easily change as the needs of customers may change. [1] To maximize flexibility of the RTM system and to

  19. STOMP: A Software Architecture for the Design and Simulation UAV-Based Sensor Networks

    SciTech Connect

    Jones, E D; Roberts, R S; Hsia, T C S

    2002-10-28

    This paper presents the Simulation, Tactical Operations and Mission Planning (STOMP) software architecture and framework for simulating, controlling and communicating with unmanned air vehicles (UAVs) servicing large distributed sensor networks. STOMP provides hardware-in-the-loop capability enabling real UAVs and sensors to feedback state information, route data and receive command and control requests while interacting with other real or virtual objects thereby enhancing support for simulation of dynamic and complex events.

  20. Software Architecture Design of GIS Web Service Aggregation Based on Service Group

    NASA Astrophysics Data System (ADS)

    Liu, J.-C.; Yang, J.; Tan, M.-J.; Gan, Q.

    2011-08-01

    Based on an analysis of the research status of domestic and international GIS web service aggregation and of development trends in public GIS web service platforms, this paper designed a software architecture for GIS web service aggregation based on GIS web service groups. Firstly, using a heterogeneous GIS services model, the software architecture converted a variety of heterogeneous services to a unified GIS service interface and divided the different types of GIS services into service groups according to their service descriptions. Secondly, a service aggregation process model was designed. This model carried out each specific service aggregation instance by automatically selecting member GIS web services within the same service group, achieving dynamic capability and automatic adaptation of the aggregation process. Thirdly, the paper designed a service evaluation model for group-based GIS web service aggregation covering three aspects: the GIS web service itself, networking conditions, and the service consumer. This model implemented effective quality evaluation and performance monitoring of GIS web service aggregation and can be used to guide the execution, monitoring, and service selection of the aggregation process, thereby improving the robustness of the aggregated GIS web service. Finally, the software architecture has been widely used in the public GIS web service platform and in a number of geo-spatial framework constructions for digital cities in Sichuan Province, aggregating various GIS web services such as World Map (National Public Platform of Geo-spatial Service), ArcGIS, SuperMap, MapGIS, NewMap, etc. These applications showed that the software architecture is practicable.
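
    As a rough illustration only (a Python sketch; the service names, quality scores, and interface are invented here, not taken from the paper), heterogeneous services can be wrapped behind one interface, grouped by type, and selected at run time by a simple quality score with fallback:

      # Hypothetical sketch: unified interface + service group + quality-based member selection.
      from abc import ABC, abstractmethod

      class MapService(ABC):                      # the "unified interface"
          @abstractmethod
          def get_tile(self, z: int, x: int, y: int) -> bytes: ...

      class WmsAdapter(MapService):               # wraps one heterogeneous backend
          def __init__(self, name): self.name = name
          def get_tile(self, z, x, y): return f"{self.name}:{z}/{x}/{y}".encode()

      class ServiceGroup:
          def __init__(self, members):            # members: list of (service, quality score 0..1)
              self.members = members
          def get_tile(self, z, x, y):
              # try the currently best-scoring member first; fall back to the next on failure
              for service, _score in sorted(self.members, key=lambda m: -m[1]):
                  try:
                      return service.get_tile(z, x, y)
                  except Exception:
                      continue
              raise RuntimeError("no member of the service group responded")

      group = ServiceGroup([(WmsAdapter("ArcGIS"), 0.8), (WmsAdapter("SuperMap"), 0.9)])
      print(group.get_tile(12, 3374, 1552))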

  1. Integrating and Managing Bim in GIS, Software Review

    NASA Astrophysics Data System (ADS)

    El Meouche, R.; Rezoug, M.; Hijazi, I.

    2013-08-01

    Since the advent of Computer-Aided Design (CAD) and Geographical Information System (GIS) tools, project participants have been increasingly leveraging these tools throughout the different phases of a civil infrastructure project. In recent years the number of GIS software packages that provide tools to integrate building information in a geo context has risen sharply. More and more GIS packages have added tools for this purpose, and other software projects are regularly extending these tools. However, each package has its own strengths, weaknesses, and intended use. This paper provides a thorough review to investigate the capabilities of the software and clarify its purpose. For this study, Autodesk Revit 2012, a BIM editor, was used to create BIMs. In the first step, three building models were created; the resulting models were converted to BIM format and the software under review was then used to integrate them. For the evaluation of the software, general characteristics were studied, such as the user interface, the supported formats (import/export), and the way building information is imported.

  2. Software Defined Networking (SDN) controlled all optical switching networks with multi-dimensional switching architecture

    NASA Astrophysics Data System (ADS)

    Zhao, Yongli; Ji, Yuefeng; Zhang, Jie; Li, Hui; Xiong, Qianjin; Qiu, Shaofeng

    2014-08-01

    Ultrahigh throughput capacity requirements are challenging current optical switching nodes as data center networks develop rapidly. Pbit/s-level all-optical switching networks will need to be deployed soon, which will greatly increase the complexity of node architectures. How to control the future network and node equipment together will become a new problem. An enhanced Software Defined Networking (eSDN) control architecture is proposed in the paper, which consists of a Provider NOX (P-NOX) and Node NOX (N-NOX). With the cooperation of the P-NOX and N-NOXs, flexible control of the entire network can be achieved. An all-optical switching network testbed has been experimentally demonstrated with efficient control by the enhanced Software Defined Networking (eSDN). Pbit/s-level all-optical switching nodes in the testbed are implemented based on a multi-dimensional switching architecture, i.e. multi-level and multi-planar. Due to space and cost limitations, each optical switching node is only equipped with four input line boxes and four output line boxes. Experimental results are given to verify the performance of our proposed control and switching architecture.
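
    The split between a network-level controller and per-node controllers can be pictured with a small sketch (Python; the P-NOX/N-NOX message formats and port numbers here are invented for illustration, not the protocol used in the testbed):

      # Hypothetical two-level SDN control: provider controller delegates to node controllers.
      class NodeNOX:
          def __init__(self, node_id):
              self.node_id = node_id
              self.cross_connects = {}
          def configure(self, in_port, out_port):
              self.cross_connects[in_port] = out_port      # set up one optical cross-connect
              return {"node": self.node_id, "in": in_port, "out": out_port, "status": "ok"}

      class ProviderNOX:
          def __init__(self, nodes):
              self.nodes = {n.node_id: n for n in nodes}
          def set_up_path(self, hops):
              # hops: list of (node_id, in_port, out_port) along the light path
              return [self.nodes[nid].configure(i, o) for nid, i, o in hops]

      pnox = ProviderNOX([NodeNOX("A"), NodeNOX("B")])
      print(pnox.set_up_path([("A", 1, 4), ("B", 2, 3)]))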

  3. Integrated Hybrid System Architecture for Risk Analysis

    NASA Technical Reports Server (NTRS)

    Moynihan, Gary P.; Fonseca, Daniel J.; Ray, Paul S.

    2010-01-01

    A conceptual design has been announced of an expert-system computer program, along with the development of a prototype of the program, intended for use as a project-management tool. The program integrates schedule and risk data for the purpose of determining the schedule implications of safety risks and, somewhat conversely, the effects of schedule changes on safety. It is noted that the design has been delivered to a NASA client and that it is planned to disclose the design in a conference presentation.

  4. An Open Source Software Platform for Visualizing and Teaching Conservation Tasks in Architectural Heritage Environments

    NASA Astrophysics Data System (ADS)

    San Jose, I. Ignacio; Martinez, J.; Alvarez, N.; Fernandez, J. J.; Delgado, F.; Martinez, R.; Puche, J. C.; Finat, J.

    2013-07-01

    In this work we present a new software platform for interactive volumetric visualization of complex architectural objects and its applications to teaching and training conservation interventions in Architectural Cultural Heritage. Photogrammetric surveying is performed by processing the information arising from image- and range-based devices. Our visualization application is based on an adaptation of the WebGL open standard; this adaptation allows importing open standards and interactive navigation of 3D models in ordinary web browsers with good performance. The visualization platform is scalable and can be applied to urban environments, provided open source files are used; CityGML is an open standard based on a geometry-driven ontology which is compatible with this approach. We illustrate our results with examples concerning very damaged churches and an urban district of Segovia (World Cultural Heritage). Their connection with an appropriate database eases the tracking of building evolution and interventions. We have incorporated some preliminary examples to illustrate the Advanced Visualization Tools and the architectural e-Learning software platform which have been created for assessing conservation and restoration tasks in very damaged buildings. The first version of the Advanced Visualization application has been developed in the framework of the ADISPA Spanish project. Our results are illustrated with the application of these software tools to several very damaged cultural heritage buildings in rural zones of Castilla y Leon (Spain).

  5. Considerations for an Integrated UAS CNS Architecture

    NASA Technical Reports Server (NTRS)

    Templin, Fred L.; Jain, Raj; Sheffield, Greg; Taboso-Bellesteros, Pedro; Ponchak, Denise

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) is investigating revolutionary and advanced universal, reliable, always available, cyber secure and affordable Communication, Navigation, Surveillance (CNS) options for all altitudes of UAS operations. In Spring 2015, NASA issued a Call for Proposals under NASA Research Announcements (NRA) NNH15ZEA001N, Amendment 7 Subtopic 2.4. Boeing was selected to conduct a study with the objective to determine the most promising candidate technologies for Unmanned Air Systems (UAS) air-to-air and air-to-ground data exchange and analyze their suitability in a post-NextGen NAS environment. The overall objectives are to develop UAS CNS requirements and then develop architectures that satisfy the requirements for UAS in both controlled and uncontrolled air space. This contract is funded under NASA's Aeronautics Research Mission Directorate's (ARMD) Aviation Operations and Safety Program (AOSP) Safe Autonomous Systems Operations (SASO) project and proposes technologies for the Unmanned Air Systems Traffic Management (UTM) service. There is a need for accommodating large-scale populations of Unmanned Air Systems (UAS) in the national air space. Scale obviously impacts capacity planning for Communication, Navigation, and Surveillance (CNS) technologies. For example, can wireless communications data links provide the necessary capacity for accommodating millions of small UASs (sUAS) nationwide? Does the communications network provide sufficient Internet Protocol (IP) address space to allow air traffic control to securely address both UAS teams as a whole as well as individual UAS within each team? Can navigation and surveillance approaches assure safe route planning and safe separation of vehicles even in crowded skies? Our objective is to identify revolutionary and advanced CNS alternatives supporting UASs operating at all altitudes and in all airspace while accurately navigating in the absence of

  6. Considerations for an Integrated UAS CNS Architecture

    NASA Technical Reports Server (NTRS)

    Templin, Fred L.; Jain, Raj; Sheffield, Greg; Taboso-Bellesteros, Pedro; Ponchak, Denise

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) is investigating revolutionary and advanced universal, reliable, always available, cyber secure and affordable Communication, Navigation, Surveillance (CNS) options for all altitudes of UAS operations. In Spring 2015, NASA issued a Call for Proposals under NASA Research Announcements (NRA) NNH15ZEA001N, Amendment 7 Subtopic 2.4. Boeing was selected to conduct a study with the objective to determine the most promising candidate technologies for Unmanned Air Systems (UAS) air-to-air and air-to-ground data exchange and analyze their suitability in a post-NextGen NAS environment. The overall objectives are to develop UAS CNS requirements and then develop architectures that satisfy the requirements for UAS in both controlled and uncontrolled air space. This contract is funded under NASA's Aeronautics Research Mission Directorate's (ARMD) Aviation Operations and Safety Program (AOSP) Safe Autonomous Systems Operations (SASO) project and proposes technologies for the Unmanned Air Systems Traffic Management (UTM) service. There is a need for accommodating large-scale populations of Unmanned Air Systems (UAS) in the national air space. Scale obviously impacts capacity planning for Communication, Navigation, and Surveillance (CNS) technologies. For example, can wireless communications data links provide the necessary capacity for accommodating millions of small UASs (sUAS) nationwide? Does the communications network provide sufficient Internet Protocol (IP) address space to allow air traffic control to securely address both UAS teams as a whole as well as individual UAS within each team? Can navigation and surveillance approaches assure safe route planning and safe separation of vehicles even in crowded skies? Our objective is to identify revolutionary and advanced CNS alternatives supporting UASs operating at all altitudes and in all airspace while accurately navigating in the absence of

  7. STS-2 - SOFTWARE INTEGRATION TESTS (SIT) - KSC

    NASA Image and Video Library

    1981-09-01

    S81-36331 (24 Aug. 1981) --- Astronauts Joe H. Engle, left, and Richard H. Truly pause before participating in the integrated test of the assembled space shuttle components scheduled for launch no earlier than Sept. 30, 1981. Moments later, Engle, STS-2 crew commander, and Truly, pilot, entered the cabin of the orbiter Columbia for a mission simulation. The shuttle integrated tests (SIT) are designed to check out every connection and signal path in the STS-2 vehicle composed of the orbiter, two solid rocket boosters (SRB) and an external fuel tank (ET) for Columbia's main engines. Completion of the tests will clear the way for preparations for rollout to Pad A at Launch Complex 39, scheduled for the latter part of August or early September. Photo credit: NASA

  8. Control architecture for human-robot integration: application to a robotic wheelchair.

    PubMed

    Galindo, Cipriano; Gonzalez, Javier; Fernández-Madrigal, Juan-Antonio

    2006-10-01

    Completely autonomous performance of a mobile robot within noncontrolled and dynamic environments is not possible yet for several reasons, including environment uncertainty, sensor/software robustness, limited robotic abilities, etc. But in assistant applications in which a human is always present, she/he can make up for the lack of robot autonomy by helping it when needed. In this paper, the authors propose human-robot integration as a mechanism to augment/improve the robot autonomy in daily scenarios. Through the human-robot-integration concept, the authors take a further step in the typical human-robot relation, since they consider her/him as a constituent part of the human-robot system, which takes full advantage of the sum of their abilities. In order to materialize this human integration into the system, they present a control architecture, called architecture for human-robot integration, which enables the human to participate at levels ranging from high-level decision making, i.e., deliberating a plan, down to low-level physical action, i.e., opening a door. The presented control architecture has been implemented to test the human-robot integration on a real robotic application. In particular, several real experiments have been conducted on a robotic wheelchair aimed at providing mobility to elderly people.
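
    One way to picture the human as a constituent part of the system is a plan executor that falls back to a human action whenever a robot skill is missing or fails. The following Python sketch uses invented skill names and is only an illustration of the idea, not the authors' architecture:

      # Hypothetical sketch: execute a plan, delegating steps the robot cannot do to the human.
      robot_skills = {
          "navigate_to_door": lambda: print("robot: navigating to the door"),
          "enter_room": lambda: print("robot: driving through the doorway"),
      }

      def ask_human(step):
          print(f"human: please perform '{step}' (e.g. open the door)")
          return True   # assume the human reports success

      def execute(plan):
          for step in plan:
              action = robot_skills.get(step)
              if action is None:
                  ask_human(step)          # the human makes up for missing robot autonomy
              else:
                  action()

      execute(["navigate_to_door", "open_door", "enter_room"])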

  9. Integrated software package for laser diodes characterization

    NASA Astrophysics Data System (ADS)

    Sporea, Dan G.; Sporea, Radu A.

    2003-10-01

    The characteristics of laser diodes (wavelength of the emitted radiation, output optical power, embedded photodiode photocurrent, threshold current, serial resistance, external quantum efficiency) are strongly influenced by their driving conditions (forward current, case temperature). In order to handle such a complex investigation in an efficient and objective manner, the operation of several instruments (a laser diode driver, a temperature controller, a wavelength meter, a power meter, and a laser beam analyzer) is synchronously controlled by a PC through serial and GPIB communication. For each piece of equipment, instrument drivers were designed using the industry-standard graphical programming environment LabVIEW from National Instruments. All the developed virtual instruments operate under the supervision of a managing virtual instrument, which sets the driving parameters for each unit under test. The manager virtual instrument scans the driving current and case temperature values as appropriate for the selected laser diode. The software enables data saving in Excel-compatible files. In this way, sets of curves can be produced according to the needs of the testing cycle.
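
    The nested scan performed by the manager virtual instrument can also be sketched outside LabVIEW; the short Python sketch below is illustrative only, with set_temperature/set_current/read_power standing in for the real instrument drivers, and it simply records one point per temperature/current pair into a CSV file that spreadsheet software can open:

      # Hypothetical sketch of the manager's scan loop; the driver functions are stand-ins.
      import csv, random

      def set_temperature(t_c): pass
      def set_current(i_ma): pass
      def read_power(i_ma): return max(0.0, 0.8 * (i_ma - 20.0)) + random.gauss(0, 0.05)  # fake L-I curve

      with open("laser_diode_scan.csv", "w", newline="") as f:
          writer = csv.writer(f)
          writer.writerow(["temperature_C", "current_mA", "optical_power_mW"])
          for t_c in (15, 25, 35):                       # case temperature set points
              set_temperature(t_c)
              for i_ma in range(0, 101, 5):              # forward current sweep
                  set_current(i_ma)
                  writer.writerow([t_c, i_ma, round(read_power(i_ma), 3)])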

  10. Wafer-scale boundary value integrated circuit architecture

    SciTech Connect

    Delgado-Frias, J.G.

    1986-01-01

    Wafer scale integration (WSI) technology offers the potential for improving the speed and reliability of a large integrated circuit system. An architecture is presented for a boundary value integrated circuit engine which lends itself to implementation in WSI. The philosophy underpinning this architecture includes local communication, cell regularity, and fault tolerance. The research described here proposes, investigates, and simulates this computer architecture and its flaw avoidance schemes for a WSI implementation. Boundary value differential equation computations are utilized in a number of scientific and engineering applications. A boundary value machine is ideally suited for solutions of finite difference and finite element problems with specified boundary values. The architecture is a 2-D array of computational cells. Each basic cell has four bit serial processing elements (PEs) and a local memory. Most communication is limited to transfers between adjacent PEs to reduce complexity, avoid long delays, and localize the effects of silicon flaws. Memory access time is kept short by restricting memory service to PEs in the same cell. I/O operation is performed by means of a row multiple single line I/O bus, which allows fast, reliable and independent data transfer. WSI yield losses are due to gross defects and random defects. Gross defects which affect large portions of the wafer are usually fatal for any WSI implementation. Overcoming random defects which cover either a small area or points is achieved by defect avoidance schemes developed for this architecture. These schemes are provided at the array, cell, and communication levels. Capabilities and limitations of the proposed WSI architecture can be observed through the simulations. Speed degradation of the array and the PE due to silicon defects is observed by means of simulation. Also, module and bus utilization are computed and presented.
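
    The kind of computation the 2-D cell array targets, a boundary value problem solved with only nearest-neighbour communication, can be illustrated in a few lines of Python: a Jacobi sweep on a grid with fixed boundary values. This shows the numerical pattern only, not the wafer hardware:

      # Jacobi iteration for Laplace's equation with fixed boundary values:
      # each interior point is repeatedly replaced by the mean of its four neighbours,
      # mirroring the nearest-neighbour-only communication of the cell array.
      n = 16
      grid = [[0.0] * n for _ in range(n)]
      for j in range(n):
          grid[0][j] = 100.0            # hot top boundary; the other boundaries stay at 0

      for _ in range(500):
          new = [row[:] for row in grid]
          for i in range(1, n - 1):
              for j in range(1, n - 1):
                  new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] + grid[i][j - 1] + grid[i][j + 1])
          grid = new

      print(round(grid[n // 2][n // 2], 2))   # interior value after relaxation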

  11. An integrated method for quantifying root architecture of field-grown maize.

    PubMed

    Wu, Jie; Guo, Yan

    2014-09-01

    A number of techniques have recently been developed for studying the root system architecture (RSA) of seedlings grown in various media. In contrast, methods for sampling and analysis of the RSA of field-grown plants, particularly for details of the lateral root components, are generally inadequate. An integrated methodology was developed that includes a custom-made root-core sampling system for extracting intact root systems of individual maize plants, a combination of proprietary software and a novel program used for collecting individual RSA information, and software for visualizing the measured individual nodal root architecture. Example experiments show that large root cores can be sampled, and topological and geometrical structure of field-grown maize root systems can be quantified and reconstructed using this method. Second- and higher order laterals are found to contribute substantially to total root number and length. The length of laterals of distinct orders varies significantly. Abundant higher order laterals can arise from a single first-order lateral, and they concentrate in the proximal axile branching zone. The new method allows more meaningful sampling than conventional methods because of its easily opened, wide corer and sampling machinery, and effective analysis of RSA using the software. This provides a novel technique for quantifying RSA of field-grown maize and also provides a unique evaluation of the contribution of lateral roots. The method also offers valuable potential for parameterization of root architectural models.

  12. An integrated method for quantifying root architecture of field-grown maize

    PubMed Central

    Wu, Jie; Guo, Yan

    2014-01-01

    Background and Aims A number of techniques have recently been developed for studying the root system architecture (RSA) of seedlings grown in various media. In contrast, methods for sampling and analysis of the RSA of field-grown plants, particularly for details of the lateral root components, are generally inadequate. Methods An integrated methodology was developed that includes a custom-made root-core sampling system for extracting intact root systems of individual maize plants, a combination of proprietary software and a novel program used for collecting individual RSA information, and software for visualizing the measured individual nodal root architecture. Key Results Example experiments show that large root cores can be sampled, and topological and geometrical structure of field-grown maize root systems can be quantified and reconstructed using this method. Second- and higher order laterals are found to contribute substantially to total root number and length. The length of laterals of distinct orders varies significantly. Abundant higher order laterals can arise from a single first-order lateral, and they concentrate in the proximal axile branching zone. Conclusions The new method allows more meaningful sampling than conventional methods because of its easily opened, wide corer and sampling machinery, and effective analysis of RSA using the software. This provides a novel technique for quantifying RSA of field-grown maize and also provides a unique evaluation of the contribution of lateral roots. The method also offers valuable potential for parameterization of root architectural models. PMID:24532646

  13. AERCam Autonomy: Intelligent Software Architecture for Robotic Free Flying Nanosatellite Inspection Vehicles

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven E.; Duran, Steve G.; Braun, Angela N.; Straube, Timothy M.; Mitchell, Jennifer D.

    2006-01-01

    with minimal impact on IVA operators and ground controllers, the Mini AERCam system architecture incorporates intelligent systems attributes that support various autonomous capabilities. 1) A robust command sequencer enables task-level command scripting. Command scripting is employed for operations such as automatic inspection scans over a region of interest, and operator-hands-off automated docking. 2) A system manager built on the same expert-system software as the command sequencer provides detection and smart-response capability for potential system-level anomalies, like loss of communications between the Free Flyer and control station. 3) An AERCam dynamics manager provides nominal and off-nominal management of guidance, navigation, and control (GN&C) functions. It is employed for safe trajectory monitoring, contingency maneuvering, and related roles. This paper will describe these architectural components of Mini AERCam autonomy, as well as the interaction of these elements with a human operator during supervised autonomous control.
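
    Task-level command scripting of the kind described for the command sequencer might look roughly like the following Python sketch; the command names, parameters, and script format are invented for illustration and are not the Mini AERCam command set:

      # Hypothetical task-level script: each entry would expand into lower-level GN&C commands.
      inspection_scan = [
          ("goto_waypoint", {"x": 2.0, "y": 0.0, "z": 1.5}),
          ("point_camera", {"target": "radiator_panel_3"}),
          ("raster_scan", {"width_m": 1.0, "height_m": 0.5, "step_m": 0.1}),
          ("return_to_dock", {}),
      ]

      def sequencer(script, execute, health_ok=lambda: True):
          for step, params in script:
              if not health_ok():                      # a system manager could interrupt the script
                  execute("hold_position", {})
                  break
              execute(step, params)

      sequencer(inspection_scan, lambda cmd, p: print("executing", cmd, p))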

  14. Thirty Meter Telescope: observatory software requirements, architecture, and preliminary implementation strategies

    NASA Astrophysics Data System (ADS)

    Silva, David R.; Angeli, George; Boyer, Corinne; Sirota, Mark; Trinh, Thang

    2008-07-01

    The Thirty Meter Telescope (TMT) will be a ground-based, 30-m optical-IR alt-az telescope with a highly segmented primary mirror located in a remote location. Efficient science operations require the asynchronous coordination of many different sub-systems including telescope mount, three independent active optics sub-systems, adaptive optics, laser guide stars, and user-configured science instrument. An important high-level requirement is target acquisition and observatory system configuration must be completed in less than 5 minutes (or 10 minutes if moving to a new instrument). To meet this coordination challenge and target acquisition time requirement, a distributed software architecture is envisioned consisting of software components linked by a service-based software communications backbone. A master sequencer coordinates the activities of mid-layer sequencers for the telescope, adaptive optics, and selected instrument. In turn, these mid-layer sequencers coordinate the activities of groups of sub-systems. In this paper, TMT observatory requirements are presented in more detail, followed by a description of the design reference software architecture and a discussion of preliminary implementation strategies.

  15. A software architecture for multi-cellular system simulations on graphics processing units.

    PubMed

    Jeannin-Girardon, Anne; Ballet, Pascal; Rodin, Vincent

    2013-09-01

    The first aim of simulation in a virtual environment is to help biologists gain a better understanding of the simulated system. The cost of such simulation is significantly reduced compared to that of in vivo experimentation. However, the inherent complexity of biological systems makes them hard to simulate on non-parallel architectures: models might be made of sub-models and take several scales into account, and the number of simulated entities may be quite large. Today, graphics cards are used for general-purpose computing, which has been made easier thanks to frameworks like CUDA or OpenCL. Parallelization of models may however not be easy: parallel programming skills are often required, and several hardware architectures may be used to execute models. In this paper, we present the software architecture we built in order to implement various models able to simulate multi-cellular systems. This architecture is modular and implements data structures adapted to graphics processing unit architectures. It allows efficient simulation of biological mechanisms.
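
    One data-structure choice that GPU-oriented cell simulators commonly rely on is a structure-of-arrays layout, so that the same per-cell update touches contiguous memory and maps to wide vector or GPU operations. A small NumPy sketch of the idea (illustrative only, not the authors' code):

      # Structure-of-arrays layout for cells: each attribute is one contiguous array,
      # so a per-cell update is a single vectorised (GPU-friendly) operation.
      import numpy as np

      n_cells = 100_000
      pos = np.random.rand(n_cells, 2).astype(np.float32)     # cell positions
      vel = np.zeros((n_cells, 2), dtype=np.float32)          # cell velocities
      age = np.zeros(n_cells, dtype=np.float32)

      def step(dt=0.1):
          vel[:] += dt * (np.random.rand(n_cells, 2).astype(np.float32) - 0.5)  # random motility
          pos[:] = np.clip(pos + dt * vel, 0.0, 1.0)                            # stay in the unit square
          age[:] += dt

      for _ in range(10):
          step()
      print(pos.mean(axis=0), age.max())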

  16. Integrated FASTBUS, VME and CAMAC diagnostic software at Fermilab

    SciTech Connect

    Anderson, J.; Forster, R.; Franzen, J.; Wilcer, N.

    1992-10-01

    A fully integrated system for the diagnosis and repair of data acquisition hardware in FASTBUS, VME and CAMAC is described. A short cost/benefit analysis of using a distributed network of personal computers for diagnosis is presented. The SPUDS (Single Platform Uniting Diagnostic Software) software package developed at Fermilab by the authors is introduced. Examples of how SPUDS is currently used in the Fermilab equipment repair facility, as an evaluation tool and for field diagnostics are given.

  17. Spacelab software development and integration concepts study report, volume 1

    NASA Technical Reports Server (NTRS)

    Rose, P. L.; Willis, B. G.

    1973-01-01

    The proposed software guidelines to be followed by the European Space Research Organization in the development of software for the Spacelab being developed for use as a payload for the space shuttle are documented. Concepts, techniques, and tools needed to assure the success of a programming project are defined as they relate to operation of the data management subsystem, support of experiments and space applications, use with ground support equipment, and for integration testing.

  19. Architecture in Mission Integration, Choreographing Constraints

    NASA Technical Reports Server (NTRS)

    Jones, Rod

    2000-01-01

    In any building project the Architect's role and skill is to balance the client's requirements with the available technology, a site and budget. Time, place and resources set the boundaries and constraints of the project. If these boundaries are correctly understood and respected by the Architect they can be choreographed into producing a facility that abides by those constraints and successfully meets the client's needs. The design and assembly of large-scale space facilities, whether in orbit around or on the surface of a planet, require and employ these same skills. In this case the site is the International Space Station (ISS) which operates at a nominal rendezvous altitude of 220 nautical miles. With supplies to support a 7-day mission the Shuttle nominally has a cargo capacity of 35,000 pounds to that altitude. Through the Mission Integration process the Launch Package Management Team choreographs the constraints of ascent performance, hardware design, cargo, rendezvous, mission duration and assembly time in order to meet the mission objective.

  20. Achieving Better Buying Power for Mobile Open Architecture Software Systems Through Diverse Acquisition Scenarios

    DTIC Science & Technology

    2016-04-30

    Thirteenth Annual Acquisition Research Symposium, Wednesday Sessions, Volume I. Achieving Better Buying Power for Mobile Open Architecture Software ...Systems Through Diverse Acquisition Scenarios Walt Scacchi, Senior Research Scientist, Institute for Software Research, UC Irvine Thomas Alspaugh...Project Scientist, Institute for Software Research, UC Irvine Published April 30, 2016 Approved for public release; distribution is unlimited

  1. Report on the Second International Workshop on Development and Evolution of Software Architectures for Product Families

    DTIC Science & Technology

    1998-05-01

    evolved to a new ADL called Koala. The group working on analysis from the Polytechnical University of Madrid used various tools including the...Sligte, An Integral Hierarchy and Diversity Model for Describing Product Family Architecture 4. Rob van Ommering, Koala, a Component Model for Consumer

  2. SIFT - A Component-Based Integration Architecture for Enterprise Analytics

    SciTech Connect

    Thurman, David A.; Almquist, Justin P.; Gorton, Ian; Wynne, Adam S.; Chatterton, Jack

    2007-02-01

    Architectures and technologies for enterprise application integration are relatively mature, resulting in a range of standards-based and proprietary middleware technologies. In the domain of complex analytical applications, integration architectures are not so well understood. Analytical applications such as those used in scientific discovery, emergency response, financial and intelligence analysis exert unique demands on their underlying architecture. These demands make existing integration middleware inappropriate for use in enterprise analytics environments. In this paper we describe SIFT (Scalable Information Fusion and Triage), a platform designed for integrating the various components that comprise enterprise analytics applications. SIFT exploits a common pattern for composing analytical components, and extends an existing messaging platform with dynamic configuration mechanisms and scaling capabilities. We demonstrate the use of SIFT to create a decision support platform for quality control based on large volumes of incoming delivery data. The strengths of the SIFT solution are discussed, and we conclude by describing where further work is required to create a complete solution applicable to a wide range of analytical application domains.
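
    The "common pattern for composing analytical components" can be pictured as a pipeline of components connected by a messaging layer. The Python sketch below is an illustrative assumption (in-process queues, invented stage functions), not the SIFT API:

      # Hypothetical sketch: analytical components chained through simple in-process queues.
      from queue import Queue
      from threading import Thread

      def component(fn, inbox, outbox):
          def run():
              while True:
                  item = inbox.get()
                  if item is None:                 # shutdown marker propagated downstream
                      outbox.put(None)
                      break
                  outbox.put(fn(item))
          Thread(target=run, daemon=True).start()

      q_in, q_mid, q_out = Queue(), Queue(), Queue()
      component(lambda rec: {**rec, "score": len(rec["text"])}, q_in, q_mid)                        # "fusion" stage
      component(lambda rec: {**rec, "priority": "high" if rec["score"] > 10 else "low"}, q_mid, q_out)  # "triage" stage

      for text in ("short", "a much longer delivery record"):
          q_in.put({"text": text})
      q_in.put(None)
      while (msg := q_out.get()) is not None:
          print(msg)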

  3. A Core Plug and Play Architecture for Reusable Flight Software Systems

    NASA Technical Reports Server (NTRS)

    Wilmot, Jonathan

    2006-01-01

    The Flight Software Branch, at Goddard Space Flight Center (GSFC), has been working on a run-time approach to facilitate a formal software reuse process. The reuse process is designed to enable rapid development and integration of high-quality software systems and to more accurately predict development costs and schedule. Previous reuse practices have been somewhat successful when the same teams are moved from project to project. But this typically requires taking the software system in an all-or-nothing approach where useful components cannot be easily extracted from the whole. As a result, the system is less flexible and scalable with limited applicability to new projects. This paper will focus on the rationale behind, and implementation of the run-time executive. This executive is the core for the component-based flight software commonality and reuse process adopted at Goddard.

  4. Software Defined GPS API: Development and Implementation of GPS Correlator Architectures Using MATLAB with Focus on SDR Implementations

    DTIC Science & Technology

    2014-05-18

    and Implementation of GPS Correlator Architectures Using MATLAB with Focus on SDR Implementations The Software Defined GPS API was created with the...documentation. 9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS (ES) U.S. Army Research Office P.O. Box 12211 Research Triangle Park, NC 27709-2211 SDR ...Implementation of GPS Correlator Architectures Using MATLAB with Focus on SDR Implementations Report Title The Software Defined GPS API was created

  5. Scalable, Low-Noise Architecture for Integrated Terahertz Imagers

    NASA Astrophysics Data System (ADS)

    Gergelyi, Domonkos; Földesy, Péter; Zarándy, Ákos

    2015-06-01

    We propose a scalable, low-noise imager architecture for terahertz recordings that helps to build large-scale integrated arrays from any field-effect transistor (FET)- or HEMT-based terahertz detector. It enhances the signal-to-noise ratio (SNR) by inherently enabling complex sampling schemes. The distinguishing feature of the architecture is its serially connected detectors with electronically controllable photoresponse. We show that this architecture facilitates room-temperature imaging by decreasing the low-noise amplifier (LNA) noise to one-sixteenth of that of a non-serial sensor while also reducing the number of multiplexed signals in the same proportion. The serially coupled architecture can be combined with existing read-out circuit organizations to create high-resolution, coarse-grain sensor arrays. In addition, it adds the capability to suppress overall noise with increasing array size. The theoretical considerations are demonstrated on a 4 by 4 detector array manufactured in a standard CMOS technology with a 180 nm feature size. The detector array is integrated with a low-noise AC-coupled amplifier of 40 dB gain and has a resonant peak at 460 GHz with 200 kV/W overall sensitivity.

  6. A resilient and secure software platform and architecture for distributed spacecraft

    NASA Astrophysics Data System (ADS)

    Otte, William R.; Dubey, Abhishek; Karsai, Gabor

    2014-06-01

    A distributed spacecraft is a cluster of independent satellite modules flying in formation that communicate via ad-hoc wireless networks. This system in space is a cloud platform that facilitates sharing sensors and other computing and communication resources across multiple applications, potentially developed and maintained by different organizations. Effectively, such an architecture can realize the functions of monolithic satellites at a reduced cost and with improved adaptivity and robustness. The openness of these architectures poses special challenges because the distributed software platform has to support applications from different security domains and organizations, where information flows have to be carefully managed and compartmentalized. If the platform is to be used as a robust shared resource, its management, configuration, and resilience become challenges in themselves. We have designed and prototyped a distributed software platform for such architectures. The core element of the platform is a new operating system whose services were designed to restrict access to the network and the file system, and to enforce resource management constraints for all non-privileged processes. Mixed-criticality applications operating at different security labels are deployed and controlled by a privileged management process that also pre-configures all information flows. This paper describes the design and objectives of this layer.
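
    The idea of a privileged manager pre-configuring all information flows can be illustrated with a small sketch (Python; the label names and flow table are hypothetical, not the platform's actual policy format):

      # Hypothetical flow table: the manager authorises flows, the platform enforces them.
      ALLOWED_FLOWS = {
          ("imager_app@orgA", "downlink_service"),
          ("imager_app@orgA", "storage_service"),
          # note: nothing from orgA may reach orgB's applications
      }

      def send(sender: str, receiver: str, message: bytes) -> bool:
          if (sender, receiver) not in ALLOWED_FLOWS:
              print(f"blocked: {sender} -> {receiver}")       # enforced for non-privileged processes
              return False
          print(f"delivered {len(message)} bytes: {sender} -> {receiver}")
          return True

      send("imager_app@orgA", "downlink_service", b"\x01\x02\x03")
      send("imager_app@orgA", "thruster_ctrl@orgB", b"\x99")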

  7. Software architecture for multi-bed FDK-based reconstruction in X-ray CT scanners.

    PubMed

    Abella, M; Vaquero, J J; Sisniega, A; Pascau, J; Udías, A; García, V; Vidal, I; Desco, M

    2012-08-01

    Most small-animal X-ray computed tomography (CT) scanners are based on cone-beam geometry with a flat-panel detector orbiting in a circular trajectory. Image reconstruction in these systems is usually performed by approximate methods based on the algorithm proposed by Feldkamp et al. (FDK). Besides the implementation of the reconstruction algorithm itself, in order to design a real system it is necessary to take into account numerous issues so as to obtain the best quality images from the acquired data. This work presents a comprehensive, novel software architecture for small-animal CT scanners based on cone-beam geometry with a circular scanning trajectory. The proposed architecture covers all the steps from system calibration to volume reconstruction and conversion into Hounsfield units. It includes an efficient implementation of an FDK-based reconstruction algorithm that takes advantage of system symmetries and allows for parallel reconstruction using a multiprocessor computer. Strategies for calibration and artifact correction are discussed to justify the approaches adopted. New procedures for multi-bed misalignment, beam-hardening, and Hounsfield unit calibration are proposed. Experiments with phantoms and real data showed the suitability of the proposed software architecture for X-ray small-animal CT based on cone-beam geometry. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
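
    The last step of the chain, conversion of reconstructed attenuation values into Hounsfield units, follows the standard definition HU = 1000 * (mu - mu_water) / mu_water. A minimal Python sketch of that step (using the standard formula rather than the paper's specific multi-bed calibration procedure; the example mu values are assumed):

      # Standard Hounsfield scaling of reconstructed attenuation coefficients;
      # mu_water would come from a water-phantom calibration scan.
      import numpy as np

      def to_hounsfield(mu: np.ndarray, mu_water: float) -> np.ndarray:
          return 1000.0 * (mu - mu_water) / mu_water

      mu_water = 0.019                                # assumed per-scanner calibration value
      volume = np.array([0.0, 0.019, 0.038])          # air, water, bone-like voxel
      print(to_hounsfield(volume, mu_water))          # approx. [-1000, 0, 1000]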

  8. Software architecture for a distributed real-time system in Ada, with application to telerobotics

    NASA Technical Reports Server (NTRS)

    Olsen, Douglas R.; Messiora, Steve; Leake, Stephen

    1992-01-01

    The architecture and software design methodology presented here are described in the context of a telerobotic application in Ada, specifically the Engineering Test Bed (ETB), which was developed to support the Flight Telerobotic Servicer (FTS) Program at GSFC. However, the nature of the architecture is such that it has applications to any multiprocessor distributed real-time system. The ETB architecture, which is a derivation of the NASA/NBS Standard Reference Model (NASREM), defines a hierarchy for representing a telerobot system. Within this hierarchy, a module is a logical entity consisting of the software associated with a set of related hardware components in the robot system. A module is comprised of submodules, which are cyclically executing processes that each perform a specific set of functions. The submodules in a module can run on separate processors. The submodules in the system communicate via command/status (C/S) interface channels, which are used to send commands down and relay status back up the system hierarchy. Submodules also communicate via setpoint data links, which are used to transfer control data from one submodule to another. A submodule invokes submodule algorithms (SMAs) to perform algorithmic operations. Data that describe or model a physical component of the system are stored as objects in the World Model (WM). The WM is a system-wide distributed database that is accessible to submodules in all modules of the system for creating, reading, and writing objects.
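
    The command/status pattern between a parent and a child submodule can be sketched briefly; the sketch below is in Python rather than Ada, and the channel API is an illustrative stand-in, not the ETB interface:

      # Hypothetical command/status (C/S) channel between a parent and a child submodule.
      from queue import Queue, Empty

      class CSChannel:
          def __init__(self):
              self.commands, self.status = Queue(), Queue()

      def child_submodule(chan: CSChannel, cycles: int):
          for _ in range(cycles):                      # cyclically executing process
              try:
                  cmd = chan.commands.get_nowait()
                  chan.status.put({"cmd": cmd, "state": "executing"})
              except Empty:
                  chan.status.put({"cmd": None, "state": "idle"})

      chan = CSChannel()
      chan.commands.put("move_joint_3")
      child_submodule(chan, cycles=2)                  # one cycle with a command, one idle
      while not chan.status.empty():
          print(chan.status.get())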

  9. Sequence System Building Blocks: Using a Component Architecture for Sequencing Software

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara A.; O'Reilly, Taifun

    2005-01-01

    Over the last few years software engineering has made significant strides in making more flexible architectures and designs possible. However, at the same time, spacecraft have become more complex and flight software has become more sophisticated. Typically spacecraft are often one-of-a-kind entities that have different hardware designs, different capabilities, different instruments, etc. Ground software has become more complex and operations teams have had to learn a myriad of tools that all have different user interfaces and represent data in different ways. At Jet Propulsion Laboratory (JPL) these themes have collided to require a new approach to producing ground system software. Two different groups have been looking at tackling this particular problem. One group is working for the JPL Mars Technology Program in the Mars Science Laboratory (MSL) Focused Technology area. The other group is the JPL Multi-Mission Planning and Sequencing Group. The major concept driving these two approaches on a similar path is to provide software that can be a more cohesive, flexible system that provides a set of planning and sequencing services. This paper describes the efforts that have been made to date to create a unified approach from these disparate groups.

  10. Sequencing System Building Blocks: Using a Component Architecture for Sequencing Software

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara A.; O'Reilly, Taifun

    2006-01-01

    Over the last few years software engineering has made significant strides in making more flexible architectures and designs possible. However, at the same time, spacecraft have become more complex and flight software has become more sophisticated. Typically spacecraft are often one-of-a-kind entities that have different hardware designs, different capabilities, different instruments, etc. Ground software has become more complex and operations teams have had to learn a myriad of tools that all have different user interfaces and represent data in different ways. At Jet Propulsion Laboratory (JPL) these themes have collided to require a new approach to producing ground system software. Two different groups have been looking at tackling this particular problem. One group is working for the JPL Mars Technology Program in the Mars Science Laboratory (MSL) Focused Technology area. The other group is the JPL Multi-Mission Planning and Sequencing Group. The major concept driving these two approaches on a similar path is to provide software that can be a more cohesive, flexible system that provides a set of planning and sequencing services. This paper describes the efforts that have been made to date to create a unified approach from these disparate groups.

  13. Software Architecture for a Virtual Environment for Nano Scale Assembly (VENSA)

    PubMed Central

    Lee, Yong-Gu; Lyons, Kevin W.; Feng, Shaw C.

    2004-01-01

    A Virtual Environment (VE) uses multiple computer-generated media to let a user experience situations that are temporally and spatially prohibiting. The information flow between the user and the VE is bidirectional and the user can influence the environment. The software development of a VE requires orchestrating multiple peripherals and computers in a synchronized way in real time. Although a multitude of useful software components for VEs exists, many of these are packaged within a complex framework and can not be used separately. In this paper, an architecture is presented which is designed to let multiple frameworks work together while being shielded from the application program. This architecture, which is called the Virtual Environment for Nano Scale Assembly (VENSA), has been constructed for interfacing with an optical tweezers instrument for nanotechnology development. However, this approach can be generalized for most virtual environments. Through the use of VENSA, the programmer can rely on existing solutions and concentrate more on the application software design. PMID:27366610

  14. Project Integration Architecture: Formulation of Dimensionality in Semantic Parameters Outline

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    One of several key elements of the Project Integration Architecture (PIA) is the formulation of parameter objects which convey meaningful semantic information. The infusion of measurement dimensionality into such objects is an important part of that effort since it promises to automate the conversion of units between cooperating applications and, thereby, eliminate the mistakes that have occasionally beset other systems of information transport. This paper discusses the conceptualization of dimensionality developed as a result of that effort.
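
    A parameter object that carries its dimensionality and converts units on access can be sketched in a few lines. The Python sketch below is a generic illustration of the idea, not PIA's actual parameter classes, and the unit tables are invented for the example:

      # Hypothetical dimensioned parameter: values are stored in SI and converted on access.
      TO_SI = {"m": 1.0, "ft": 0.3048, "kg": 1.0, "lbm": 0.45359237}
      DIMENSION = {"m": "length", "ft": "length", "kg": "mass", "lbm": "mass"}

      class Parameter:
          def __init__(self, value, unit):
              self.dimension = DIMENSION[unit]
              self.si_value = value * TO_SI[unit]
          def in_unit(self, unit):
              if DIMENSION[unit] != self.dimension:
                  raise ValueError("dimension mismatch")      # catches unit-mixing mistakes
              return self.si_value / TO_SI[unit]

      span = Parameter(30.0, "ft")                 # one application works in feet
      print(round(span.in_unit("m"), 3))           # a cooperating application reads metres: 9.144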

  15. Design of an integrated airframe/propulsion control system architecture

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C.; Lee, C. William; Strickland, Michael J.

    1990-01-01

    The design of an integrated airframe/propulsion control system architecture is described. The design is based on a prevalidation methodology that used both reliability and performance tools. An account is given of the motivation for the final design and problems associated with both reliability and performance modeling. The appendices contain a listing of the code for both the reliability and performance model used in the design.

  16. On theory integration: Toward developing affective components within cognitive architectures.

    PubMed

    Olds, Justin M; Marewski, Julian N

    2015-01-01

    In The Cognitive-Emotional Brain, Pessoa (2013) suggests that cognition and emotion should not be considered separately. We agree with this and argue that cognitive architectures can provide steady ground for this kind of theory integration and for investigating interactions among underlying cognitive processes. We briefly explore how affective components can be implemented and how neuroimaging measures can help validate models and influence theory development.

  17. Integrating Effective Planning Horizons into an Intelligent Systems Architecture

    DTIC Science & Technology

    2002-08-01

    Integrating Effective Planning Horizons into an Intelligent Systems Architecture J. P. Gunderson and L. F. Gunderson Gunderson and Gunderson, Inc...Intelligent System. 1 INTRODUCTION The world is not a perfect place, and if intelligent systems are to function effectively they must be capable of handling...machine based intelligence [6]. Thus, it appears that all intelligent systems (whether biologic or electronic) must make trade-offs between cognition

  18. Integrated Cognitive-neuroscience Architectures for Understanding Sensemaking (ICArUS): Transition to the Intelligence Community

    DTIC Science & Technology

    2014-12-01

    Integrated Cognitive- neuroscience Architectures for Understanding Sensemaking (ICArUS): Transition to the Intelligence Community Kevin...Integrated Cognitive- neuroscience Architectures for Understanding Sensemaking (ICArUS): A Computational Basis for ICArUS: Transition to the...Research Projects Activity) program ICArUS (Integrated Cognitive- neuroscience Architectures for Understanding Sensemaking) developed and tested brain

  19. Software Testbed for Developing and Evaluating Integrated Autonomous Systems

    DTIC Science & Technology

    2015-03-01

    Software Testbed for Developing and Evaluating Integrated Autonomous Systems James Ong , Emilio...Remolina, Axel Prompt Stottler Henke Associates, Inc. 1670 S. Amphlett Blvd., suite 310 San Mateo, CA 94402 650-931-2700 ong , remolina, aprompt...www.stottlerhenke.com/datamontage/ [13] Ong , J., E. Remolina, D. E. Smith, M. S. Boddy (2013) A Visual Integrated Development Environment for Automated Planning

  20. Study on Spacelab software development and integration concepts

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A study was conducted to define the complexity and magnitude of the Spacelab software challenge. The study was based on current Spacelab program concepts, anticipated flight schedules, and ground operation plans. The study was primarily directed toward identifying and solving problems related to the experiment flight application and tests and checkout software executing in the Spacelab onboard command and data management subsystem (CDMS) computers and electrical ground support equipment (EGSE). The study provides a conceptual base from which it is possible to proceed into the development phase of the Software Test and Integration Laboratory (STIL) and establishes guidelines for the definition of standards which will ensure that the total Spacelab software is understood prior to entering development.

  1. Software for systems biology: from tools to integrated platforms.

    PubMed

    Ghosh, Samik; Matsuoka, Yukiko; Asai, Yoshiyuki; Hsin, Kun-Yi; Kitano, Hiroaki

    2011-11-03

    Understanding complex biological systems requires extensive support from software tools. Such tools are needed at each step of a systems biology computational workflow, which typically consists of data handling, network inference, deep curation, dynamical simulation and model analysis. In addition, there are now efforts to develop integrated software platforms, so that tools that are used at different stages of the workflow and by different researchers can easily be used together. This Review describes the types of software tools that are required at different stages of systems biology research and the current options that are available for systems biology researchers. We also discuss the challenges and prospects for modelling the effects of genetic changes on physiology and the concept of an integrated platform.

  2. Architecture-Based Unit Testing of the Flight Software Product Line

    NASA Technical Reports Server (NTRS)

    Ganesan, Dharmalingam; Lindvall, Mikael; McComas, David; Bartholomew, Maureen; Slegel, Steve; Medina, Barbara

    2010-01-01

    This paper presents an analysis of the unit testing approach developed and used by the Core Flight Software (CFS) product line team at the NASA GSFC. The goal of the analysis is to understand, review, and recommend strategies for improving the existing unit testing infrastructure as well as to capture lessons learned and best practices that can be used by other product line teams for their unit testing. The CFS unit testing framework is designed and implemented as a set of variation points, and thus testing support is built into the product line architecture. The analysis found that the CFS unit testing approach has many practical and good solutions that are worth considering when deciding how to design the testing architecture for a product line, which are documented in this paper along with some suggested improvements.

  3. Exploiting new CPU architectures in the SuperB software framework

    NASA Astrophysics Data System (ADS)

    Corvo, M.; Bianchi, F.; Ciaschini, V.; Delprete, D.; Di Simone, A.; Donvito, G.; Fella, A.; Giacomini, F.; Gianoli, A.; Longo, S.; Luitz, S.; Luppi, E.; Manzali, M.; Pardi, S.; Perez, A.; Rama, M.; Russo, G.; Santeramo, B.; Stroili, R.; Tomassetti, L.

    2012-12-01

    The SuperB asymmetric-energy e+e- collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavour sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab^-1 and a luminosity target of 10^36 cm^-2 s^-1. These parameters require a substantial growth in computing requirements and performances. The SuperB collaboration is thus investigating the advantages of new CPU architectures (multi and many cores) and how to exploit their capability of task parallelization in the framework for simulation and analysis software. In this work we present the underlying architecture which we intend to use and some preliminary performance results of the first framework prototype.

  4. Software for the occupational health and safety integrated management system

    SciTech Connect

    Vătăsescu, Mihaela

    2015-03-10

    This paper presents the design and production of a software application for the Occupational Health and Safety Integrated Management System, with a view to rapidly drawing up the system documents in the field of occupational health and safety.

  5. Integrating Visual Imagery into Workplace Literacy Computer Software.

    ERIC Educational Resources Information Center

    Bixler, Brett; Spotts, John

    Growing interest in adult literacy has encouraged research and innovative programming in the field. This paper examines several pieces of software created by the Institute for the Study of Adult Literacy at Pennsylvania State University and explains their underlying conceptual foundations and the integration of visual materials. The Penn State…

  6. AIDA: An Integrated Authoring Environment for Educational Software.

    ERIC Educational Resources Information Center

    Mendes, Antonio Jose; Mendes, Teresa

    1996-01-01

    Describes an integrated authoring environment, AIDA ("Ambiente Integrado de Desenvolvimento de Aplicacoes educacionais"), that was developed at the University of Coimbra (Portugal) for educational software. Highlights include the design module, a prototyping tool that allows for multimedia, simulations, and modularity; execution module;…

  8. A common distributed language approach to software integration

    NASA Technical Reports Server (NTRS)

    Antonelli, Charles J.; Volz, Richard A.; Mudge, Trevor N.

    1989-01-01

    An important objective in software integration is the development of techniques to allow programs written in different languages to function together. Several approaches are discussed toward achieving this objective and the Common Distributed Language Approach is presented as the approach of choice.

  9. NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.

  10. Baobab: a software architecture and methodology for distributed simulation and interaction

    NASA Astrophysics Data System (ADS)

    Rosoff, Jared

    2000-06-01

    We present Baobab, a software architecture and methodology for distributed simulation and interaction. Using pervasive componentization throughout the system, Baobab provides a stable but extensible platform for the development of content-rich interactive simulation. Entities in the environment are simulated using dynamically loadable simulation modules (shared libraries, Java byte codes, scripts, etc.). We provide an elegant API to the simulation module developer, allowing modules to interact with entities they have never encountered before. This approach allows domain experts to develop simulation modules based on their expertise with limited knowledge of the inner workings of a VE system.
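
    To make the module-loading idea concrete, here is a hedged sketch of a loader and a deliberately small update API of the kind such a system might expose; the module names and functions are hypothetical, not Baobab's actual API.

        # Illustrative sketch only: load simulation modules by name at run time and
        # let them interact with entities through a small, stable API.
        import importlib

        class Entity:
            """Generic entity exposing attributes that modules can query/update."""
            def __init__(self, **attrs):
                self.attrs = dict(attrs)

        def load_module(dotted_name):
            """Load a simulation module (here a plain Python module) at run time."""
            return importlib.import_module(dotted_name)

        def step(modules, entities, dt):
            """Call each loaded module's update() on every entity it accepts."""
            for mod in modules:
                for ent in entities:
                    if mod.accepts(ent):      # module decides if it knows the entity
                        mod.update(ent, dt)   # API kept deliberately small

        # Usage (assuming a module 'gravity_sim' providing accepts()/update()):
        #   mods = [load_module("gravity_sim")]
        #   step(mods, [Entity(mass=1.0, z=100.0, vz=0.0)], dt=0.1)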

  11. BH-ShaDe: A Software Tool That Assists Architecture Students in the Ill-Structured Task of Housing Design

    ERIC Educational Resources Information Center

    Millan, Eva; Belmonte, Maria-Victoria; Ruiz-Montiel, Manuela; Gavilanes, Juan; Perez-de-la-Cruz, Jose-Luis

    2016-01-01

    In this paper, we present BH-ShaDe, a new software tool to assist architecture students learning the ill-structured domain/task of housing design. The software tool provides students with automatic or interactively generated floor plan schemas for basic houses. The students can then use the generated schemas as initial seeds to develop complete…

  12. Robotic collaborative technology alliance: an open architecture approach to integrated research

    NASA Astrophysics Data System (ADS)

    Dean, Robert Michael S.; DiBerardino, Charles A.

    2014-06-01

    The Robotics Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities [1]. Research occurs in 5 main Task Areas: Intelligence, Perception, Dexterous Manipulation and Unique Mobility (DMUM), Human Robot Interaction (HRI), and Integrated Research (IR). This last task of Integrated Research is especially critical and challenging. Individual research components can only be fully assessed when integrated onto a robot where they interact with other aspects of the system to create cross-Task capabilities which move beyond the State of the Art. Adding to the complexity, the RCTA is comprised of 12+ independent organizations across the United States. Each has its own constraints due to development environments, ITAR, "lab" vs "real-time" implementations, and legacy software investments from previous and ongoing programs. We have developed three main components to manage the Integration Task. The first is RFrame, a data-centric transport agnostic middleware which unifies the disparate environments, protocols, and data collection mechanisms. Second is the modular Intelligence Architecture built around the Common World Model (CWM). The CWM instantiates a Common Data Model and provides access services. Third is RIVET, an ITAR free Hardware-In-The-Loop simulator based on 3D game technology. RIVET provides each researcher a common test-bed for development prior to integration, and a regression test mechanism. Once components are integrated and verified, they are released back to the consortium to provide the RIVET baseline for further research. This approach allows Integration of new and legacy systems built upon different architectures, by application of Open Architecture principles.

  13. Migrating data from TcSE to DOORS : an evaluation of the T-Plan Integrator software application.

    SciTech Connect

    Post, Debra S.; Manzanares, David A.; Taylor, Jeffrey L.

    2011-02-01

    This report describes our evaluation of the T-Plan Integrator software application as it was used to transfer a real data set from the Teamcenter for Systems Engineering (TcSE) software application to the DOORS software application. The T-Plan Integrator was evaluated to determine if it would meet the needs of Sandia National Laboratories to migrate our existing data sets from TcSE to DOORS. This report presents the struggles of migrating data and focuses on how the Integrator can be used to map a data set and its data architecture from TcSE to DOORS. Finally, this report describes how the bulk of the migration can take place using the Integrator; however, about 20-30% of the data would need to be transferred from TcSE to DOORS manually. This report does not evaluate the transfer of data from DOORS to TcSE.

  14. An Analysis of Security and Privacy Issues in Smart Grid Software Architectures on Clouds

    SciTech Connect

    Simmhan, Yogesh; Kumbhare, Alok; Cao, Baohua; Prasanna, Viktor K.

    2011-07-09

    Power utilities globally are increasingly upgrading to Smart Grids that use bi-directional communication with the consumer to enable an information-driven approach to distributed energy management. Clouds offer features well suited for Smart Grid software platforms and applications, such as elastic resources and shared services. However, the security and privacy concerns inherent in an information rich Smart Grid environment are further exacerbated by their deployment on Clouds. Here, we present an analysis of security and privacy issues in a Smart Grids software architecture operating on different Cloud environments, in the form of a taxonomy. We use the Los Angeles Smart Grid Project that is underway in the largest U.S. municipal utility to drive this analysis that will benefit both Cloud practitioners targeting Smart Grid applications, and Cloud researchers investigating security and privacy.

  15. A Microwave Photonic Interference Canceller: Architectures, Systems, and Integration

    NASA Astrophysics Data System (ADS)

    Chang, Matthew P.

    This thesis is a comprehensive portfolio of work on a Microwave Photonic Self-Interference Canceller (MPC), a specialized optical system designed to eliminate interference from radio-frequency (RF) receivers. The novelty and value of the microwave photonic system lie in its ability to operate over bandwidths and frequencies that are orders of magnitude larger than what is possible using existing RF technology. The work begins, in 2012, with a discrete fiber-optic microwave photonic canceller, which prior work had demonstrated as a proof-of-concept, and culminates, in 2017, with the first ever monolithically integrated microwave photonic canceller. With an eye towards practical implementation, the thesis establishes novelty through three major project thrusts (Fig. 1): (1) Extensive RF and system analysis to develop a full understanding of how, and through what mechanisms, MPCs affect an RF receiver. The first investigations of how a microwave photonic canceller performs in an actual wireless environment and a digital radio are also presented. (2) New architectures to improve the performance and functionality of MPCs, based on the analysis performed in Thrust 1. A novel balanced microwave photonic canceller architecture is developed and experimentally demonstrated. The balanced architecture shows significant improvements in link gain, noise figure, and dynamic range. Its main advantage is its ability to suppress common-mode noise and reduce noise figure by increasing the optical power. (3) Monolithic integration of the microwave photonic canceller into a photonic integrated circuit. This thrust presents the progression of integrating individual discrete devices into their semiconductor equivalent, as well as a full functional and RF analysis of the first ever integrated microwave photonic canceller.

  16. The Gaggle: An open-source software system for integrating bioinformatics software and data sources

    PubMed Central

    Shannon, Paul T; Reiss, David J; Bonneau, Richard; Baliga, Nitin S

    2006-01-01

    Background Systems biologists work with many kinds of data, from many different sources, using a variety of software tools. Each of these tools typically excels at one type of analysis, such as of microarrays, of metabolic networks and of predicted protein structure. A crucial challenge is to combine the capabilities of these (and other forthcoming) data resources and tools to create a data exploration and analysis environment that does justice to the variety and complexity of systems biology data sets. A solution to this problem should recognize that data types, formats and software in this high throughput age of biology are constantly changing. Results In this paper we describe the Gaggle - a simple, open-source Java software environment that helps to solve the problem of software and database integration. Guided by the classic software engineering strategy of separation of concerns and a policy of semantic flexibility, it integrates existing popular programs and web resources into a user-friendly, easily-extended environment. We demonstrate that four simple data types (names, matrices, networks, and associative arrays) are sufficient to bring together diverse databases and software. We highlight some capabilities of the Gaggle with an exploration of Helicobacter pylori pathogenesis genes, in which we identify a putative ricin-like protein - a discovery made possible by simultaneous data exploration using a wide range of publicly available data and a variety of popular bioinformatics software tools. Conclusion We have integrated diverse databases (for example, KEGG, BioCyc, String) and software (Cytoscape, DataMatrixViewer, R statistical environment, and TIGR Microarray Expression Viewer). Through this loose coupling of diverse software and databases the Gaggle enables simultaneous exploration of experimental data (mRNA and protein abundance, protein-protein and protein-DNA interactions), functional associations (operon, chromosomal proximity, phylogenetic pattern
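
    A conceptual sketch of the broadcast pattern built on those four data types is given below; the real Gaggle is a Java boss/goose system, so the class and method names here are illustrative only.

        # Conceptual sketch: a tiny in-process "boss" broadcasting the four
        # Gaggle-style data types (name lists, matrices, networks, associative
        # arrays) to registered tools ("geese").
        class Boss:
            def __init__(self):
                self.geese = []            # connected tools

            def register(self, goose):
                self.geese.append(goose)

            def broadcast(self, source, kind, payload):
                assert kind in {"namelist", "matrix", "network", "associative_array"}
                for goose in self.geese:
                    if goose is not source:
                        goose.receive(kind, payload)

        class PrintingGoose:
            def __init__(self, name):
                self.name = name
            def receive(self, kind, payload):
                print(f"{self.name} received {kind}: {payload!r}")

        boss = Boss()
        viewer, stats = PrintingGoose("viewer"), PrintingGoose("stats")
        boss.register(viewer)
        boss.register(stats)
        boss.broadcast(source=stats, kind="namelist",
                       payload=["HP0001", "HP0547"])   # e.g. gene identifiers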

  17. CONNJUR Workflow Builder: A software integration environment for spectral reconstruction

    PubMed Central

    Fenwick, Matthew; Weatherby, Gerard; Vyas, Jay; Sesanker, Colbert; Martyn, Timothy O.; Ellis, Heidi J.C.; Gryk, Michael R.

    2015-01-01

    CONNJUR Workflow Builder (WB) is an open-source software integration environment that leverages existing spectral reconstruction tools to create a synergistic, coherent platform for converting biomolecular NMR data from the time domain to the frequency domain. WB provides data integration of primary data and metadata using a relational database, and includes a library of pre-built workflows for processing time domain data. WB simplifies maximum entropy reconstruction, facilitating the processing of non-uniformly sampled time domain data. As will be shown in the paper, the unique features of WB provide it with novel abilities to enhance the quality, accuracy, and fidelity of the spectral reconstruction process. WB also provides features which promote collaboration, education, parameterization, and non-uniform data sets along with processing integrated with the Rowland NMR Toolkit (RNMRTK) and NMRPipe software packages. WB is available free of charge in perpetuity, dual-licensed under the MIT and GPL open source licenses. PMID:26066803

  18. A Software Tool for Integrated Optical Design Analysis

    NASA Technical Reports Server (NTRS)

    Moore, Jim; Troy, Ed; DePlachett, Charles; Montgomery, Edward (Technical Monitor)

    2001-01-01

    Design of large precision optical systems requires multi-disciplinary analysis, modeling, and design. Thermal, structural and optical characteristics of the hardware must be accurately understood in order to design a system capable of accomplishing the performance requirements. The interactions between each of the disciplines become stronger as systems are designed lighter weight for space applications. This coupling dictates a concurrent engineering design approach. In the past, integrated modeling tools have been developed that attempt to integrate all of the complex analysis within the framework of a single model. This often results in modeling simplifications and it requires engineering specialists to learn new applications. The software described in this presentation addresses the concurrent engineering task using a different approach. The software tool, Integrated Optical Design Analysis (IODA), uses data fusion technology to enable a cross discipline team of engineering experts to concurrently design an optical system using their standard validated engineering design tools.

  19. Texture analysis software: integration with a radiological workstation.

    PubMed

    Duvauferrier, Régis; Bezy, Joan; Bertaud, Valérie; Toussaint, Grégoire; Morelli, John; Lasbleiz, Jeremy

    2012-01-01

    Image analysis is the daily task of radiologists. The texture of a structure or imaging finding can be more difficult to describe than other parameters. Image processing can help the radiologist complete this difficult task. The aim of this article is to explain how we have developed texture analysis software and integrated it into a standard radiological workstation. The texture analysis method has been divided into three steps: definition of primitive elements, counting, and statistical analysis. The software was developed in C++ and integrated into a Siemens workstation with a graphical user interface. The results of analyses may be exported in Excel format. The software allows users to perform texture analyses on any type of radiological image without the need for image transfer by simply placing a region of interest. This tool has already been used to assess the trabecular network of vertebrae. The integration of such software into PACS extends the applicability of texture analysis beyond that of a mere research tool and facilitates its use in routine clinical practice.
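
    The three-step scheme (primitive elements, counting, statistical analysis) can be illustrated with a small sketch; the choice of horizontal gray-level pairs as the primitive is an assumption for illustration, not necessarily the authors' algorithm.

        # Hedged sketch of the three steps: primitives (horizontal gray-level pairs),
        # counting (a co-occurrence matrix), and statistics of the resulting
        # distribution.
        import numpy as np

        def texture_features(roi, levels=8):
            # 1. Primitive elements: quantize the ROI and take horizontal pixel pairs.
            q = (roi.astype(float) / roi.max() * (levels - 1)).astype(int)
            pairs = np.stack([q[:, :-1].ravel(), q[:, 1:].ravel()], axis=1)
            # 2. Counting: build a co-occurrence matrix of the pairs.
            cooc = np.zeros((levels, levels))
            for a, b in pairs:
                cooc[a, b] += 1
            p = cooc / cooc.sum()
            # 3. Statistical analysis: summary statistics of the distribution.
            i, j = np.indices(p.shape)
            contrast = ((i - j) ** 2 * p).sum()
            energy = (p ** 2).sum()
            entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
            return {"contrast": contrast, "energy": energy, "entropy": entropy}

        roi = np.random.randint(0, 255, size=(64, 64))   # stand-in for a placed ROI
        print(texture_features(roi))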

  20. Evaluation of software maintainability with openEHR - a comparison of architectures.

    PubMed

    Atalag, Koray; Yang, Hong Yul; Tempero, Ewan; Warren, James R

    2014-11-01

    To assess whether it is easier to maintain a clinical information system developed using openEHR model driven development versus mainstream methods. A new open source application (GastrOS) has been developed following openEHR's multi-level modelling approach using .Net/C# based on the same requirements of an existing clinically used application developed using Microsoft Visual Basic and Access database. Almost all the domain knowledge was embedded into the software code and data model in the latter. The same domain knowledge has been expressed as a set of openEHR Archetypes in GastrOS. We then introduced eight real-world change requests that had accumulated during live clinical usage, and implemented these in both systems while measuring time for various development tasks and change in software size for each change request. Overall it took half the time to implement changes in GastrOS. However it was the more difficult application to modify for one change request, suggesting the nature of change is also important. It was not possible to implement changes by modelling only. Comparison of relative measures of time and software size change within each application highlights how architectural differences affected maintainability across change requests. The use of openEHR model driven development can result in better software maintainability. The degree to which openEHR affects software maintainability depends on the extent and nature of domain knowledge involved in changes. Although we used relative measures for time and software size, confounding factors could not be totally excluded as a controlled study design was not feasible. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  1. Debris Examination Using Ballistic and Radar Integrated Software

    NASA Technical Reports Server (NTRS)

    Griffith, Anthony; Schottel, Matthew; Lee, David; Scully, Robert; Hamilton, Joseph; Kent, Brian; Thomas, Christopher; Benson, Jonathan; Branch, Eric; Hardman, Paul; Stuble, Martin

    2012-01-01

    The Debris Examination Using Ballistic and Radar Integrated Software (DEBRIS) program was developed to provide rapid and accurate analysis of debris observed by the NASA Debris Radar (NDR). This software provides a greatly improved analysis capacity over earlier manual processes, allowing for up to four times as much data to be analyzed by one-quarter of the personnel required by earlier methods. There are two applications that comprise the DEBRIS system: the Automated Radar Debris Examination Tool (ARDENT) and the primary DEBRIS tool.

  2. An ontology-based architecture for integration of clinical trials management applications.

    PubMed

    Shankar, Ravi D; Martins, Susana B; O'Connor, Martin; Parrish, David B; Das, Amar K

    2007-10-11

    Management of complex clinical trials involves the coordinated use of a myriad of software applications by trial personnel. The applications typically use distinct knowledge representations and generate an enormous amount of information during the course of a trial. It is vital that the applications exchange trial semantics for efficient management of the trials and subsequent analysis of clinical trial data. Existing model-based frameworks do not address the requirements of semantic integration of heterogeneous applications. We have built an ontology-based architecture to support interoperation of clinical trial software applications. Central to our approach is a suite of clinical trial ontologies, which we call Epoch, that define the vocabulary and semantics necessary to represent information on clinical trials. We are continuing to demonstrate and validate our approach with different clinical trials management applications and with a growing number of clinical trials.

  3. An Ontology-based Architecture for Integration of Clinical Trials Management Applications

    PubMed Central

    Shankar, Ravi D.; Martins, Susana B.; O’Connor, Martin; Parrish, David B.; Das, Amar K.

    2007-01-01

    Management of complex clinical trials involves the coordinated use of a myriad of software applications by trial personnel. The applications typically use distinct knowledge representations and generate an enormous amount of information during the course of a trial. It is vital that the applications exchange trial semantics for efficient management of the trials and subsequent analysis of clinical trial data. Existing model-based frameworks do not address the requirements of semantic integration of heterogeneous applications. We have built an ontology-based architecture to support interoperation of clinical trial software applications. Central to our approach is a suite of clinical trial ontologies, which we call Epoch, that define the vocabulary and semantics necessary to represent information on clinical trials. We are continuing to demonstrate and validate our approach with different clinical trials management applications and with a growing number of clinical trials. PMID:18693919

  4. Integrated software suite for magnetocardiographic data analysis--a proposal based on an interactive programming environment.

    PubMed

    Comani, S; Mantini, D; Merlino, B; Reale, M; Di Luzio, S; Romani, G L

    2005-01-01

    This paper describes an integrated software suite (ISS) for the processing of magnetocardiographic (MCG) recordings obtained with superconducting multi-channel systems having different characteristics. We aimed to develop a highly flexible suite including toolboxes for current MCG applications, organized consistently with an open architecture that allows function integrations and upgrades with minimal modifications; the suite was designed to suit not only physicists and engineers but also physicians, who have a different professional profile and are accustomed to retrieving information in different ways. The MCG-ISS was designed to work with all common graphical user interface operating systems. MATLAB was chosen as the interactive programming environment (IPE), and the software was developed to achieve usability, interactivity, reliability, modularity, expansibility, interoperability, adaptability and graphics style tailoring. Three users, already experienced in MCG data analysis, have intensively tested MCG-ISS for six months. A great amount of MCG data on normal subjects and patients was used to assess software performance in terms of user compliance and confidence and total analysis time. The proposed suite is an all-in-one analysis tool that succeeded in speeding up MCG data analysis by about 55% with respect to standard reference routines; it consequently enhanced analysis performance and user compliance. Those results, together with the MCG-ISS advantage of being independent of the acquisition system, suggest that software suites like the proposed one could support a wider diffusion of MCG as a diagnostic tool in the clinical setting.

  5. Integrated command, control, communications and computation system functional architecture

    NASA Technical Reports Server (NTRS)

    Cooley, C. G.; Gilbert, L. E.

    1981-01-01

    The functional architecture for an integrated command, control, communications, and computation system applicable to the command and control portion of the NASA End-to-End Data System is described, including the downlink data processing and analysis functions required to support the uplink processes. The functional architecture is composed of four elements: (1) the functional hierarchy, which provides the decomposition and allocation of the command and control functions to the system elements; (2) the key system features, which summarize the major system capabilities; (3) the operational activity threads, which illustrate the interrelationship between the system elements; and (4) the interfaces, which illustrate those elements that originate or generate data and those elements that use the data. The interfaces also provide a description of the data and the data utilization and access techniques.

  6. Architecture and Implementation of OpenPET Firmware and Embedded Software.

    PubMed

    Abu-Nimeh, Faisal T; Ito, Jennifer; Moses, William W; Peng, Qiyu; Choong, Woon-Seng

    2016-04-01

    OpenPET is an open source, modular, extendible, and high-performance platform suitable for multi-channel data acquisition and analysis. Due to the flexibility of the hardware, firmware, and software architectures, the platform is capable of interfacing with a wide variety of detector modules not only in medical imaging but also in homeland security applications. Analog signals from radiation detectors share similar characteristics - a pulse whose area is proportional to the deposited energy and whose leading edge is used to extract a timing signal. As a result, a generic design method of the platform is adopted for the hardware, firmware, and software architectures and implementations. The analog front-end is hosted on a module called a Detector Board, where each board can filter, combine, timestamp, and process multiple channels independently. The processed data is formatted and sent through a backplane bus to a module called Support Board, where 1 Support Board can host up to eight Detector Board modules. The data in the Support Board, coming from 8 Detector Board modules, can be aggregated or correlated (if needed) depending on the algorithm implemented or runtime mode selected. It is then sent out to a computer workstation for further processing. The number of channels (detector modules), to be processed, mandates the overall OpenPET System Configuration, which is designed to handle up to 1,024 channels using 16-channel Detector Boards in the Standard System Configuration and 16,384 channels using 32-channel Detector Boards in the Large System Configuration.
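
    The Detector Board to Support Board data flow can be sketched as follows; the packet layout and field sizes are hypothetical, not the actual OpenPET format.

        # Illustrative sketch of the data flow: Detector Boards timestamp and format
        # per-channel events; a Support Board aggregates the streams of up to eight
        # Detector Boards before sending them to the workstation.
        import struct, time

        def detector_board_packet(board_id, channel, energy, timestamp=None):
            """Pack one event: board, channel, energy (ADC counts), 64-bit timestamp."""
            ts = int((timestamp if timestamp is not None else time.time()) * 1e6)
            return struct.pack("<BBHQ", board_id, channel, energy, ts)

        class SupportBoard:
            MAX_DETECTOR_BOARDS = 8
            def __init__(self):
                self.buffer = []
            def ingest(self, packet):
                board_id, channel, energy, ts = struct.unpack("<BBHQ", packet)
                assert board_id < self.MAX_DETECTOR_BOARDS
                self.buffer.append((ts, board_id, channel, energy))
            def flush(self):
                """Aggregate (here: time-order) events and hand them to the host."""
                events = sorted(self.buffer)
                self.buffer.clear()
                return events

        sb = SupportBoard()
        sb.ingest(detector_board_packet(board_id=3, channel=12, energy=842))
        sb.ingest(detector_board_packet(board_id=0, channel=5, energy=515))
        print(sb.flush())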

  7. Software architecture for a multi-purpose real-time control unit for research purposes

    NASA Astrophysics Data System (ADS)

    Epple, S.; Jung, R.; Jalba, K.; Nasui, V.

    2017-05-01

    A new, freely programmable, scalable control system for academic research purposes was developed. The intention was to have a control unit capable of handling multiple PT1000 temperature sensors with reasonable accuracy over a useful temperature range, as well as digital input signals, while providing powerful output signals. To take full advantage of the system, control loops are run in real time. The whole eight-bit system with very limited memory runs independently of a personal computer. The two on-board RS232 connectors allow further units or other equipment to be connected, as required, in real time. This paper describes the software architecture for the third prototype, which now provides stable measurements and an improvement in accuracy compared to the previous designs. As a test case, a thermal solar system producing hot tap water and assisting heating in a single-family house was implemented. The solar fluid pump was power-controlled, and several temperatures at different points in the hydraulic system were measured and used in the control algorithms. The software architecture proved suitable for testing several different control strategies and their corresponding algorithms for the thermal solar system.
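
    One control strategy such a unit could run is sketched below: the solar pump power is derived from the collector/tank temperature difference with a hysteresis band. The thresholds and readings are hypothetical placeholders, not the authors' firmware.

        # Hedged sketch: modulate solar-pump power from the collector/tank
        # temperature difference, with a small hysteresis band.
        DT_ON, DT_OFF = 6.0, 2.0          # start/stop temperature differences (K)
        PUMP_MIN, PUMP_MAX = 0.3, 1.0     # pump power limits (fraction of full power)

        def pump_power(t_collector, t_tank, pump_running):
            dt = t_collector - t_tank
            if pump_running:
                if dt < DT_OFF:                       # hysteresis: switch off late
                    return 0.0, False
                power = PUMP_MIN + (PUMP_MAX - PUMP_MIN) * min(dt, 20.0) / 20.0
                return power, True
            if dt > DT_ON:                            # hysteresis: switch on late
                return PUMP_MIN, True
            return 0.0, False

        # A few control-loop iterations with made-up PT1000 readings:
        running = False
        for t_coll, t_tank in [(55.2, 48.0), (61.7, 48.5), (49.0, 48.0)]:
            power, running = pump_power(t_coll, t_tank, running)
            print(f"collector={t_coll} tank={t_tank} -> pump power {power:.2f}")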

  8. Integration of CMM software standards for nanopositioning and nanomeasuring machines

    NASA Astrophysics Data System (ADS)

    Sparrer, E.; Machleidt, T.; Hausotte, T.; Manske, E.; Franke, K.-H.

    2011-06-01

    The paper focuses on the utilization of nanopositioning and nanomeasuring machines as three-dimensional coordinate measuring machines by means of the internationally harmonized communication protocol Inspection plus plus for Dimensional Measurement Equipment (abbreviated I++DME). I++DME was designed in 1999 to enable the interoperability of different measuring hardware, such as coordinate measuring machines, form testers, and camshaft or crankshaft measuring machines, with a priori unknown third-party controlling and analyzing software. Our recent work focused on the implementation of a modular, standard-conformant command interpreter server for the Inspection plus plus protocol. This communication protocol enables the use of I++DME-compliant graphical controlling software, which is easier to operate and less error prone than the currently used textual programming via MathWorks MATLAB. The function and architecture of the I++DME command interpreter are discussed, and the principle of operation is demonstrated by means of an example controlling a nanopositioning and nanomeasuring machine with Hexagon Metrology's controlling and analyzing software QUINDOS 7 via the I++DME command interpreter server.
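
    The core of such a command interpreter can be sketched as a parse-and-dispatch loop; the grammar and command names below are simplified and indicative only, and the real I++DME specification defines the exact tags, responses, and command set.

        # Simplified sketch of a command-interpreter core: read tagged text commands,
        # dispatch them to handler methods, and return tagged responses.
        import re

        CMD_RE = re.compile(r"^(?P<tag>\d+)\s+(?P<name>\w+)\((?P<args>.*)\)\s*$")

        class Interpreter:
            def __init__(self, machine):
                self.machine = machine
            def handle_line(self, line):
                m = CMD_RE.match(line)
                if not m:
                    return "! syntax error"
                name, args = m.group("name"), m.group("args")
                handler = getattr(self.machine, "cmd_" + name, None)
                if handler is None:
                    return f"{m.group('tag')} ! unknown command {name}"
                result = handler(args)
                return f"{m.group('tag')} % {result}"

        class FakeMachine:
            def cmd_StartSession(self, args):
                return "session started"
            def cmd_GoTo(self, args):
                return f"moving to ({args})"

        interp = Interpreter(FakeMachine())
        for line in ["00001 StartSession()", "00002 GoTo(X(1.0), Y(2.0), Z(0.5))"]:
            print(interp.handle_line(line))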

  9. Toward an integrated software platform for systems pharmacology.

    PubMed

    Ghosh, Samik; Matsuoka, Yukiko; Asai, Yoshiyuki; Hsin, Kun-Yi; Kitano, Hiroaki

    2013-12-01

    Understanding complex biological systems requires the extensive support of computational tools. This is particularly true for systems pharmacology, which aims to understand the action of drugs and their interactions in a systems context. Computational models play an important role as they can be viewed as an explicit representation of biological hypotheses to be tested. A series of software and data resources are used for model development, verification, and exploration of possible behaviors of biological systems using the model, which may not be possible or cost effective through experiments. Software platforms play a dominant role in supporting creativity and productivity and have transformed many industries; these techniques can be applied to biology as well. Establishing an integrated software platform will be the next important step in the field.

  10. An approach to integrating and creating flexible software environments

    NASA Technical Reports Server (NTRS)

    Bellman, Kirstie L.

    1992-01-01

    Engineers and scientists are attempting to represent, analyze, and reason about increasingly complex systems. Many researchers have been developing new ways of creating increasingly open environments. In this research on VEHICLES, a conceptual design environment for space systems, an approach to flexibility and integration, called 'wrapping', was developed, based on the collection and then processing of explicit qualitative descriptions of all the software resources in the environment. Currently, a simulation, VSIM, is available and is used to study both the types of wrapping descriptions and the processes necessary to use the metaknowledge to combine, select, adapt, and explain some of the software resources used in VEHICLES. What was learned about the types of knowledge necessary for the wrapping approach is described, along with the implications of wrapping for several key software engineering issues.
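
    The wrapping idea, explicit qualitative metadata attached to each resource and consulted when selecting and combining resources, can be sketched as follows; the metadata fields and resources are hypothetical.

        # Conceptual sketch: each software resource carries explicit qualitative
        # metadata, and a selector uses that metaknowledge to pick an applicable
        # resource for a task.
        WRAPPINGS = [
            {"name": "orbit_sim",   "task": "simulate", "domain": "orbital",
             "fidelity": "high",    "run": lambda cfg: f"orbit_sim({cfg})"},
            {"name": "quick_orbit", "task": "simulate", "domain": "orbital",
             "fidelity": "low",     "run": lambda cfg: f"quick_orbit({cfg})"},
            {"name": "thermal_est", "task": "estimate", "domain": "thermal",
             "fidelity": "medium",  "run": lambda cfg: f"thermal_est({cfg})"},
        ]

        def select(task, domain, prefer_fidelity="high"):
            """Use the wrapping metadata to choose an applicable resource."""
            candidates = [w for w in WRAPPINGS
                          if w["task"] == task and w["domain"] == domain]
            candidates.sort(key=lambda w: w["fidelity"] == prefer_fidelity,
                            reverse=True)
            return candidates[0] if candidates else None

        chosen = select("simulate", "orbital", prefer_fidelity="low")
        print("selected:", chosen["name"], "->", chosen["run"]({"duration_s": 600}))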

  11. Integrated fiducial sample mount and software for correlated microscopy

    SciTech Connect

    Timothy R McJunkin; Jill R. Scott; Tammy L. Trowbridge; Karen E. Wright

    2014-02-01

    A sample mount of novel design with integrated fiducials, together with software for assisting operators in easily and efficiently locating points of interest established in previous analytical sessions, is described. The sample holder and software were evaluated with experiments to demonstrate the utility and ease of finding the same points of interest in two different microscopy instruments. Also, a numerical analysis of the expected errors in determining the same position, unbiased by a human operator, was performed. Based on the results, issues affecting reproducibility and best practices for using the sample mount and software were identified. Overall, the sample mount methodology allows data to be efficiently and easily collected on different instruments for the same sample location.
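
    The underlying registration idea can be shown as a worked sketch: fiducial coordinates measured in two instruments determine a least-squares affine transform that maps a point of interest from one stage coordinate system to the other. The coordinates below are invented for illustration.

        # Worked sketch: fit an affine transform from fiducials measured in both
        # instruments, then map a point of interest from instrument A to B.
        import numpy as np

        fid_a = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])     # instrument A (mm)
        fid_b = np.array([[2.1, 1.0], [12.0, 1.4], [1.7, 11.1]])     # same fiducials in B

        # Solve B ~= [A 1] @ M for the 3x2 affine matrix M in a least-squares sense.
        A_aug = np.hstack([fid_a, np.ones((len(fid_a), 1))])
        M, *_ = np.linalg.lstsq(A_aug, fid_b, rcond=None)

        def a_to_b(point_a):
            return np.hstack([point_a, 1.0]) @ M

        poi_a = np.array([4.2, 7.5])          # point of interest found in instrument A
        print("predicted location in instrument B:", a_to_b(poi_a))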

  12. Toward an integrated software platform for systems pharmacology

    PubMed Central

    Ghosh, Samik; Matsuoka, Yukiko; Asai, Yoshiyuki; Hsin, Kun-Yi; Kitano, Hiroaki

    2013-01-01

    Understanding complex biological systems requires the extensive support of computational tools. This is particularly true for systems pharmacology, which aims to understand the action of drugs and their interactions in a systems context. Computational models play an important role as they can be viewed as an explicit representation of biological hypotheses to be tested. A series of software and data resources are used for model development, verification, and exploration of possible behaviors of biological systems using the model, which may not be possible or cost effective through experiments. Software platforms play a dominant role in supporting creativity and productivity and have transformed many industries; these techniques can be applied to biology as well. Establishing an integrated software platform will be the next important step in the field. © 2013 The Authors. Biopharmaceutics & Drug Disposition published by John Wiley & Sons, Ltd. PMID:24150748

  13. ECLSS and Thermal Systems Integration Challenges Across the Constellation Architecture

    NASA Technical Reports Server (NTRS)

    Carrasquillo, Robyn

    2010-01-01

    As the Constellation Program completes its initial capability Preliminary Design Review milestone for the Initial Capability phase, systems engineering of the Environmental Control and Life Support (ECLS) and Thermal Systems for the various architecture elements has progressed from the requirements to design phase. As designs have matured for the Ares, Orion, Ground Systems, and Extravehicular (EVA) System, a number of integration challenges have arisen requiring analyses and trades, resulting in changes to the design and/or requirements. This paper will address some of the key integration issues and results, including the Orion-to-Ares shared compartment venting and purging, Orion-to-EVA suit loop integration issues with the suit system, Orion-to-ISS and Orion-to-Altair intermodule ventilation, and Orion and Ground Systems impacts from post-landing environments.

  14. Hydra: a service oriented architecture for scientific simulation integration

    SciTech Connect

    Bent, Russell; Djidjev, Tatiana; Hayes, Birch P; Holland, Joe V; Khalsa, Hari S; Linger, Steve P; Mathis, Mark M; Mniszewski, Sue M; Bush, Brian

    2008-01-01

    One of the current major challenges in scientific modeling and simulation, in particular in the infrastructure-analysis community, is the development of techniques for efficiently and automatically coupling disparate tools that exist in separate locations on different platforms, implemented in a variety of languages and designed to be standalone. Recent advances in web-based platforms for integrating systems such as SOA provide an opportunity to address these challenges in a systematic fashion. This paper describes Hydra, an integrating architecture for infrastructure modeling and simulation that defines geography-based schemas that, when used to wrap existing tools as web services, allow for seamless plug-and-play composability. Existing users of these tools can enhance the value of their analysis by assessing how the simulations of one tool impact the behavior of another tool and can automate existing ad hoc processes and work flows for integrating tools together.

  15. A Modular GIS-Based Software Architecture for Model Parameter Estimation using the Method of Anchored Distributions (MAD)

    NASA Astrophysics Data System (ADS)

    Ames, D. P.; Osorio-Murillo, C.; Over, M. W.; Rubin, Y.

    2012-12-01

    The Method of Anchored Distributions (MAD) is an inverse modeling technique that is well-suited for estimation of spatially varying parameter fields using limited observations and Bayesian methods. This presentation will discuss the design, development, and testing of a free software implementation of the MAD technique using the open source DotSpatial geographic information system (GIS) framework, R statistical software, and the MODFLOW groundwater model. This new tool, dubbed MAD-GIS, is built using a modular architecture that supports the integration of external analytical tools and models for key computational processes including a forward model (e.g. MODFLOW, HYDRUS) and geostatistical analysis (e.g. R, GSLIB). The GIS-based graphical user interface provides a relatively simple way for new users of the technique to prepare the spatial domain, to identify observation and anchor points, to perform the MAD analysis using a selected forward model, and to view results. MAD-GIS uses the Managed Extensibility Framework (MEF) provided by the Microsoft .NET programming platform to support integration of different modeling and analytical tools at run-time through a custom "driver." Each driver establishes a connection with external programs through a programming interface, which provides the elements for communicating with core MAD software. This presentation gives an example of adapting the MODFLOW to serve as the external forward model in MAD-GIS for inferring the distribution functions of key MODFLOW parameters. Additional drivers for other models are being developed and it is expected that the open source nature of the project will engender the development of additional model drivers by 3rd party scientists.
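
    The driver concept can be illustrated with a small interface that the core calls without knowing which forward model sits behind it; MAD-GIS itself uses .NET/MEF, so the Python sketch below is only an analogy with hypothetical names.

        # Analogy sketch: a minimal forward-model driver interface plus a stub
        # standing in for a MODFLOW coupling.
        from abc import ABC, abstractmethod

        class ForwardModelDriver(ABC):
            @abstractmethod
            def write_inputs(self, parameter_field, workdir): ...
            @abstractmethod
            def run(self, workdir): ...
            @abstractmethod
            def read_outputs(self, workdir): ...

        class ModflowDriver(ForwardModelDriver):
            """Stub driver; a real driver would write input files and launch the model."""
            def write_inputs(self, parameter_field, workdir):
                print(f"writing {len(parameter_field)} K values to {workdir}")
            def run(self, workdir):
                print(f"(would launch MODFLOW in {workdir})")
            def read_outputs(self, workdir):
                return {"heads": [10.2, 9.8, 9.5]}       # placeholder result

        def evaluate(driver: ForwardModelDriver, parameter_field, workdir="run01"):
            driver.write_inputs(parameter_field, workdir)
            driver.run(workdir)
            return driver.read_outputs(workdir)

        print(evaluate(ModflowDriver(), parameter_field=[1e-4, 5e-5, 2e-4]))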

  16. FIA: An Open Forensic Integration Architecture for Composing Digital Evidence

    NASA Astrophysics Data System (ADS)

    Raghavan, Sriram; Clark, Andrew; Mohay, George

    The analysis and value of digital evidence in an investigation has been the domain of discourse in the digital forensic community for several years. While many works have considered different approaches to model digital evidence, a comprehensive understanding of the process of merging different evidence items recovered during a forensic analysis is still a distant dream. With the advent of modern technologies, pro-active measures are integral to keeping abreast of all forms of cyber crimes and attacks. This paper motivates the need to formalize the process of analyzing digital evidence from multiple sources simultaneously. In this paper, we present the forensic integration architecture (FIA) which provides a framework for abstracting the evidence source and storage format information from digital evidence and explores the concept of integrating evidence information from multiple sources. The FIA architecture identifies evidence information from multiple sources that enables an investigator to build theories to reconstruct the past. FIA is hierarchically composed of multiple layers and adopts a technology independent approach. FIA is also open and extensible making it simple to adapt to technological changes. We present a case study using a hypothetical car theft case to demonstrate the concepts and illustrate the value it brings into the field.

  17. The ASTRI SST-2M telescope prototype for the Cherenkov Telescope Array: camera DAQ software architecture

    NASA Astrophysics Data System (ADS)

    Conforti, Vito; Trifoglio, Massimo; Bulgarelli, Andrea; Gianotti, Fulvio; Fioretti, Valentina; Tacchini, Alessandro; Zoli, Andrea; Malaguti, Giuseppe; Capalbi, Milvia; Catalano, Osvaldo

    2014-07-01

    ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a Flagship Project financed by the Italian Ministry of Education, University and Research, and led by INAF, the Italian National Institute of Astrophysics. Within this framework, INAF is currently developing an end-to-end prototype of a Small Size dual-mirror Telescope. In a second phase the ASTRI project foresees the installation of the first elements of the array at CTA southern site, a mini-array of 7 telescopes. The ASTRI Camera DAQ Software is aimed at the Camera data acquisition, storage and display during Camera development as well as during commissioning and operations on the ASTRI SST-2M telescope prototype that will operate at the INAF observing station located at Serra La Nave on the Mount Etna (Sicily). The Camera DAQ configuration and operations will be sequenced either through local operator commands or through remote commands received from the Instrument Controller System that commands and controls the Camera. The Camera DAQ software will acquire data packets through a direct one-way socket connection with the Camera Back End Electronics. In near real time, the data will be stored in both raw and FITS format. The DAQ Quick Look component will allow the operator to display in near real time the Camera data packets. We are developing the DAQ software adopting the iterative and incremental model in order to maximize the software reuse and to implement a system which is easily adaptable to changes. This contribution presents the Camera DAQ Software architecture with particular emphasis on its potential reuse for the ASTRI/CTA mini-array.
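
    The acquisition path described above (fixed-size packets over a one-way socket, stored raw and in FITS) can be sketched as follows; the packet size, contents, and FITS layout are hypothetical, and astropy stands in for whatever FITS writer the real DAQ uses.

        # Self-contained sketch: read fixed-size packets from a socket (a local
        # socket pair stands in for the Camera Back End Electronics), keep the raw
        # bytes, and also write a simple FITS image of the packet data.
        import socket
        import numpy as np
        from astropy.io import fits

        PACKET_SIZE = 64          # hypothetical camera packet size in bytes

        def recv_packet(conn):
            buf = b""
            while len(buf) < PACKET_SIZE:
                chunk = conn.recv(PACKET_SIZE - len(buf))
                if not chunk:
                    return None
                buf += chunk
            return buf

        bee, daq = socket.socketpair()                    # stand-in for the camera link
        for i in range(3):
            bee.sendall(bytes([i]) * PACKET_SIZE)         # three fake packets
        bee.close()

        packets = []
        while (pkt := recv_packet(daq)) is not None:
            packets.append(pkt)
        daq.close()

        with open("camera_run.raw", "wb") as f:           # raw storage
            f.writelines(packets)
        data = np.frombuffer(b"".join(packets), dtype=np.uint8)
        hdu = fits.PrimaryHDU(data.reshape(len(packets), PACKET_SIZE))
        hdu.writeto("camera_run.fits", overwrite=True)    # FITS storage
        print(f"stored {len(packets)} packets")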

  18. Considerations for developing technologies for an integrated person-borne IED countermeasure architecture

    NASA Astrophysics Data System (ADS)

    Lombardo, Nicholas J.; Knudson, Christa K.; Rutz, Frederick C.; Pattison, Kerrie J.; Stratton, Rex C.; Wiborg, James C.

    2010-04-01

    Developing an integrated person-borne improvised explosive device (IED) countermeasure to protect unstructured crowds at large public venues is the goal of the Standoff Technology Integration and Demonstration Program (STIDP), sponsored in part by the U.S. Department of Homeland Security (DHS). The architecture being developed includes countermeasure technologies deployed as a layered defense and enabling technologies for operating the countermeasures as an integrated system. In the architecture, early recognition of potentially higher-risk individuals is crucial. Sensors must be able to detect, with high accuracy, explosives' threat signatures in varying environmental conditions, from a variety of approaches and with dense crowds and limited dwell time. Command-and-control technologies are needed to automate sensor operation, reduce staffing requirements, improve situational awareness, and automate/facilitate operator decisions. STIDP is developing technical and operational requirements for standoff and remotely operated sensors and is working with federal agencies and foreign governments to implement these requirements into their research and development programs. STIDP also is developing requirements for a software platform to rapidly integrate and control various sensors; acquire, analyze, and record their data; and present the data in an operationally relevant manner. Requirements also are being developed for spatial analysis, tracking and assessing threats with available screening resources, and data fusion for operator decision-making.

  19. An integrated infrastructure in support of software development

    NASA Astrophysics Data System (ADS)

    Antonelli, S.; Aiftimiei, C.; Bencivenni, M.; Bisegni, C.; Chiarelli, L.; De Girolamo, D.; Giacomini, F.; Longo, S.; Manzali, M.; Veraldi, R.; Zani, S.

    2014-06-01

    This paper describes the design and the current state of implementation of an infrastructure made available to software developers within the Italian National Institute for Nuclear Physics (INFN) to support and facilitate their daily activity. The infrastructure integrates several tools, each providing a well-identified function: project management, version control system, continuous integration, dynamic provisioning of virtual machines, efficiency improvement, knowledge base. When applicable, access to the services is based on the INFN-wide Authentication and Authorization Infrastructure. The system is being installed and progressively made available to INFN users belonging to tens of sites and laboratories and will represent a solid foundation for the software development efforts of the many experiments and projects that see the involvement of the Institute. The infrastructure will be beneficial especially for small- and medium-size collaborations, which often cannot afford the resources, in particular in terms of know-how, needed to set up such services.

  20. Business Intelligence Applied to the ALMA Software Integration Process

    NASA Astrophysics Data System (ADS)

    Zambrano, M.; Recabarren, C.; González, V.; Hoffstadt, A.; Soto, R.; Shen, T.-C.

    2012-09-01

    Software quality assurance and planning of an astronomy project is a complex task, especially if it is a distributed collaborative project such as ALMA, where the development centers are spread across the globe. When you execute a software project there is much valuable information about the process itself that you can collect. One way to receive this input is via an issue tracking system that gathers the problem reports relating to software bugs captured during testing of the software, during integration of the different components or, even worse, problems that occur during production. Usually little time is spent analyzing them, but with some multidimensional processing you can extract valuable information that can help with long-term planning and resource allocation. We present an analysis of the information collected at ALMA from a collection of key unbiased indicators. We describe the extraction, transformation, and load process and how the data were processed. The main goal is to assess a software process and get insights from this information.
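
    A minimal extraction-transformation-load sketch of this idea, aggregating issue-tracker records into a small multidimensional summary, is shown below; the record fields are hypothetical.

        # Minimal ETL-style sketch: load issue-tracker records and aggregate them
        # into a subsystem x severity summary that could feed planning indicators.
        import csv, io
        from collections import Counter

        RAW = io.StringIO(
            "id,subsystem,severity,phase\n"
            "1,correlator,major,integration\n"
            "2,pipeline,minor,testing\n"
            "3,correlator,major,production\n"
            "4,archive,minor,integration\n"
        )

        def extract(fh):
            return list(csv.DictReader(fh))

        def transform(rows):
            return Counter((r["subsystem"], r["severity"]) for r in rows)

        def load(summary):
            for (subsystem, severity), count in sorted(summary.items()):
                print(f"{subsystem:12s} {severity:6s} {count}")

        load(transform(extract(RAW)))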

  1. A flexible software architecture for scalable real-time image and video processing applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of messages. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
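
    The topic-based publish/subscribe filtering of the messaging layer can be sketched minimally as follows; this is an illustration, not the authors' implementation.

        # Minimal sketch of topic-based publish/subscribe filtering: subscribers
        # register callbacks per topic and only receive messages on those topics.
        from collections import defaultdict

        class MessageBus:
            def __init__(self):
                self.subscribers = defaultdict(list)   # topic -> callbacks

            def subscribe(self, topic, callback):
                self.subscribers[topic].append(callback)

            def publish(self, topic, message):
                for callback in self.subscribers[topic]:
                    callback(message)

        bus = MessageBus()
        bus.subscribe("frames/raw", lambda m: print("processing module got", m))
        bus.subscribe("frames/processed", lambda m: print("visualization got", m))
        bus.publish("frames/raw", {"frame_id": 42, "shape": (480, 640)})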

  2. Data Reduction Software for the VLT Integral Field Spectrometer SPIFFI

    NASA Astrophysics Data System (ADS)

    Schreiber, J.; Thatte, N.; Eisenhauer, F.; Tecza, M.; Abuter, R.; Horrobin, M.

    2004-07-01

    A data reduction software package is developed to reduce data of the near-IR integral field spectrometer SPIFFI built at MPE. The basic data reduction routines are coded in ANSI C. The high level scripting language Python is used to connect the C-routines allowing fast prototyping. Several Python scripts are written to produce the needed calibration data and to generate the final result, a wavelength calibrated data cube with the instrumental signatures removed.
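
    The general pattern, Python scripts orchestrating compiled C routines, can be sketched with ctypes; the library name and function below are hypothetical, not the actual SPIFFI package.

        # Sketch of the pattern only: a Python wrapper around an ANSI C routine via
        # ctypes, so reduction scripts stay short and readable. "libspiffi.so" and
        # subtract_dark() are hypothetical names.
        import ctypes
        import numpy as np

        try:
            lib = ctypes.CDLL("./libspiffi.so")             # compiled C routines
            lib.subtract_dark.argtypes = [
                ctypes.POINTER(ctypes.c_double),            # frame (modified in place)
                ctypes.POINTER(ctypes.c_double),            # dark frame
                ctypes.c_int,                               # number of pixels
            ]
        except OSError:
            lib = None                                      # library not built here

        def subtract_dark(frame, dark):
            """Thin wrapper that hands contiguous double arrays to the C routine."""
            out = np.ascontiguousarray(frame, dtype=np.float64).copy()
            d = np.ascontiguousarray(dark, dtype=np.float64)
            lib.subtract_dark(
                out.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
                d.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
                int(out.size),
            )
            return out

        # A reduction script would then simply chain such wrappers, e.g.:
        #   cube = subtract_dark(raw, dark)
        #   cube = flat_field(cube, flat)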

  3. Project Integration Architecture: Inter-Application Propagation of Information

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    A principal goal of the Project Integration Architecture (PIA) is to facilitate the meaningful inter-application transfer of application-value-added information. Such exchanging applications may be largely unrelated to each other except through their applicability to an overall project; however, the PIA effort recognizes as fundamental the need to make such applications cooperate despite wide disparities either in the fidelity of the analyses carried out, or even the disciplines of the analysis. This paper discusses the approach and techniques applied and anticipated by the PIA project in treating this need.

  4. Distributed software framework and continuous integration in hydroinformatics systems

    NASA Astrophysics Data System (ADS)

    Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao

    2017-08-01

    When hydroinformatics systems must handle multiple complicated models, multisource structured and unstructured data, and complex requirements analysis, platform design and integration become a challenge. To properly address these problems, we describe a distributed software framework and its continuous integration process for hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node, and clients. Based on it, a GIS-based decision support system for the joint regulation of water quantity and water quality of a group of lakes in Wuhan, China, is established.

  5. Integrated software system for low level waste management

    SciTech Connect

    Worku, G.

    1995-12-31

    In the continually changing and uncertain world of low level waste management, many generators in the US are faced with the prospect of having to store their waste on site for the indefinite future. This consequently increases the set of tasks performed by the generators in the areas of packaging, characterizing, classifying, screening (if a set of acceptance criteria applies), and managing the inventory for the duration of onsite storage. When disposal sites become available, it is expected that the work will require re-evaluating the waste packages, including possible re-processing, re-packaging, or re-classifying in preparation for shipment for disposal under the regulatory requirements of the time. In this day and age, when there is wide use of computers and computer literacy is at high levels, an important waste management tool would be an integrated software system that aids waste management personnel in conducting these tasks quickly and accurately. It has become evident that such an integrated radwaste management software system offers great benefits to radwaste generators both in the US and other countries. This paper discusses one such approach to integrated radwaste management utilizing some globally accepted radiological assessment software applications.

  6. CyberGIS software: a synthetic review and integration roadmap

    SciTech Connect

    Wang, Shaowen; Anselin, Luc; Bhaduri, Budhendra L; Cosby, Christopher; Goodchild, Michael; Liu, Yan; Nygers, Timothy L.

    2013-01-01

    CyberGIS, defined as cyberinfrastructure-based geographic information systems (GIS), has emerged as a new generation of GIS representing an important research direction for both cyberinfrastructure and geographic information science. This study introduces a 5-year effort funded by the US National Science Foundation to advance the science and applications of CyberGIS, particularly for enabling the analysis of big spatial data, computationally intensive spatial analysis and modeling (SAM), and collaborative geospatial problem-solving and decision-making, simultaneously conducted by a large number of users. Several fundamental research questions are raised and addressed, while a set of CyberGIS challenges and opportunities are identified from scientific perspectives. The study reviews several key CyberGIS software tools that are used to elucidate a vision and roadmap for CyberGIS software research. The roadmap focuses on software integration and synthesis of cyberinfrastructure, GIS, and SAM by defining several key integration dimensions and strategies. CyberGIS, based on this holistic integration roadmap, exhibits the following key characteristics: high-performance and scalable, open and distributed, collaborative, service-oriented, user-centric, and community-driven. As a major result of the roadmap, two key CyberGIS modalities, gateway and toolkit, combined with a community-driven and participatory approach, have laid a solid foundation to achieve scientific breakthroughs across many geospatial communities that would be otherwise impossible.

  7. Designing a meta-level architecture in Java for adaptive parallelism by mobile software agents

    NASA Astrophysics Data System (ADS)

    Dominic, Stephen Victor

    Adaptive parallelism refers to a parallel computation that runs on a pool of processors that may join or withdraw from a running computation. In this dissertation, a functional system of agents and agent behaviors for adaptive parallelism is developed. Software agents have the property of robustness and a capacity for fault tolerance. Adaptation and fault tolerance emerge from the interaction of self-directed autonomous software agents for a parallel computation application. The multi-agent system can be considered an object-oriented system with a higher-level architectural component, i.e., a meta level for agent behavior. The meta-level object architecture is based on patterns of behavior and communication for mobile agents, which are developed to support cooperative problem solving in a distributed-heterogeneous computing environment. Although parallel processing is a suggested application domain for mobile agents implemented in the Java language, the development of robust agent behaviors implemented in an efficient manner is an active research area. Performance characteristics for three versions of a pattern recognition problem are used to demonstrate a linear speed-up with efficiency that is compared to research using a traditional client-server protocol in the C language. The best ideas from existing approaches to adaptive parallelism are used to create a single general-purpose paradigm that overcomes problems associated with node failure, the use of a single centralized or shared resource, requirements for clients to actively join a computation, and a variety of other limitations that are associated with existing systems. The multi-agent system, and experiments, show how adaptation and parallelism can be exploited by a meta-architecture for a distributed-scientific application that is of particular interest to design of signal-processing ground stations. To a large extent the framework separates concern for algorithmic design from concern for where and

  8. Wireless Communications. Wireless Network Integration Technology: MIRAI Architecture for Heterogeneous Network

    NASA Astrophysics Data System (ADS)

    Mizuno, Mitsuhiko; Wu, Gang; Havinga, Paul J. M.

    2001-12-01

    One of the keywords that describe next generation wireless communications is "seamless." As part of the e-Japan Plan promoted by the Japanese Government, the MIRAI (Multimedia Integrated network by Radio Access Innovation) project has, as its goal, the development of new technologies to enable seamless integration of various wireless access systems for practical use by the year 2005. This paper describes a heterogeneous network architecture including a common tool, a common platform, and a common access. In particular, software-defined radio technologies are used to develop a multi-service user terminal to access different wireless networks. The common platform for various wireless networks is based on a wireless supporting IPv6 network. A basic access network, separated from other wireless access networks, is used as a means for wireless system discovery, signaling, and paging. A proof-concept experimental demonstration system will be available in March, 2002.

  9. Integrated quality control architecture for multistage machining processes

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Liu, Guixiong

    2010-12-01

    To address process quality prediction and control for multistage machining processes, an integrated quality control architecture is proposed in this paper. First, a hierarchical multiple-criteria decision model is established for the key processes, and a stratified weight matrix method is discussed. Predictive control of manufacturing quality is not needed only at the on-site monitoring and control layer; the enterprise control layer and the remote quality monitoring level also place a variety of demands on the predictive control of quality targets. Therefore, XML is used to achieve a unified description of manufacturing quality information and to enable its transfer and sharing between different sources of quality information. This lays a good foundation for complex, global predictive quality control, analysis, and diagnosis, and for a more practical, open, and standardized manufacturing quality information integration system.

  10. Integrated optics architecture for trapped-ion quantum information processing

    NASA Astrophysics Data System (ADS)

    Kielpinski, D.; Volin, C.; Streed, E. W.; Lenzini, F.; Lobino, M.

    2016-12-01

    Standard schemes for trapped-ion quantum information processing (QIP) involve the manipulation of ions in a large array of interconnected trapping potentials. The basic set of QIP operations, including state initialization, universal quantum logic, and state detection, is routinely executed within a single array site by means of optical operations, including various laser excitations as well as the collection of ion fluorescence. Transport of ions between array sites is also routinely carried out in microfabricated trap arrays. However, it is still not possible to perform optical operations in parallel across all array sites. The lack of this capability is one of the major obstacles to scalable trapped-ion QIP and presently limits exploitation of current microfabricated trap technology. Here we present an architecture for scalable integration of optical operations in trapped-ion QIP. We show theoretically that diffractive mirrors, monolithically fabricated on the trap array, can efficiently couple light between trap array sites and optical waveguide arrays. Integrated optical circuits constructed from these waveguides can be used for sequencing of laser excitation and fluorescence collection. Our scalable architecture supports all standard QIP operations, as well as photon-mediated entanglement channels, while offering substantial performance improvements over current techniques.

  11. Integration of radio-frequency transmission and radar in general software for multimodal battlefield signal modeling

    NASA Astrophysics Data System (ADS)

    Yamamoto, Kenneth K.; Reznicek, Nathan J.; Wilson, D. Keith

    2013-05-01

    The Environmental Awareness for Sensor and Emitter Employment (EASEE) software, being developed by the U. S. Army Engineer Research and Development Center (ERDC), provides a general platform for predicting sensor performance and optimizing sensor selection and placement in complex terrain and weather conditions. It incorporates an extensive library of target signatures, signal propagation models, and sensor systems. A flexible object-oriented design supports efficient integration and simulation of diverse signal modalities. This paper describes the integration of modeling capabilities for radio-frequency (RF) transmission and radar systems from the U. S. Navy Electromagnetic Propagation Integrated Resource Environment (EMPIRE), which contains nearly twenty different realistic RF propagation models. The integration utilizes an XML-based interface between EASEE and EMPIRE to set inputs for and run propagation models. To accommodate radars, fundamental improvements to the EASEE software architecture were made to support active-sensing scenarios with forward and backward propagation of the RF signals between the radar and target. Models for reflecting targets were defined to apply a target-specific, directionally dependent reflection coefficient (i.e., scattering cross section) to the incident wavefields.

  12. GiA Roots: software for the high throughput analysis of plant root system architecture

    PubMed Central

    2012-01-01

    Background Characterizing root system architecture (RSA) is essential to understanding the development and function of vascular plants. Identifying RSA-associated genes also represents an underexplored opportunity for crop improvement. Software tools are needed to accelerate the pace at which quantitative traits of RSA are estimated from images of root networks. Results We have developed GiA Roots (General Image Analysis of Roots), a semi-automated software tool designed specifically for the high-throughput analysis of root system images. GiA Roots includes user-assisted algorithms to distinguish root from background and a fully automated pipeline that extracts dozens of root system phenotypes. Quantitative information on each phenotype, along with intermediate steps for full reproducibility, is returned to the end-user for downstream analysis. GiA Roots has a GUI front end and a command-line interface for interweaving the software into large-scale workflows. GiA Roots can also be extended to estimate novel phenotypes specified by the end-user. Conclusions We demonstrate the use of GiA Roots on a set of 2393 images of rice roots representing 12 genotypes from the species Oryza sativa. We validate trait measurements against prior analyses of this image set that demonstrated that RSA traits are likely heritable and associated with genotypic differences. Moreover, we demonstrate that GiA Roots is extensible and an end-user can add functionality so that GiA Roots can estimate novel RSA traits. In summary, we show that the software can function as an efficient tool as part of a workflow to move from large numbers of root images to downstream analysis. PMID:22834569
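
    The extensibility claim above (end-users adding novel phenotypes) can be sketched as a small trait registry operating on a binary root mask. This is a hypothetical illustration of the pattern, not the actual GiA Roots plug-in API; the trait names and functions below are invented.

```python
# Hypothetical trait registry (NOT the actual GiA Roots API): a new root-system
# phenotype is added by registering a function of a binary root mask.
import numpy as np

TRAITS = {}

def register_trait(name):
    def wrap(fn):
        TRAITS[name] = fn
        return fn
    return wrap

@register_trait("network_area")
def network_area(mask):
    """Total number of foreground (root) pixels."""
    return float(mask.sum())

@register_trait("bounding_box_aspect")
def bounding_box_aspect(mask):
    """Height/width ratio of the smallest box enclosing the root network."""
    ys, xs = np.nonzero(mask)
    return (ys.max() - ys.min() + 1) / (xs.max() - xs.min() + 1)

def measure(mask):
    """Evaluate every registered trait on one segmented root image."""
    return {name: fn(mask) for name, fn in TRAITS.items()}

mask = np.zeros((10, 10), dtype=bool)
mask[2:9, 4:6] = True              # toy "root" region
print(measure(mask))               # {'network_area': 14.0, 'bounding_box_aspect': 3.5}
```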

  13. GiA Roots: software for the high throughput analysis of plant root system architecture.

    PubMed

    Galkovskyi, Taras; Mileyko, Yuriy; Bucksch, Alexander; Moore, Brad; Symonova, Olga; Price, Charles A; Topp, Christopher N; Iyer-Pascuzzi, Anjali S; Zurek, Paul R; Fang, Suqin; Harer, John; Benfey, Philip N; Weitz, Joshua S

    2012-07-26

    Characterizing root system architecture (RSA) is essential to understanding the development and function of vascular plants. Identifying RSA-associated genes also represents an underexplored opportunity for crop improvement. Software tools are needed to accelerate the pace at which quantitative traits of RSA are estimated from images of root networks. We have developed GiA Roots (General Image Analysis of Roots), a semi-automated software tool designed specifically for the high-throughput analysis of root system images. GiA Roots includes user-assisted algorithms to distinguish root from background and a fully automated pipeline that extracts dozens of root system phenotypes. Quantitative information on each phenotype, along with intermediate steps for full reproducibility, is returned to the end-user for downstream analysis. GiA Roots has a GUI front end and a command-line interface for interweaving the software into large-scale workflows. GiA Roots can also be extended to estimate novel phenotypes specified by the end-user. We demonstrate the use of GiA Roots on a set of 2393 images of rice roots representing 12 genotypes from the species Oryza sativa. We validate trait measurements against prior analyses of this image set that demonstrated that RSA traits are likely heritable and associated with genotypic differences. Moreover, we demonstrate that GiA Roots is extensible and an end-user can add functionality so that GiA Roots can estimate novel RSA traits. In summary, we show that the software can function as an efficient tool as part of a workflow to move from large numbers of root images to downstream analysis.

  14. WARP3D-Release 10.8: Dynamic Nonlinear Analysis of Solids using a Preconditioned Conjugate Gradient Software Architecture

    NASA Technical Reports Server (NTRS)

    Koppenhoefer, Kyle C.; Gullerud, Arne S.; Ruggieri, Claudio; Dodds, Robert H., Jr.; Healy, Brian E.

    1998-01-01

    This report describes theoretical background material and commands necessary to use the WARP3D finite element code. WARP3D is under continuing development as a research code for the solution of very large-scale, 3-D solid models subjected to static and dynamic loads. Specific features in the code oriented toward the investigation of ductile fracture in metals include a robust finite strain formulation, a general J-integral computation facility (with inertia, face loading), an element extinction facility to model crack growth, nonlinear material models including viscoplastic effects, and the Gurson-Tvergaard dilatant plasticity model for void growth. The nonlinear, dynamic equilibrium equations are solved using an incremental-iterative, implicit formulation with full Newton iterations to eliminate residual nodal forces. The history integration of the nonlinear equations of motion is accomplished with Newmark's Beta method. A central feature of WARP3D involves the use of a linear-preconditioned conjugate gradient (LPCG) solver implemented in an element-by-element format to replace a conventional direct linear equation solver. This software architecture dramatically reduces both the memory requirements and CPU time for very large, nonlinear solid models since formation of the assembled (dynamic) stiffness matrix is avoided. Analyses thus exhibit the numerical stability for large time (load) steps provided by the implicit formulation coupled with the low memory requirements characteristic of an explicit code. In addition to the much lower memory requirements of the LPCG solver, the CPU time required for solution of the linear equations during each Newton iteration is generally one-half or less of the CPU time required for a traditional direct solver. All other computational aspects of the code (element stiffnesses, element strains, stress updating, element internal forces) are implemented in the element-by-element, blocked architecture. This greatly improves
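
    The element-by-element LPCG idea above can be sketched as a matrix-free preconditioned conjugate gradient: the solver only needs a routine that applies the (never-assembled) operator to a vector. The sketch below uses a simple Jacobi (diagonal) preconditioner and a tiny dense test matrix as a stand-in; WARP3D's actual EBE preconditioning and blocking are considerably more elaborate.

```python
# Matrix-free preconditioned conjugate gradient: the operator is passed as a
# function (as in an element-by-element formulation), and M_inv is a Jacobi
# (diagonal) preconditioner. The 2x2 SPD matrix is only a stand-in.
import numpy as np

def pcg(apply_A, b, M_inv, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])     # stand-in for the dynamic stiffness
b = np.array([1.0, 2.0])
x = pcg(lambda v: A @ v, b, M_inv=1.0 / np.diag(A))
print(x, A @ x)                            # A @ x reproduces b at convergence
```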

  15. The software architecture of climate models: a graphical comparison of CMIP5 and EMICAR5 configurations

    NASA Astrophysics Data System (ADS)

    Alexander, K.; Easterbrook, S. M.

    2015-01-01

    We analyse the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams which show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modelling groups. These diagrams offer insights into the similarities and differences between models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.
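
    A minimal sketch of the bookkeeping behind such diagrams, assuming the analysis reduces to assigning source modules to components, summing their sizes, and counting cross-component dependencies; the module names, sizes, and dependencies below are invented and do not come from the paper.

```python
# Invented modules/sizes/dependencies: component sizes are summed line counts,
# and cross-component dependencies become the data-flow edges in the diagrams.
from collections import defaultdict

modules = {                        # module -> (component, lines of code)
    "ocean_dyn.f90":   ("ocean", 52_000),
    "sea_ice.f90":     ("ice", 11_000),
    "atm_physics.f90": ("atmosphere", 78_000),
    "coupler.f90":     ("coupler", 6_000),
}
depends_on = [                     # (caller module, callee module)
    ("coupler.f90", "ocean_dyn.f90"),
    ("coupler.f90", "atm_physics.f90"),
    ("ocean_dyn.f90", "sea_ice.f90"),
]

sizes = defaultdict(int)
for component, loc in modules.values():
    sizes[component] += loc

edges = defaultdict(int)           # (from component, to component) -> count
for src, dst in depends_on:
    a, b = modules[src][0], modules[dst][0]
    if a != b:
        edges[(a, b)] += 1

print(dict(sizes))
print(dict(edges))
```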

  16. The software architecture of climate models: a graphical comparison of CMIP5 and EMICAR5 configurations

    NASA Astrophysics Data System (ADS)

    Alexander, K.; Easterbrook, S. M.

    2015-04-01

    We analyze the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams that show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modeling groups. These diagrams offer insights into the similarities and differences in structure between climate models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.

  17. Bringing electronic patient records into health professional education: software architecture and implementation.

    PubMed

    Joe, Ronald S; Kushniruk, Andre W; Borycki, Elizabeth M; Armstrong, Brian; Otto, Tony; Ho, Kendall

    2009-01-01

    This paper describes the implementation of an Electronic Medical Record (EMR) which has been redesigned specifically for the purposes of teaching medical and other health professional students. Currently available EMR software is designed specifically for use in actual practice settings and not for the needs of students and educators. The authors identified many unique requirements of an EMR in order to satisfy the educational goals unique to the electronic medium. This paper describes the specific architecture and many of the unique features of the EMR implemented for the University of British Columbia (UBC) Medical School program for teaching medical students. This implementation involved 200 students in hands-on use of the EMR with a single standardized patient case. The participating students were distributed across three physical sites of the UBC curriculum in the Province of British Columbia in December 2007.

  18. Software Architecture to Support the Evolution of the ISRU RESOLVE Engineering Breadboard Unit 2 (EBU2)

    NASA Technical Reports Server (NTRS)

    Moss, Thomas; Nurge, Mark; Perusich, Stephen

    2011-01-01

    The In-Situ Resource Utilization (ISRU) Regolith & Environmental Science and Oxygen & Lunar Volatiles Extraction (RESOLVE) software provides operation of the physical plant from a remote location with a high-level interface that can access and control the data from external software applications of other subsystems. This software allows autonomous control over the entire system with manual computer control of individual system/process components. It gives non-programmer operators the capability to easily modify the high-level autonomous sequencing while the software is in operation, as well as the ability to modify the low-level, file-based sequences prior to the system operation. Local automated control in a distributed system is also enabled where component control is maintained during the loss of network connectivity with the remote workstation. This innovation also minimizes network traffic. The software architecture commands and controls the latest generation of RESOLVE processes used to obtain, process, and quantify lunar regolith. The system is grouped into six sub-processes: Drill, Crush, Reactor, Lunar Water Resource Demonstration (LWRD), Regolith Volatiles Characterization (RVC), and Regolith Oxygen Extraction (ROE). Some processes are independent, some are dependent on other processes, and some are independent but run concurrently with other processes. The first goal is to analyze the volatiles emanating from lunar regolith, such as water, carbon monoxide, carbon dioxide, ammonia, hydrogen, and others. This is done by heating the soil and analyzing and capturing the volatilized product. The second goal is to produce water by reducing the soil at high temperatures with hydrogen. This is done by raising the reactor temperature in the range of 800 to 900 °C, causing the reaction to progress by adding hydrogen, and then capturing the water product in a desiccant bed. The software needs to run the entire unit and all sub-processes; however
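
    The low-level, file-based sequencing described above can be illustrated with a minimal sequence interpreter that parses text lines into commands and dispatches them to per-subsystem handlers. The file format, commands, and handlers below are invented for illustration and are not the RESOLVE command set.

```python
# Invented file format and subsystems illustrating a file-based sequence:
# each line names a subsystem, a command, and an argument, and a generic
# runner dispatches it to the subsystem's handler.
SEQUENCE = """\
reactor  set_temperature  850
reactor  add_gas          H2
lwrd     capture_water    on
"""

def reactor_handler(cmd, arg):
    print(f"[reactor] {cmd} -> {arg}")

def lwrd_handler(cmd, arg):
    print(f"[LWRD] {cmd} -> {arg}")

HANDLERS = {"reactor": reactor_handler, "lwrd": lwrd_handler}

def run_sequence(text):
    for line in text.splitlines():
        if not line.strip():
            continue
        subsystem, cmd, arg = line.split()
        HANDLERS[subsystem](cmd, arg)

run_sequence(SEQUENCE)
```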

  19. Architecture and Implementation of OpenPET Firmware and Embedded Software

    SciTech Connect

    Abu-Nimeh, Faisal T.; Ito, Jennifer; Moses, William W.; Peng, Qiyu; Choong, Woon-Seng

    2016-01-11

    OpenPET is an open source, modular, extendible, and high-performance platform suitable for multi-channel data acquisition and analysis. Due to the versatility of the hardware, firmware, and software architectures, the platform is capable of interfacing with a wide variety of detector modules not only in medical imaging but also in homeland security applications. Analog signals from radiation detectors share similar characteristics-a pulse whose area is proportional to the deposited energy and whose leading edge is used to extract a timing signal. As a result, a generic design method of the platform is adopted for the hardware, firmware, and software architectures and implementations. The analog front-end is hosted on a module called a Detector Board, where each board can filter, combine, timestamp, and process multiple channels independently. The processed data is formatted and sent through a backplane bus to a module called Support Board, where 1 Support Board can host up to eight Detector Board modules. The data in the Support Board, coming from 8 Detector Board modules, can be aggregated or correlated (if needed) depending on the algorithm implemented or runtime mode selected. It is then sent out to a computer workstation for further processing. The number of channels (detector modules), to be processed, mandates the overall OpenPET System Configuration, which is designed to handle up to 1,024 channels using 16-channel Detector Boards in the Standard System Configuration and 16,384 channels using 32-channel Detector Boards in the Large System Configuration.
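
    The quoted channel counts follow directly from the board hierarchy described above: channels per Detector Board, times Detector Boards per Support Board, times the number of Support Boards. The number of Support Boards is inferred here from the stated totals rather than given explicitly in the text.

```python
# channels = channels per Detector Board x Detector Boards per Support Board
#            x number of Support Boards (the last factor is inferred).
def total_channels(ch_per_db, db_per_sb, num_sb):
    return ch_per_db * db_per_sb * num_sb

standard = total_channels(16, 8, 8)     # Standard System Configuration
large = total_channels(32, 8, 64)       # Large System Configuration
assert (standard, large) == (1024, 16384)
print(standard, large)
```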

  20. Architecture and Implementation of OpenPET Firmware and Embedded Software

    DOE PAGES

    Abu-Nimeh, Faisal T.; Ito, Jennifer; Moses, William W.; ...

    2016-01-11

    OpenPET is an open source, modular, extendible, and high-performance platform suitable for multi-channel data acquisition and analysis. Due to the versatility of the hardware, firmware, and software architectures, the platform is capable of interfacing with a wide variety of detector modules not only in medical imaging but also in homeland security applications. Analog signals from radiation detectors share similar characteristics-a pulse whose area is proportional to the deposited energy and whose leading edge is used to extract a timing signal. As a result, a generic design method of the platform is adopted for the hardware, firmware, and software architectures and implementations. The analog front-end is hosted on a module called a Detector Board, where each board can filter, combine, timestamp, and process multiple channels independently. The processed data is formatted and sent through a backplane bus to a module called Support Board, where 1 Support Board can host up to eight Detector Board modules. The data in the Support Board, coming from 8 Detector Board modules, can be aggregated or correlated (if needed) depending on the algorithm implemented or runtime mode selected. It is then sent out to a computer workstation for further processing. The number of channels (detector modules), to be processed, mandates the overall OpenPET System Configuration, which is designed to handle up to 1,024 channels using 16-channel Detector Boards in the Standard System Configuration and 16,384 channels using 32-channel Detector Boards in the Large System Configuration.

  1. Architecture and Implementation of OpenPET Firmware and Embedded Software

    PubMed Central

    Abu-Nimeh, Faisal T.; Ito, Jennifer; Moses, William W.; Peng, Qiyu; Choong, Woon-Seng

    2016-01-01

    OpenPET is an open source, modular, extendible, and high-performance platform suitable for multi-channel data acquisition and analysis. Due to the flexibility of the hardware, firmware, and software architectures, the platform is capable of interfacing with a wide variety of detector modules not only in medical imaging but also in homeland security applications. Analog signals from radiation detectors share similar characteristics – a pulse whose area is proportional to the deposited energy and whose leading edge is used to extract a timing signal. As a result, a generic design method of the platform is adopted for the hardware, firmware, and software architectures and implementations. The analog front-end is hosted on a module called a Detector Board, where each board can filter, combine, timestamp, and process multiple channels independently. The processed data is formatted and sent through a backplane bus to a module called Support Board, where 1 Support Board can host up to eight Detector Board modules. The data in the Support Board, coming from 8 Detector Board modules, can be aggregated or correlated (if needed) depending on the algorithm implemented or runtime mode selected. It is then sent out to a computer workstation for further processing. The number of channels (detector modules), to be processed, mandates the overall OpenPET System Configuration, which is designed to handle up to 1,024 channels using 16-channel Detector Boards in the Standard System Configuration and 16,384 channels using 32-channel Detector Boards in the Large System Configuration. PMID:27110034

  2. A single-board NMR spectrometer based on a software defined radio architecture

    NASA Astrophysics Data System (ADS)

    Tang, Weinan; Wang, Weimin

    2011-01-01

    A single-board software defined radio (SDR) spectrometer for nuclear magnetic resonance (NMR) is presented. The SDR-based architecture, realized by combining a single field programmable gate array (FPGA) and a digital signal processor (DSP) with peripheral radio frequency (RF) front-end circuits, makes the spectrometer compact and reconfigurable. The DSP, working as a pulse programmer, communicates with a personal computer via a USB interface and controls the FPGA through a parallel port. The FPGA accomplishes digital processing tasks such as a numerically controlled oscillator (NCO), digital down converter (DDC) and gradient waveform generator. The NCO, with agile control of phase, frequency and amplitude, is part of a direct digital synthesizer that is used to generate an RF pulse. The DDC performs quadrature demodulation, multistage low-pass filtering and gain adjustment to produce a bandpass signal (receiver bandwidth from 3.9 kHz to 10 MHz). The gradient waveform generator is capable of outputting shaped gradient pulse waveforms and supports eddy-current compensation. The spectrometer directly acquires an NMR signal up to 30 MHz in the case of baseband sampling and is suitable for low-field (<0.7 T) application. Due to the featured SDR architecture, this prototype has flexible add-on ability and is expected to be suitable for portable NMR systems.
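
    The NCO/DDC chain described above can be sketched in a few lines: a numerically controlled oscillator mixes the digitized signal down to complex baseband (quadrature demodulation), which is then low-pass filtered and decimated. The sample rates, test tone, filter, and decimation factor below are illustrative, not the spectrometer's actual parameters.

```python
# Illustrative NCO + digital down-converter chain: mix to baseband, low-pass
# filter, decimate. Rates, tone, filter, and decimation are invented.
import numpy as np

fs = 60e6                    # ADC sample rate
f_sig = 20.0e6               # digitized NMR signal frequency (stand-in tone)
f_nco = 20.0e6               # NCO tuning frequency
n = np.arange(4096)

rf = np.cos(2 * np.pi * f_sig / fs * n)        # sampled input
nco = np.exp(-2j * np.pi * f_nco / fs * n)     # NCO with programmable phase/freq
mixed = rf * nco                               # quadrature demodulation

taps = np.ones(64) / 64                        # crude low-pass (moving average)
baseband = np.convolve(mixed, taps, mode="same")[::16]   # filter + decimate by 16
print(baseband.shape)                          # complex I/Q samples at fs/16
```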

  3. CubeSat Integration into the Space Situational Awareness Architecture

    NASA Astrophysics Data System (ADS)

    Morris, K.; Wolfson, M.; Brown, J.

    2013-09-01

    the GEO belt, process out the stars, and then downlink the data to the ground. This data can then be combined with the existing metric track data to enhance the coverage and timeliness. With the current capability of CubeSats and their payloads, along with the launch constraints, the near-term focus is to integrate into existing architectures by reducing technology risks, understanding unique phenomenology, and augmenting mission collection capability. Understanding the near-term benefits of utilizing CubeSats will better inform the SSA mission developers on how to integrate CubeSats into the next generation of architectures from the start.

  4. TREK: an integrated system architecture for intraoperative cone-beam CT-guided surgery.

    PubMed

    Uneri, A; Schafer, S; Mirota, D J; Nithiananthan, S; Otake, Y; Taylor, R H; Gallia, G L; Khanna, A J; Lee, S; Reh, D D; Siewerdsen, J H

    2012-01-01

    A system architecture has been developed for integration of intraoperative 3D imaging [viz., mobile C-arm cone-beam CT (CBCT)] with surgical navigation (e.g., trackers, endoscopy, and preoperative image and planning data). The goal of this paper is to describe the architecture and its handling of a broad variety of data sources in modular tool development for streamlined use of CBCT guidance in application-specific surgical scenarios. The architecture builds on two proven open-source software packages, namely the cisst package (Johns Hopkins University, Baltimore, MD) and 3D Slicer (Brigham and Women's Hospital, Boston, MA), and combines data sources common to image-guided procedures with intraoperative 3D imaging. Integration at the software component level is achieved through language bindings to a scripting language (Python) and an object-oriented approach to abstract and simplify the use of devices with varying characteristics. The platform aims to minimize offline data processing and to expose quantitative tools that analyze and communicate factors of geometric precision online. Modular tools are defined to accomplish specific surgical tasks, demonstrated in three clinical scenarios (temporal bone, skull base, and spine surgery) that involve a progressively increased level of complexity in toolset requirements. The resulting architecture (referred to as "TREK") hosts a collection of modules developed according to application-specific surgical tasks, emphasizing streamlined integration with intraoperative CBCT. These include multi-modality image display; 3D-3D rigid and deformable registration to bring preoperative image and planning data to the most up-to-date CBCT; 3D-2D registration of planning and image data to real-time fluoroscopy; infrared, electromagnetic, and video-based trackers used individually or in hybrid arrangements; augmented overlay of image and planning data in endoscopic or in-room video; and real-time "virtual fluoroscopy" computed from GPU
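
    One of the operations listed above, 3D-3D rigid registration, can be sketched for the simplest case of known point correspondences using the Kabsch/Procrustes solution. TREK's deformable and 3D-2D registrations are far more involved, and the fiducial points below are synthetic.

```python
# Rigid 3D-3D registration with known correspondences (Kabsch via SVD);
# the fiducial points are synthetic.
import numpy as np

def rigid_register(P, Q):
    """Return R, t minimizing sum ||R @ P_i + t - Q_i||^2 over paired points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

P = np.random.rand(6, 3)                       # e.g. fiducials in CBCT space
t_true = np.array([1.0, -2.0, 0.5])
Q = P + t_true                                 # same points seen by a tracker
R, t = rigid_register(P, Q)
print(np.allclose(P @ R.T + t, Q))             # True
```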

  5. Integrating Physiology and Architecture in Models of Fruit Expansion

    PubMed Central

    Cieslak, Mikolaj; Cheddadi, Ibrahim; Boudon, Frédéric; Baldazzi, Valentina; Génard, Michel; Godin, Christophe; Bertin, Nadia

    2016-01-01

    Architectural properties of a fruit, such as its shape, vascular patterns, and skin morphology, play a significant role in determining the distributions of water, carbohydrates, and nutrients inside the fruit. Understanding the impact of these properties on fruit quality is difficult because they develop over time and are highly dependent on both genetic and environmental controls. We present a 3D functional-structural fruit model that can be used to investigate effects of the principal architectural properties on fruit quality. We use a three-step modeling pipeline in the OpenAlea platform: (1) creating a 3D volumetric mesh representation of the internal and external fruit structure, (2) generating a complex network of vasculature that is embedded within this mesh, and (3) integrating aspects of the fruit's function, such as water and dry matter transport, with the fruit's structure. We restrict our approach to the phase where fruit growth is mostly due to cell expansion and the fruit has already differentiated into different tissue types. We show how fruit shape affects vascular patterns and, as a consequence, the distribution of sugar/water in tomato fruit. Furthermore, we show that strong interaction between tomato fruit shape and vessel density induces, independently of size, an important and contrasted gradient of water supply from the pedicel to the blossom end of the fruit. We also demonstrate how skin morphology related to microcracking distribution affects the distribution of water and sugars inside nectarine fruit. Our results show that such a generic model permits detailed studies of various, unexplored architectural features affecting fruit quality development. PMID:27917187

  6. Integrating Physiology and Architecture in Models of Fruit Expansion.

    PubMed

    Cieslak, Mikolaj; Cheddadi, Ibrahim; Boudon, Frédéric; Baldazzi, Valentina; Génard, Michel; Godin, Christophe; Bertin, Nadia

    2016-01-01

    Architectural properties of a fruit, such as its shape, vascular patterns, and skin morphology, play a significant role in determining the distributions of water, carbohydrates, and nutrients inside the fruit. Understanding the impact of these properties on fruit quality is difficult because they develop over time and are highly dependent on both genetic and environmental controls. We present a 3D functional-structural fruit model that can be used to investigate effects of the principal architectural properties on fruit quality. We use a three-step modeling pipeline in the OpenAlea platform: (1) creating a 3D volumetric mesh representation of the internal and external fruit structure, (2) generating a complex network of vasculature that is embedded within this mesh, and (3) integrating aspects of the fruit's function, such as water and dry matter transport, with the fruit's structure. We restrict our approach to the phase where fruit growth is mostly due to cell expansion and the fruit has already differentiated into different tissue types. We show how fruit shape affects vascular patterns and, as a consequence, the distribution of sugar/water in tomato fruit. Furthermore, we show that strong interaction between tomato fruit shape and vessel density induces, independently of size, an important and contrasted gradient of water supply from the pedicel to the blossom end of the fruit. We also demonstrate how skin morphology related to microcracking distribution affects the distribution of water and sugars inside nectarine fruit. Our results show that such a generic model permits detailed studies of various, unexplored architectural features affecting fruit quality development.

  7. An open-architecture approach to defect analysis software for mask inspection systems

    NASA Astrophysics Data System (ADS)

    Pereira, Mark; Pai, Ravi R.; Reddy, Murali Mohan; Krishna, Ravi M.

    2009-04-01

    Industry data suggests that Mask Inspection represents the second biggest component of Mask Cost and Mask Turn Around Time (TAT). Ever-decreasing defect size targets lead to more sensitive mask inspection across the chip, thus generating too many defects. Hence, more operator time is being spent in analyzing and dispositioning defects. Also, the fact that multiple Mask Inspection Systems and Defect Analysis strategies would typically be in use in a Mask Shop or a Wafer Foundry further complicates the situation. In this scenario, there is a need for versatile, user-friendly, and extensible Defect Analysis software that reduces operator analysis time and enables correct classification and disposition of mask defects by providing intuitive visual and analysis aids. We propose a new vendor-neutral defect analysis software, NxDAT, based on an open architecture. The open architecture of NxDAT makes it easily extensible to support defect analysis for mask inspection systems from different vendors. The capability to load results from mask inspection systems from different vendors, either directly or through a common interface, enables correlation between inspections carried out by mask inspection systems from different vendors. This capability of NxDAT enhances the effectiveness of defect analysis as it directly addresses the real-life scenario where multiple types of mask inspection systems from different vendors co-exist in mask shops or wafer foundries. The open architecture also potentially enables loading wafer inspection results as well as loading data from other related tools such as Review Tools, Repair Tools, CD-SEM tools, etc., and correlating them with the corresponding mask inspection results. A unique Plug-In interface to NxDAT further enhances the openness of the architecture by enabling end-users to add their own proprietary defect analysis and image processing algorithms. The plug-in interface makes it
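
    The plug-in idea above can be sketched as a small interface that end-user defect classifiers implement plus a registry the host application iterates over. This is a hypothetical illustration of the pattern, not NxDAT's actual Plug-In API; the plug-in and its threshold are invented.

```python
# Hypothetical plug-in interface (not NxDAT's actual API): user-supplied
# classifiers implement one method and are looked up from a registry.
from abc import ABC, abstractmethod
import numpy as np

class DefectAnalysisPlugin(ABC):
    name: str

    @abstractmethod
    def classify(self, defect_image) -> str:
        """Return a defect class label for one inspection image."""

PLUGINS = []

def register(plugin: DefectAnalysisPlugin):
    PLUGINS.append(plugin)

class SizeThresholdPlugin(DefectAnalysisPlugin):
    name = "size-threshold"

    def __init__(self, max_pixels=25):
        self.max_pixels = max_pixels

    def classify(self, defect_image):
        return "critical" if defect_image.sum() > self.max_pixels else "nuisance"

register(SizeThresholdPlugin())

def analyze(defect_image):
    return {p.name: p.classify(defect_image) for p in PLUGINS}

print(analyze(np.ones((8, 8))))    # {'size-threshold': 'critical'}
```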

  8. "HIP" new software : the Hydroecological Integrity Assessment Process

    USGS Publications Warehouse

    Henriksen, Jim; Wilson, Juliette T.

    2006-01-01

    Center (FORT) have developed the Hydroecological Integrity Assessment Process (HIP) and a suite of software tools for conducting a hydrologic classification of streams, addressing instream flow needs, and assessing past and proposed hydrologic alterations on streamflow and other ecosystem components. The HIP recognizes that streamflow is strongly related to many critical physiochemical components of rivers, such as dissolved oxygen, channel geomorphology, and habitats. Streamflow is considered a “master variable” that limits the distribution, abundance, and diversity of many aquatic plant and animal species.

  9. Project Integration Architecture: A Practical Demonstration of Information Propagation

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    One of the goals of the Project Integration Architecture (PIA) effort is to provide the ability to propagate information between disparate applications. With this ability, applications may then be formed into an application graph constituting a super-application. Such a super-application would then provide all of the analysis appropriate to a given technical system. This paper reports on a small demonstration of this concept in which a Computer Aided Design (CAD) application was connected to an inlet analysis code and geometry information automatically propagated from one to the other. The majority of the work reported involved not the technology of information propagation, but rather the conversion of propagated information into a form usable by the receiving application.

  10. Complex Product Architecture Analysis using an Integrated Approach

    NASA Astrophysics Data System (ADS)

    Uddin, Amad; Felician Campean, Ioan; Khurshid Khan, Mohammed

    2014-07-01

    Product design decomposition and synthesis is a constant challenge, with continuously increasing complexity at each level of abstraction. Currently, design decomposition and synthesis analytical tasks are mostly accomplished via functional and structural methods. These methods are useful in different phases of the design process for product definition and architecture, but are limited in that they tend to focus more on 'what' and less on 'how', or vice versa. This paper combines a functional representation tool known as the System State Flow Diagram (a solution-independent approach), a solution search tool referred to as the Morphology Table, and the Design Structure Matrix (mainly a solution-dependent tool). The proposed approach incorporates a Multiple Domain Matrix (MDM) to integrate the knowledge of both solution-independent and solution-dependent analyses. The approach is illustrated with a case study of a solar robot toy, followed by its limitations, future work, and discussion.
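
    A minimal sketch of the solution-dependent tool named above, the Design Structure Matrix: a square matrix whose entry (i, j) records that element i receives an input from element j, from which fan-in and fan-out fall out as row and column sums. The elements and dependencies below are invented, not taken from the paper's solar robot toy case study.

```python
# Design Structure Matrix sketch: entry (i, j) = 1 means element i needs an
# input from element j. Elements and dependencies are invented.
import numpy as np

elements = ["solar panel", "battery", "motor controller", "wheel motor"]
dsm = np.zeros((len(elements), len(elements)), dtype=int)

def depends(a, b):                 # a receives an input from b
    dsm[elements.index(a), elements.index(b)] = 1

depends("battery", "solar panel")
depends("motor controller", "battery")
depends("wheel motor", "motor controller")

fan_in = dict(zip(elements, dsm.sum(axis=1)))    # inputs each element requires
fan_out = dict(zip(elements, dsm.sum(axis=0)))   # outputs each element provides
print(fan_in)
print(fan_out)
```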

  11. Specsim: A Software Simulator for Integral Field Unit Spectrometers

    NASA Astrophysics Data System (ADS)

    Lorente, N. P. F.; Glasse, A. C. H.; Wright, G. S.; Ramsay, S. K.; Evans, C. J.

    As the scale and complexity of each generation of telescopes and their instruments increases, the requirement for a means of furthering our understanding of their properties and limitations, from the initial design to the point of commissioning also grows. An effective way of learning about the behaviour of a new system is to employ a software simulator to generate synthetic astronomical data, based on a given set of telescope and instrument characteristics. The Specsim tool has been developed to model, in software, the operation of Integral Field Unit (IFU) spectrometers, so as to give the science, engineering and operations teams responsible for designing, building and running such instruments a preview of the data products before the system is operational. Specsim generates synthetic data frames approximating those which will be taken by the instrument. The program models astronomical sources and generates detector frames using the predicted and measured properties of the telescope and instrument. These frames can then be used to illustrate and inform a range of activities, including refining the design, developing calibration strategies and the development and testing of data reduction pipelines. Specsim is currently used to model the Medium Resolution Spectrograph on JWST-MIRI, and KMOS on the ESO VLT. The software has been designed in a modular fashion, thus allowing the tool to expand easily to model future instruments, by incorporating new models into the existing infrastructure.

  12. Integrated Payload Data Handling Systems Using Software Partitioning

    NASA Astrophysics Data System (ADS)

    Taylor, Alun; Hann, Mark; Wishart, Alex

    2015-09-01

    An integrated Payload Data Handling System (I-PDHS) is one in which multiple instruments share a central payload processor for their on-board data processing tasks. This offers a number of advantages over the conventional decentralised architecture. Savings in payload mass and power can be realised because the total processing resource is matched to the requirements, as opposed to the decentralised architecture, where the processing resource is in effect the sum of that required by all the applications. Overall development cost can be reduced using a common processor. At the individual instrument level, the potential benefits include a standardised application development environment, and the opportunity to run the instrument data handling application on a fully redundant and more powerful processing platform [1]. This paper describes a joint program by SCISYS UK Limited, Airbus Defence and Space, Imperial College London and RAL Space to implement a realistic demonstration of an I-PDHS using engineering models of flight instruments (a magnetometer and camera) and a laboratory demonstrator of a central payload processor which is functionally representative of a flight design. The objective is to raise the Technology Readiness Level of the centralised data processing technique by addressing the key areas of task partitioning (to prevent fault propagation) and the use of a common development process for the instrument applications. The project is supported by a UK Space Agency grant awarded under the National Space Technology Program SpaceCITI scheme.

  13. An architecture for integrating distributed and cooperating knowledge-based Air Force decision aids

    NASA Technical Reports Server (NTRS)

    Nugent, Richard O.; Tucker, Richard W.

    1988-01-01

    MITRE has been developing a Knowledge-Based Battle Management Testbed for evaluating the viability of integrating independently-developed knowledge-based decision aids in the Air Force tactical domain. The primary goal for the testbed architecture is to permit a new system to be added to a testbed with little change to the system's software. Each system that connects to the testbed network declares that it can provide a number of services to other systems. When a system wants to use another system's service, it does not address the server system by name, but instead transmits a request to the testbed network asking for a particular service to be performed. A key component of the testbed architecture is a common database which uses a relational database management system (RDBMS). The RDBMS provides a database update notification service to requesting systems. Normally, each system is expected to monitor data relations of interest to it. Alternatively, a system may broadcast an announcement message to inform other systems that an event of potential interest has occurred. Current research is aimed at dealing with issues resulting from integration efforts, such as dealing with potential mismatches of each system's assumptions about the common database, decentralizing network control, and coordinating multiple agents.
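
    The service-request pattern described above (address a service by name, never a server) can be sketched with a tiny in-process broker; the declared services and decision aids below are invented for illustration.

```python
# Tiny in-process sketch of addressing a *service*, not a server; the services
# and decision aids are invented.
SERVICES = {}                       # service name -> provider callable

def declare(service, provider):
    """A system joining the testbed declares the services it can provide."""
    SERVICES[service] = provider

def request(service, **kwargs):
    """A client asks for a service by name; it never names the server system."""
    if service not in SERVICES:
        raise LookupError(f"no system provides '{service}'")
    return SERVICES[service](**kwargs)

declare("threat-assessment", lambda track: "hostile" if track["speed"] > 600 else "unknown")
declare("mission-replan", lambda reason: f"replanning due to {reason}")

print(request("threat-assessment", track={"speed": 720}))
print(request("mission-replan", reason="weather"))
```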

  14. On Using Cloud Platforms in a Software Architecture for Smart Energy Grids

    SciTech Connect

    Simmhan, Yogesh; Giakkoupis, Michail; Cao, Baohua; Prasanna, Viktor K.

    2010-11-30

    Increasing concern about energy consumption is leading to infrastructure that continuously monitors consumer energy usage and allows power utilities to provide dynamic feedback to curtail peak power load. Smart Grid infrastructure being deployed globally needs scalable software platforms to rapidly integrate and analyze information streaming from millions of smart meters, forecast power usage and respond to operational events. Cloud platforms are well suited to support such data and compute intensive, always-on applications. We examine opportunities and challenges of using cloud platforms for such applications in the emerging domain of energy informatics.

  15. Software Architecture of the NASA Shuttle Ground Operations Simulator--SGOS

    NASA Technical Reports Server (NTRS)

    Cook, Robert P.; Lostroscio, Charles T.

    2005-01-01

    The SGOS executive and its subsystems have been an integral component of the Shuttle Launch Safety Program for almost thirty years. It is usable (via the LAN) by over 2000 NASA employees at the Kennedy Space Center and 11,000 contractors. SGOS supports over 800 models comprising several hundred thousand lines of code and over 1,00 MCP procedures. Yet neither language has a for loop!! The simulation software described in this paper is used to train ground controllers and to certify launch countdown readiness.

  16. Display system software for the integration of an ADAGE 3000 programmable display generator into the solid modeling package C.A.D. software

    NASA Technical Reports Server (NTRS)

    Montoya, R. J.; Lane, H. H., Jr.

    1986-01-01

    A software system that integrates an ADAGE 3000 Programmable Display Generator into a C.A.D. software package known as the Solid Modeling Program is described. The Solid Modeling Program (SMP) is an interactive program that is used to model complex solid objects through the composition of primitive geometric entities. In addition, SMP provides extensive facilities for model editing and display. The ADAGE 3000 Programmable Display Generator (PDG) is a color, raster scan, programmable display generator with a 32-bit bit-slice, bipolar microprocessor (BPS). The modularity of the system architecture and the width and speed of the system bus allow for additional co-processors in the system. These co-processors combine to provide efficient operations on and rendering of graphics entities. The resulting software system takes advantage of the graphics capabilities of the PDG in the operation of SMP by distributing its processing modules between the host and the PDG. Initially, the target host computer was a PRIME 850, which was later replaced with a VAX-11/785. Two versions of the software system were developed, a phase I and a phase II. In phase I, the ADAGE 3000 is used as a frame buffer. In phase II, SMP was functionally partitioned and some of its functions were implemented in the ADAGE 3000 by means of ADAGE's SOLID 3000 software package.

  17. Framework programmable platform for the advanced software development workstation. Integration mechanism design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Reddy, Uday; Ackley, Keith; Futrell, Mike

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by this model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated.

  18. Fault tolerant architectures for integrated aircraft electronics systems

    NASA Technical Reports Server (NTRS)

    Levitt, K. N.; Melliar-Smith, P. M.; Schwartz, R. L.

    1983-01-01

    Work into possible architectures for future flight control computer systems is described. Ada for Fault-Tolerant Systems, the NETS Network Error-Tolerant System architecture, and voting in asynchronous systems are covered.

  19. Development of economically viable, highly integrated, highly modular SEGIS architecture.

    SciTech Connect

    Enslin, Johan; Hamaoui, Ronald; Gonzalez, Sigifredo; Haddad, Ghaith; Rustom, Khalid; Stuby, Rick; Kuran, Mohammad; Mark, Evlyn; Amarin, Ruba; Alatrash, Hussam; Bower, Ward Isaac; Kuszmaul, Scott S.; Sena-Henderson, Lisa; David, Carolyn; Akhil, Abbas Ali

    2012-03-01

    Initiated in 2008, the SEGIS initiative is a partnership involving the U.S. DOE, Sandia National Laboratories, private sector companies, electric utilities, and universities. Projects supported under the initiative have focused on the complete-system development of solar technologies, with the dual goal of expanding renewable PV applications and addressing new challenges of connecting large-scale solar installations in higher penetrations to the electric grid. Petra Solar, Inc., a New Jersey-based company, received SEGIS funds to develop solutions to two of these key challenges: integrating increasing quantities of solar resources into the grid without compromising (and likely improving) power quality and reliability, and moving the design from a concept of intelligent system controls to successful commercialization. The resulting state-of-the-art technology now includes a distributed photovoltaic (PV) architecture comprising AC modules that not only feed directly into the electrical grid at distribution levels but are equipped with new functions that improve voltage stability and thus enhance overall grid stability. This integrated PV system technology, known as SunWave, has applications for 'Power on a Pole,' and comes with a suite of technical capabilities, including advanced inverter and system controls, micro-inverters (capable of operating at both the 120V and 240V levels), a communication system, a network management system, and semiconductor integration. Collectively, these components are poised to reduce total system cost, increase the system's overall value, and help mitigate the challenges of solar intermittency. Designed to be strategically located near the point of load, the new SunWave technology is suitable for integration directly into the electrical grid but is also suitable for emerging microgrid applications. SunWave was showcased as part of a SEGIS Demonstration Conference at Pepco Holdings, Inc., on September 29, 2011, and is presently undergoing

  20. Integrated and multiscale NDT for the study of architectural heritage

    NASA Astrophysics Data System (ADS)

    Nuzzo, Luigia; Masini, Nicola; Rizzo, Enzo; Lasaponara, Rosa

    2008-10-01

    The restoration of artistic and architectural heritage represents a benchmark of the cultural development of a society. To this end it is necessary to develop a suitable methodology for the analysis of the material and building components, which are usually brittle and in a poor state of conservation. The paper outlines the advantages and the drawbacks in the use of Non-Destructive Testing (NDT) techniques and the need to integrate them in order to obtain a reliable reconstruction of the internal characteristics of the building elements as well as the detection of defects. In the study case we used Ground Penetrating Radar (GPR), infrared thermography (IRT), sonic and ultrasonic tests to analyze a 13th century precious rose window in Southern Italy, affected by widespread decay and instability problems. The theoretical capabilities and limitations of NDT are strictly related to the frequency content of the signals used by the different techniques. Therefore, integrating several physical methods and using different frequency bands allowed a comprehensive, multi-scale approach to the restoration problem. This proved to be a proper strategy for obtaining high-resolution information on the building characteristics and the state of decay, which could support a careful structural restoration.

  1. A Prototype for the Support of Integrated Software Process Development and Improvement

    NASA Astrophysics Data System (ADS)

    Porrawatpreyakorn, Nalinpat; Quirchmayr, Gerald; Chutimaskul, Wichian

    An efficient software development process is one of the key success factors for quality software. Both the appropriate establishment and the continuous improvement of integrated project management and of the software development process contribute to this efficiency. This paper hence proposes a software process maintenance framework which consists of two core components: an integrated PMBOK-Scrum model describing how to establish a comprehensive set of project management and software engineering processes, and a software development maturity model advocating software process improvement. In addition, a prototype tool to support the framework is introduced.

  2. An Integrated Approach to Functional Engineering: An Engineering Database for Harness, Avionics and Software

    NASA Astrophysics Data System (ADS)

    Piras, Annamaria; Malucchi, Giovanni

    2012-08-01

    In the design and development phase of a new program, one of the critical aspects is the integration of all the functional requirements of the system and the control of overall consistency between the identified needs on one side and the available resources on the other, especially when both needs and resources are not yet consolidated but are evolving as program maturity increases. The Integrated Engineering Harness Avionics and Software database (IDEHAS) is a tool that has been developed to support this process within the Avionics and Software disciplines through the different phases of the program. The tool is designed to allow an incremental build-up of the avionics and software systems, from the description of the high-level architectural data (available in the early stages of the program), to the definition of the pin-to-pin connectivity information (typically consolidated in the design finalization stages), and finally to the construction and validation of the detailed telemetry parameters and commands to be used in the test phases and in the Mission Control Centre. The key feature of this approach, and of the associated tool, is that it allows the definition, maintenance, and update of all these data in a single, consistent environment. On one side, a system-level and concurrent approach requires the ability to easily integrate and update the best data available from the early stages of a program, in order to improve confidence in consistency and to control the design information. On the other side, the amount of information of different types and the cross-relationships among the data imply highly consolidated structures requiring many checks to guarantee data consistency, with negative effects on simplicity and flexibility, often limiting the attention given to special needs and to the interfaces with other disciplines.

  3. Open architectures for formal reasoning and deductive technologies for software development

    NASA Technical Reports Server (NTRS)

    Mccarthy, John; Manna, Zohar; Mason, Ian; Pnueli, Amir; Talcott, Carolyn; Waldinger, Richard

    1994-01-01

    The objective of this project is to develop an open architecture for formal reasoning systems. One goal is to provide a framework with a clear semantic basis for specification and instantiation of generic components; construction of complex systems by interconnecting components; and for making incremental improvements and tailoring to specific applications. Another goal is to develop methods for specifying component interfaces and interactions to facilitate use of existing and newly built systems as 'off the shelf' components, thus helping bridge the gap between producers and consumers of reasoning systems. In this report we summarize results in several areas: our data base of reasoning systems; a theory of binding structures; a theory of components of open systems; a framework for specifying components of open reasoning system; and an analysis of the integration of rewriting and linear arithmetic modules in Boyer-Moore using the above framework.

  4. Mobile Technology and CAD Technology Integration in Teaching Architectural Design Process for Producing Creative Product

    ERIC Educational Resources Information Center

    Bin Hassan, Isham Shah; Ismail, Mohd Arif; Mustafa, Ramlee

    2011-01-01

    The purpose of this research is to examine the effect of integrating mobile and CAD technology on teaching the architectural design process to Malaysian polytechnic architecture students in producing a creative product. The website is set up based on Carroll's minimal theory, while mobile and CAD technology integration is based on Brown and…

  5. Cascade photonic integrated circuit architecture for electro-optic in-phase quadrature/single sideband modulation or frequency conversion.

    PubMed

    Hasan, Mehedi; Hall, Trevor

    2015-11-01

    A photonic integrated circuit architecture for implementing frequency upconversion is proposed. The circuit consists of a 1×2 splitter and 2×1 combiner interconnected by two stages of differentially driven phase modulators having 2×2 multimode interference coupler between the stages. A transfer matrix approach is used to model the operation of the architecture. The predictions of the model are validated by simulations performed using an industry standard software tool. The intrinsic conversion efficiency of the proposed design is improved by 6 dB over the alternative functionally equivalent circuit based on dual parallel Mach-Zehnder modulators known in the prior art. A two-tone analysis is presented to study the linearity of the proposed circuit, and a comparison is provided over the alternative. The proposed circuit is suitable for integration in any platform that offers linear electro-optic phase modulation such as LiNbO(3), silicon, III-V, or hybrid technology.
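
    A minimal sketch of the transfer-matrix style of modelling used above: each passive element is a small complex matrix acting on the guided-mode amplitudes, and the cascade is a matrix product. The element matrices and drive phases below follow the textbook forms for ideal components and are illustrative, not the paper's exact circuit model.

```python
# Transfer-matrix sketch: ideal element matrices cascaded by multiplication.
# Ordering follows the description (splitter, phase pair, 2x2 MMI, phase pair,
# combiner); drive phases are illustrative.
import numpy as np

def splitter():                      # 1x2 split onto two arms
    return np.array([[1.0], [1.0]]) / np.sqrt(2)

def mmi_2x2():                       # ideal 2x2 multimode interference coupler
    return np.array([[1.0, 1.0j], [1.0j, 1.0]]) / np.sqrt(2)

def phase_pair(phi):                 # differentially driven phase modulators
    return np.diag([np.exp(1j * phi / 2), np.exp(-1j * phi / 2)])

def combiner():                      # 2x1 combiner
    return np.array([[1.0, 1.0]]) / np.sqrt(2)

phi1, phi2 = 0.3, 0.3                # instantaneous drive phases
field_out = combiner() @ phase_pair(phi2) @ mmi_2x2() @ phase_pair(phi1) @ splitter()
print(abs(field_out[0, 0]) ** 2)     # output intensity for this drive state
```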

  6. The Orion GN and C Data-Driven Flight Software Architecture for Automated Sequencing and Fault Recovery

    NASA Technical Reports Server (NTRS)

    King, Ellis; Hart, Jeremy; Odegard, Ryan

    2010-01-01

    The Orion Crew Exploration Vehicle (CEV) is being designed to include significantly more automation capability than either the Space Shuttle or the International Space Station (ISS). In particular, the vehicle flight software has requirements to accommodate increasingly automated missions throughout all phases of flight. A data-driven flight software architecture will provide an evolvable automation capability to sequence through Guidance, Navigation & Control (GN&C) flight software modes and configurations while maintaining the required flexibility and human control over the automation. This flexibility is a key aspect needed to address the maturation of operational concepts, to permit ground and crew operators to gain trust in the system and mitigate unpredictability in human spaceflight. To allow for mission flexibility and reconfigurability, a data-driven approach is being taken to load the mission event plan as well as the flight software artifacts associated with the GN&C subsystem. A database of GN&C-level sequencing data is presented which manages and tracks the mission-specific and algorithm parameters to provide a capability to schedule GN&C events within mission segments. The flight software data schema for performing automated mission sequencing is presented with a concept of operations for interactions with ground and onboard crew members. A prototype architecture for fault identification, isolation and recovery interactions with the automation software is presented and discussed as a forward work item.
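
    The data-driven sequencing idea can be sketched as a generic engine stepping through a loaded table of mission segments, GN&C modes, and trigger events, so that a new mission plan means new data rather than new code. The segments, modes, and triggers below are invented for illustration and are not Orion's actual schema.

```python
# Invented table and engine illustrating data-driven sequencing: the mission
# plan is data; the engine that walks it never changes.
SEQUENCE_TABLE = [
    # (mission segment, GN&C mode, trigger event)
    ("ascent", "ascent_guidance", "liftoff_detected"),
    ("orbit", "orbital_coast", "main_engine_cutoff"),
    ("entry", "entry_guidance", "deorbit_burn_complete"),
]

def active_mode(events_seen):
    """Return the latest (segment, mode) whose trigger event has occurred."""
    active = None
    for segment, mode, trigger in SEQUENCE_TABLE:
        if trigger in events_seen:
            active = (segment, mode)
    return active

print(active_mode({"liftoff_detected", "main_engine_cutoff"}))
# -> ('orbit', 'orbital_coast'); a new mission plan is a new table, not new code
```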

  7. Designing and Implementing a Distributed System Architecture for the Mars Rover Mission Planning Software (Maestro)

    NASA Technical Reports Server (NTRS)

    Goldgof, Gregory M.

    2005-01-01

    Distributed systems allow scientists from around the world to plan missions concurrently, while being updated on the revisions of their colleagues in real time. However, permitting multiple clients to simultaneously modify a single data repository can quickly lead to data corruption or inconsistent states between users. Since our message broker, the Java Message Service, does not ensure that messages will be received in the order they were published, we must implement our own numbering scheme to guarantee that changes to mission plans are performed in the correct sequence. Furthermore, distributed architectures must ensure that as new users connect to the system, they synchronize with the database without missing any messages or falling into an inconsistent state. Robust systems must also guarantee that all clients will remain synchronized with the database even in the case of multiple client failure, which can occur at any time due to lost network connections or a user's own system instability. The final design for the distributed system behind the Mars rover mission planning software fulfills all of these requirements and upon completion will be deployed to MER at the end of 2005 as well as Phoenix (2007) and MSL (2009).
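
    The ordering problem described above can be sketched with a publisher-assigned sequence number and a small reordering buffer on each client: out-of-order updates are held until every earlier change has been applied. This illustrates the intent of the numbering scheme, not Maestro's actual implementation.

```python
# Reordering buffer keyed by a publisher-assigned sequence number: changes are
# applied strictly in order even if the broker delivers them out of order.
class OrderedApplier:
    def __init__(self):
        self.next_seq = 1
        self.pending = {}                       # seq -> change held until its turn

    def on_message(self, seq, change, apply):
        self.pending[seq] = change
        while self.next_seq in self.pending:    # drain every in-order change
            apply(self.pending.pop(self.next_seq))
            self.next_seq += 1

log = []
applier = OrderedApplier()
for seq, change in [(2, "move waypoint"), (1, "add waypoint"), (3, "delete waypoint")]:
    applier.on_message(seq, change, log.append)
print(log)    # ['add waypoint', 'move waypoint', 'delete waypoint']
```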

  8. Performance evaluation of multi-stratum resources integrated resilience for software defined inter-data center interconnect.

    PubMed

    Yang, Hui; Zhang, Jie; Zhao, Yongli; Ji, Yuefeng; Wu, Jialin; Lin, Yi; Han, Jianrui; Lee, Young

    2015-05-18

    Inter-data center interconnect with IP over elastic optical network (EON) is a promising scenario to meet the high burstiness and high-bandwidth requirements of data center services. In our previous work, we implemented multi-stratum resource integration among IP networks, optical networks and application stratum resources, which allows data center services to be accommodated. This study extends that work to consider service resilience in the case of edge optical node failure. We propose a novel multi-stratum resources integrated resilience (MSRIR) architecture for the services in software defined inter-data center interconnect based on IP over EON. A global resources integrated resilience (GRIR) algorithm is introduced based on the proposed architecture. The MSRIR can enable cross stratum optimization, provide resilience using the resources of multiple stratums, and enhance the responsiveness of data center service resilience to dynamic end-to-end service demands. The overall feasibility and efficiency of the proposed architecture is experimentally verified on the control plane of our OpenFlow-based enhanced SDN (eSDN) testbed. The performance of the GRIR algorithm under a heavy traffic load scenario is also quantitatively evaluated based on the MSRIR architecture in terms of path blocking probability, resilience latency and resource utilization, compared with other resilience algorithms.

  9. Effective software design and development for the new graph architecture HPC machines.

    SciTech Connect

    Dechev, Damian

    2012-03-01

    Software applications need to change and adapt as modern architectures evolve. Nowadays, advancement in chip design translates to increased parallelism. Exploiting such parallelism is a major challenge in modern software engineering. Multicore processors are about to introduce a significant change in the way we design and use fundamental data structures. In this work we describe the design and programming principles of a software library of highly concurrent, scalable and nonblocking data containers. In this project we have created algorithms and data structures for handling fundamental computations in massively multithreaded contexts, and we have incorporated these into a usable library with a familiar look and feel. In this work we demonstrate the first design and implementation of a wait-free hash table. Our multiprocessor data structure design allows a large number of threads to concurrently insert, remove, and retrieve information. Non-blocking designs alleviate the problems traditionally associated with the use of mutual exclusion, such as bottlenecks and deadlock. Lock-freedom provides the ability to share data without some of the drawbacks associated with locks; however, these designs remain susceptible to starvation. Furthermore, wait-freedom provides all of the benefits of lock-free synchronization with the added assurance that every thread makes progress in a finite number of steps. This implies deadlock-freedom, livelock-freedom, starvation-freedom, freedom from priority inversion, and thread-safety. The challenges of providing the desirable progress and correctness guarantees of wait-free objects make their design and implementation difficult. There are few wait-free data structures described in the literature. Using only standard atomic operations provided by the hardware, our design is portable; therefore, it is applicable to a variety of data-intensive applications including the domains of embedded systems and supercomputers.

  10. The application of natural ventilation of residential architecture in the integrated design

    NASA Astrophysics Data System (ADS)

    Yao, Ji

    2017-04-01

    As one of the important parts of architectural design, ventilation reflects, to a large extent, the key factor of energy conservation. Minimizing the use of energy resources in buildings and making buildings harmonize with their surroundings have gradually become important goals for residential architecture. The integrated design of residential architecture should therefore not only highlight natural ventilation techniques, but also combine natural ventilation with the main features of the local climate; designers should carry out a unified analysis that takes all of these factors into consideration. In this way, the architectural approach becomes more comprehensive and complete. Guided by the concept of sustainable development, natural ventilation, as one of the ecological technologies, is applied extensively in many architectural designs owing to its economic and health benefits.

  11. Optimised layout and roadway support planning with integrated intelligent software

    SciTech Connect

    Kouniali, S.; Josien, J.P.; Piguet, J.P.

    1996-12-01

    Experience with knowledge-based systems for layout planning and roadway support dimensioning has been accumulating in European coal mining since 1985. The systems SOUT (support choice and dimensioning, 1989), SOUT 2, PLANANK (planning of bolt support), Exos (layout planning diagnosis, 1994) and SOUT 3 (1995) have been developed in close cooperation by CdF, INERIS and EMN (France) and RAG, DMT and TH Aachen (Germany); development of ISLSP (Integrated Software for Layout and Support Planning) is in progress (completion scheduled for July 1996). This new software technology, in combination with conventional programming systems, numerical models and existing databases, turned out to be suited for setting up an intelligent decision aid for layout and roadway support planning. The system enhances the reliability of planning and optimises the safety-to-cost ratio for (1) deformation forecast for roadways in seam and surrounding rocks, with consideration of the general position of the roadway in the rock mass (zones of increased pressure, position of operating and mined panels); (2) support dimensioning; (3) yielding arches, rigid arches, porch sets, rigid rings, yielding rings and bolting/shotcreting for drifts; (4) yielding arches, rigid arches and porch sets for roadways in seam; and (5) bolt support for gateroads (assessment of exclusion criteria and calculation of the bolting pattern) and bolting of face-end zones (feasibility and safety assessment; stability guarantee).

  12. Integrated software framework for processing of geophysical data

    NASA Astrophysics Data System (ADS)

    Chubak, Glenn; Morozov, Igor

    2006-07-01

    We present an integrated software framework for geophysical data processing, based on an updated seismic data processing program package originally developed at the Program for Crustal Studies at the University of Wyoming. Unlike other systems, this processing monitor supports structured multi-component seismic data streams and multi-dimensional data traces, and employs a unique backpropagation execution logic. This results in an unusual flexibility of processing, allowing the system to handle nearly any geophysical data. A modern and feature-rich graphical user interface (GUI) was developed for the system, allowing editing and submission of processing flows and interaction with running jobs. Multiple jobs can be executed on distributed multi-processor networks and controlled from the same GUI. Jobs, in turn, can also be parallelized to take advantage of parallel processing environments, such as local area networks and Beowulf clusters.

  13. Propulsion/flight control integration technology (PROFIT) software system definition

    NASA Technical Reports Server (NTRS)

    Carlin, C. M.; Hastings, W. J.

    1978-01-01

    The Propulsion Flight Control Integration Technology (PROFIT) program is designed to develop a flying testbed dedicated to controls research. The control software for PROFIT is defined. Maximum flexibility, needed for long-term use of the flight facility, is achieved through a modular design. The Host program processes inputs from the telemetry uplink, aircraft central computer, cockpit computer control and plant sensors to form an input data base for use by the control algorithms. The control algorithms, programmed as application modules, process the input data to generate an output data base. The Host program formats the data for output to the telemetry downlink, the cockpit computer control, and the control effectors. Two application modules are defined: the bill-of-materials F-100 engine control and the bill-of-materials F-15 inlet control.
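
    The Host/application-module split described above can be illustrated with a short, purely hypothetical sketch (the module logic and field names are invented, not the PROFIT software):

      def host_collect(uplink, central_computer, cockpit, sensors):
          """Host side: merge all input sources into one input data base."""
          return {"uplink": uplink, "acc": central_computer, "cockpit": cockpit, "sensors": sensors}

      def f100_engine_control(input_db):
          """One application module: maps the input data base to an output data base."""
          return {"fuel_flow_cmd": 0.5 * input_db["sensors"]["throttle"]}

      def host_output(output_db):
          """Host side: format the output data base for downlink and effectors."""
          print("to downlink/effectors:", output_db)

      input_db = host_collect(uplink={}, central_computer={}, cockpit={}, sensors={"throttle": 0.8})
      host_output(f100_engine_control(input_db))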

  14. The software architecture of the camera for the ASTRI SST-2M prototype for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Sangiorgi, Pierluca; Capalbi, Milvia; Gimenes, Renato; La Rosa, Giovanni; Russo, Francesco; Segreto, Alberto; Sottile, Giuseppe; Catalano, Osvaldo

    2016-07-01

    The purpose of this contribution is to present the current status of the software architecture of the ASTRI SST-2M Cherenkov Camera. The ASTRI SST-2M telescope is an end-to-end prototype for the Small Size Telescope of the Cherenkov Telescope Array. The ASTRI camera is an innovative instrument based on SiPM detectors and has several internal hardware components. In this contribution we will give a brief description of the hardware components of the camera of the ASTRI SST-2M prototype and of their interconnections. Then we will present the outcome of the software architectural design process that we carried out in order to identify the main structural components of the camera software system and the relationships among them. We will analyze the architectural model that describes how the camera software is organized as a set of communicating blocks. Finally, we will show where these blocks are deployed in the hardware components and how they interact. We will describe in some detail the management of the physical communication ports and external ancillary devices, the high-precision time-tag management, the fast data collection and the fast data exchange between different camera subsystems, and the interfacing with the external systems.

  15. U.S. Army Workshop on Exploring Enterprise, System of Systems, System, and Software Architectures

    DTIC Science & Technology

    2009-03-01

    Excerpts: Overview of Enterprise Architecture: definition, IT alignment, governance, notations and languages, Zachman, TOGAF (Clements, SEI; Carol Wortman). An architecture framework, as defined by TOGAF, is a tool which can be used for developing a broad range of architecture descriptions. ...based on a formal metamodel to promote architecture frameworks (e.g., DoDAF, MODAF, TOGAF, and the NATO Architecture Framework [NAF]) and the MAFP tool...

  16. Mercury - A New Software Package for Orbital Integrations

    NASA Astrophysics Data System (ADS)

    Chambers, J. E.; Migliorini, F.

    1997-07-01

    We present Mercury: a new general-purpose software package for carrying out orbital integrations for problems in solar-system dynamics. Suitable applications include studying the long-term stability of the planetary system, investigating the orbital evolution of comets, asteroids or meteoroids, and simulating planetary accretion. Mercury is designed to be versatile and easy to use, accepting initial conditions in either Cartesian coordinates or Keplerian elements in ``cometary'' or ``asteroidal'' format, with different epochs of osculation for different objects. Output from an integration consists of either osculating or averaged (``proper'') elements, written in a machine-independent compressed format, which allows the results of a calculation performed on one platform to be transferred (e.g. via FTP) and decoded on another. Mercury itself is platform independent, and can be run on machines using DEC Unix, Open VMS, HP Unix, Solaris, Linux or DOS. During an integration, Mercury monitors and records details of close encounters, sungrazing events, ejections and collisions between objects. The effects of non-gravitational forces on comets can also be modelled. Additional effects such as Poynting-Robertson drag, post-Newtonian corrections, oblateness of the primary, and the galactic potential will be incorporated in future. The package currently supports integrations using a mixed-variable symplectic routine, the Bulirsch-Stoer method, and a hybrid code for planetary accretion calculations; with Everhart's popular RADAU algorithm and a symmetric multistep routine to be added shortly. Our presentation will include a demonstration of the latest version of Mercury, with the explicit aim of getting feedback from potential users and incorporating these suggestions into a final version that will be made available to everybody.
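
    Mercury's mixed-variable symplectic and Bulirsch-Stoer integrators are far more elaborate than can be shown here, but the simplest symplectic scheme, a kick-drift-kick leapfrog for a test particle around a central mass, conveys the flavor of such an integration step. This sketch is illustrative only and is not part of the Mercury package:

      import numpy as np

      def leapfrog_orbit(r, v, mu=1.0, dt=1e-3, steps=10000):
          """Advance a test particle about a central body of gravitational
          parameter mu using the symplectic kick-drift-kick scheme."""
          for _ in range(steps):
              a = -mu * r / np.linalg.norm(r) ** 3
              v = v + 0.5 * dt * a          # half kick
              r = r + dt * v                # drift
              a = -mu * r / np.linalg.norm(r) ** 3
              v = v + 0.5 * dt * a          # half kick
          return r, v

      r, v = leapfrog_orbit(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
      print("position:", r, "velocity:", v)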

  17. A neural circuit architecture for angular integration in Drosophila.

    PubMed

    Green, Jonathan; Adachi, Atsuko; Shah, Kunal K; Hirokawa, Jonathan D; Magani, Pablo S; Maimon, Gaby

    2017-06-01

    Many animals keep track of their angular heading over time while navigating through their environment. However, a neural-circuit architecture for computing heading has not been experimentally defined in any species. Here we describe a set of clockwise- and anticlockwise-shifting neurons in the Drosophila central complex whose wiring and physiology provide a means to rotate an angular heading estimate based on the fly's angular velocity. We show that each class of shifting neurons exists in two subtypes, with spatiotemporal activity profiles that suggest different roles for each subtype at the start and end of tethered-walking turns. Shifting neurons are required for the heading system to properly track the fly's heading in the dark, and stimulation of these neurons induces predictable shifts in the heading signal. The central features of this biological circuit are analogous to those of computational models proposed for head-direction cells in rodents and may shed light on how neural systems, in general, perform integration.

  18. The modular and integrative functional architecture of the human brain

    PubMed Central

    Bertolero, Maxwell A.; Yeo, B. T. Thomas; D’Esposito, Mark

    2015-01-01

    Network-based analyses of brain imaging data consistently reveal distinct modules and connector nodes with diverse global connectivity across the modules. How discrete the functions of modules are, how dependent the computational load of each module is to the other modules’ processing, and what the precise role of connector nodes is for between-module communication remains underspecified. Here, we use a network model of the brain derived from resting-state functional MRI (rs-fMRI) data and investigate the modular functional architecture of the human brain by analyzing activity at different types of nodes in the network across 9,208 experiments of 77 cognitive tasks in the BrainMap database. Using an author–topic model of cognitive functions, we find a strong spatial correspondence between the cognitive functions and the network’s modules, suggesting that each module performs a discrete cognitive function. Crucially, activity at local nodes within the modules does not increase in tasks that require more cognitive functions, demonstrating the autonomy of modules’ functions. However, connector nodes do exhibit increased activity when more cognitive functions are engaged in a task. Moreover, connector nodes are located where brain activity is associated with many different cognitive functions. Connector nodes potentially play a role in between-module communication that maintains the modular function of the brain. Together, these findings provide a network account of the brain’s modular yet integrated implementation of cognitive functions. PMID:26598686

  19. CORBA-Based Distributed Software Framework for the NIF Integrated Computer Control System

    SciTech Connect

    Stout, E A; Carey, R W; Estes, C M; Fisher, J M; Lagin, L J; Mathisen, D G; Reynolds, C A; Sanchez, R J

    2007-11-20

    The National Ignition Facility (NIF), currently under construction at the Lawrence Livermore National Laboratory, is a stadium-sized facility containing a 192-beam, 1.8 Megajoule, 500-Terawatt, ultra-violet laser system together with a 10-meter diameter target chamber with room for nearly 100 experimental diagnostics. The NIF is operated by the Integrated Computer Control System (ICCS) which is a scalable, framework-based control system distributed over 800 computers throughout the NIF. The framework provides templates and services at multiple levels of abstraction for the construction of software applications that communicate via CORBA (Common Object Request Broker Architecture). Object-oriented software design patterns are implemented as templates and extended by application software. Developers extend the framework base classes to model the numerous physical control points and implement specializations of common application behaviors. An estimated 140 thousand software objects, each individually addressable through CORBA, will be active at full scale. Many of these objects have persistent configuration information stored in a database. The configuration data is used to initialize the objects at system start-up. Centralized server programs that implement events, alerts, reservations, data archival, name service, data access, and process management provide common system wide services. At the highest level, a model-driven, distributed shot automation system provides a flexible and scalable framework for automatic sequencing of work-flow for control and monitoring of NIF shots. The shot model, in conjunction with data defining the parameters and goals of an experiment, describes the steps to be performed by each subsystem in order to prepare for and fire a NIF shot. Status and usage of this distributed framework are described.
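
    The extension pattern described above, framework base classes specialized by application code and initialized from stored configuration, can be sketched as follows. The class and attribute names are illustrative assumptions, not the actual ICCS API:

      class FrameworkDevice:
          """Framework-side base class modeling a physical control point."""
          def __init__(self, name, config):
              self.name = name
              self.apply_config(config)          # configuration normally loaded from a database

          def apply_config(self, config):
              for key, value in config.items():
                  setattr(self, key, value)

      class StepperMotor(FrameworkDevice):
          """Application-side specialization of a common device behavior."""
          def move_to(self, position):
              print(f"{self.name}: moving to {position} (limit {self.max_travel})")

      motor = StepperMotor("beamline_mirror_1", {"max_travel": 25.0})
      motor.move_to(12.5)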

  20. Dietary intake assessment using integrated sensors and software

    NASA Astrophysics Data System (ADS)

    Shang, Junqing; Pepin, Eric; Johnson, Eric; Hazel, David; Teredesai, Ankur; Kristal, Alan; Mamishev, Alexander

    2012-02-01

    The area of dietary assessment is becoming increasingly important as obesity rates soar, but valid measurement of the food intake of free-living persons is extraordinarily challenging. Traditional paper-based dietary assessment methods have limitations due to bias, user burden and cost, and therefore improved methods are needed to address important hypotheses related to diet and health. In this paper, we describe the progress of our mobile Diet Data Recorder System (DDRS), where an electronic device is used for objective measurement of dietary intake in real time and at moderate cost. The DDRS consists of (1) a mobile device that integrates a smartphone and an integrated laser package, (2) software on the smartphone for data collection and laser control, (3) an algorithm to process acquired data for food volume estimation, which is the largest source of error in calculating dietary intake, and (4) a database and interface for data storage and management. The estimated food volume, together with direct entries of food questionnaires and voice recordings, could provide dietitians and nutritional epidemiologists with more complete food descriptions and more accurate food portion sizes. We describe the system design of DDRS and initial results of dietary assessment.

  1. Integrated Software for Analyzing Designs of Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Philips, Alan D.

    2003-01-01

    Launch Vehicle Analysis Tool (LVA) is a computer program for preliminary-design structural analysis of launch vehicles. Before LVA was developed, in order to analyze the structure of a launch vehicle, it was necessary to estimate its weight, feed this estimate into a program to obtain pre-launch and flight loads, then feed these loads into structural and thermal analysis programs to obtain a second weight estimate. If the first and second weight estimates differed, it was necessary to reiterate these analyses until the solution converged. This process generally took six to twelve person-months of effort. LVA incorporates a text-to-structural-layout converter, configuration drawing, mass properties generation, pre-launch and flight loads analysis, loads output plotting, direct-solution structural analysis, and thermal analysis subprograms. These subprograms are integrated in LVA so that solutions can be iterated automatically. LVA incorporates expert-system software that makes fundamental design decisions without intervention by the user. It also includes unique algorithms based on extensive research. The total integration of analysis modules drastically reduces the need for interaction with the user. A typical solution can be obtained in 30 to 60 minutes. Subsequent runs can be done in less than two minutes.
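
    The weight-convergence loop that LVA automates is essentially a fixed-point iteration. The sketch below illustrates the idea with a placeholder analysis function; it is not the actual LVA code or model:

      def loads_and_structures(weight_kg):
          """Placeholder for the loads, structural and thermal subprograms, which
          return an updated weight estimate for a given input weight."""
          return 0.6 * weight_kg + 4000.0

      def converge(initial_kg, tol=1.0, max_iter=50):
          w = initial_kg
          for i in range(1, max_iter + 1):
              w_new = loads_and_structures(w)
              if abs(w_new - w) < tol:         # first and second estimates agree
                  return w_new, i
              w = w_new                        # otherwise reiterate the analyses
          raise RuntimeError("weight estimate did not converge")

      weight, iterations = converge(8000.0)
      print(f"converged to {weight:.1f} kg in {iterations} iterations")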

  2. Building Structure Design as an Integral Part of Architecture: A Teaching Model for Students of Architecture

    ERIC Educational Resources Information Center

    Unay, Ali Ihsan; Ozmen, Cengiz

    2006-01-01

    This paper explores the place of structural design within undergraduate architectural education. The role and format of lecture-based structure courses within an education system, organized around the architectural design studio is discussed with its most prominent problems and proposed solutions. The fundamental concept of the current teaching…

  4. The EPOS Architecture: Integrated Services for solid Earth Science

    NASA Astrophysics Data System (ADS)

    Cocco, Massimo; Consortium, Epos

    2013-04-01

    The European Plate Observing System (EPOS) represents a scientific vision and an IT approach in which innovative multidisciplinary research is made possible for a better understanding of the physical processes controlling earthquakes, volcanic eruptions, unrest episodes and tsunamis as well as those driving tectonics and Earth surface dynamics. EPOS has a long-term plan to facilitate integrated use of data, models and facilities from existing (but also new) distributed research infrastructures, for solid Earth science. One primary purpose of EPOS is to take full advantage of the new e-science opportunities coming available. The aim is to obtain an efficient and comprehensive multidisciplinary research platform for the Earth sciences in Europe. The EPOS preparatory phase (EPOS PP), funded by the European Commission within the Capacities program, started on November 1st 2010 and it has completed its first two years of activity. EPOS is presently mid-way through its preparatory phase and to date it has achieved all the objectives, milestones and deliverables planned in its roadmap towards construction. The EPOS mission is to integrate the existing research infrastructures (RIs) in solid Earth science warranting increased accessibility and usability of multidisciplinary data from monitoring networks, laboratory experiments and computational simulations. This is expected to enhance worldwide interoperability in the Earth Sciences and establish a leading, integrated European infrastructure offering services to researchers and other stakeholders. The Preparatory Phase aims at leveraging the project to the level of maturity required to implement the EPOS construction phase, with a defined legal structure, detailed technical planning and financial plan. We will present the EPOS architecture, which relies on the integration of the main outcomes from legal, governance and financial work following the strategic EPOS roadmap and according to the technical work done during the

  5. The Application of New Software Technology to the Architecture of the National Cycle Program

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1997-01-01

    As part of the Numerical Propulsion System Simulation (NPSS) effort of NASA Lewis in conjunction with the United States aeropropulsion industry, a new system simulation framework, the National Cycle Program (NCP), capable of combining existing empirical engine models with new detailed component-based computational models, is being developed. The software architecture of the NCP program involves a generalized object-oriented framework and a base set of engine component models along with supporting tool kits which will support engine simulation in a distributed environment. As the models are extended to contain two and three dimensions, the computing load increases rapidly, and it is intended that this load be distributed across multiple workstations executing concurrently in order to get acceptably fast results. The research carried out was directed toward performance analysis of the distributed object system, more specifically toward the performance of the actor-based distributed object design created earlier. To this end, the research was directed toward the design and implementation of suitable performance-analysis techniques and software to demonstrate those techniques. Three specific results are reported in two separate reports submitted as NASA Technical Memoranda: (1) design, implementation, and testing of a performance analysis program for a set of active objects (actor-based objects) which allowed the individual actors to be assigned to arbitrary processes on an arbitrary set of machines; (2) the global-balance-equation approach has the fundamental limitation that the number of equations increases exponentially with the number of actors; hence, unlike many approximate approaches to this problem, the nearest-neighbor approach allows checking of the solution and an estimate of the error. The technique was demonstrated in a prototype analysis program as part of this research. The results of the program were

  6. Software architecture and design of the web services facilitating climate model diagnostic analysis

    NASA Astrophysics Data System (ADS)

    Pan, L.; Lee, S.; Zhang, J.; Tang, B.; Zhai, C.; Jiang, J. H.; Wang, W.; Bao, Q.; Qi, M.; Kubar, T. L.; Teixeira, J.

    2015-12-01

    Climate model diagnostic analysis is a computationally- and data-intensive task because it involves multiple numerical model outputs and satellite observation data that can both be high resolution. We have built an online tool that facilitates this process. The tool is called Climate Model Diagnostic Analyzer (CMDA). It employs web service technology and provides a web-based user interface. The benefits of these choices include: (1) no installation of any software other than a browser, hence platform independence; (2) co-location of computation and big data on the server side, with only small results and plots downloaded on the client side, hence high data efficiency; (3) a multi-threaded implementation to achieve parallel performance on multi-core servers; and (4) cloud deployment so that each user has a dedicated virtual machine. In this presentation, we will focus on the computer science aspects of this tool, namely the architectural design, the infrastructure of the web services, the implementation of the web-based user interface, the mechanism of provenance collection, the approach to virtualization, and the Amazon Cloud deployment. As an example, we will describe our methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). Another example is the use of Docker, a light-weight virtualization container, to distribute and deploy CMDA onto an Amazon EC2 instance. Our tool CMDA has been successfully used in the 2014 Summer School hosted by the JPL Center for Climate Science. Students gave positive feedback in general, and we will report their comments. An enhanced version of CMDA with several new features, some requested by the 2014 students, will be used in the 2015 Summer School soon.
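
    A minimal sketch of the Flask-wrapping approach mentioned above is shown below. The route, parameters and stand-in analysis function are assumptions for illustration, not the actual CMDA service definitions; in production such an app would typically sit behind Gunicorn:

      from flask import Flask, jsonify, request

      app = Flask(__name__)

      def run_diagnostic(model, variable):
          """Stand-in for an existing science analysis routine."""
          return {"model": model, "variable": variable, "mean_bias": 0.42}

      @app.route("/diagnostic", methods=["GET"])
      def diagnostic():
          model = request.args.get("model", "model-A")            # hypothetical defaults
          variable = request.args.get("variable", "cloud_fraction")
          return jsonify(run_diagnostic(model, variable))         # small result returned to the client

      if __name__ == "__main__":
          app.run(host="0.0.0.0", port=8080)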

  7. The Titan graphics supercomputer architecture

    SciTech Connect

    Diede, T.; Hagenmaier, C.F.; Miranker, G.S.; Rubinstein, J.J.; Worley, W.S. Jr. )

    1988-09-01

    Leading-edge hardware and software technologies now make possible a new class of system: the graphics supercomputer. The Titan architecture provides a substantial fraction of supercomputer performance plus integrated high-quality graphics.

  8. DSSR: an integrated software tool for dissecting the spatial structure of RNA

    PubMed Central

    Lu, Xiang-Jun; Bussemaker, Harmen J.; Olson, Wilma K.

    2015-01-01

    Insight into the three-dimensional architecture of RNA is essential for understanding its cellular functions. However, even the classic transfer RNA structure contains features that are overlooked by existing bioinformatics tools. Here we present DSSR (Dissecting the Spatial Structure of RNA), an integrated and automated tool for analyzing and annotating RNA tertiary structures. The software identifies canonical and noncanonical base pairs, including those with modified nucleotides, in any tautomeric or protonation state. DSSR detects higher-order coplanar base associations, termed multiplets. It finds arrays of stacked pairs, classifies them by base-pair identity and backbone connectivity, and distinguishes a stem of covalently connected canonical pairs from a helix of stacked pairs of arbitrary type/linkage. DSSR identifies coaxial stacking of multiple stems within a single helix and lists isolated canonical pairs that lie outside of a stem. The program characterizes ‘closed’ loops of various types (hairpin, bulge, internal, and junction loops) and pseudoknots of arbitrary complexity. Notably, DSSR employs isolated pairs and the ends of stems, whether pseudoknotted or not, to define junction loops. This new, inclusive definition provides a novel perspective on the spatial organization of RNA. Tests on all nucleic acid structures in the Protein Data Bank confirm the efficiency and robustness of the software, and applications to representative RNA molecules illustrate its unique features. DSSR and related materials are freely available at http://x3dna.org/. PMID:26184874

  9. DSSR: an integrated software tool for dissecting the spatial structure of RNA.

    PubMed

    Lu, Xiang-Jun; Bussemaker, Harmen J; Olson, Wilma K

    2015-12-02

    Insight into the three-dimensional architecture of RNA is essential for understanding its cellular functions. However, even the classic transfer RNA structure contains features that are overlooked by existing bioinformatics tools. Here we present DSSR (Dissecting the Spatial Structure of RNA), an integrated and automated tool for analyzing and annotating RNA tertiary structures. The software identifies canonical and noncanonical base pairs, including those with modified nucleotides, in any tautomeric or protonation state. DSSR detects higher-order coplanar base associations, termed multiplets. It finds arrays of stacked pairs, classifies them by base-pair identity and backbone connectivity, and distinguishes a stem of covalently connected canonical pairs from a helix of stacked pairs of arbitrary type/linkage. DSSR identifies coaxial stacking of multiple stems within a single helix and lists isolated canonical pairs that lie outside of a stem. The program characterizes 'closed' loops of various types (hairpin, bulge, internal, and junction loops) and pseudoknots of arbitrary complexity. Notably, DSSR employs isolated pairs and the ends of stems, whether pseudoknotted or not, to define junction loops. This new, inclusive definition provides a novel perspective on the spatial organization of RNA. Tests on all nucleic acid structures in the Protein Data Bank confirm the efficiency and robustness of the software, and applications to representative RNA molecules illustrate its unique features. DSSR and related materials are freely available at http://x3dna.org/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  10. A software and hardware architecture for a high-availability PACS.

    PubMed

    Gutiérrez-Martínez, Josefina; Núñez-Gaona, Marco Antonio; Aguirre-Meneses, Heriberto; Delgado-Esquerra, Ruth Evelin

    2012-08-01

    The increasing number of radiology studies has led to the emergence of new requirements for managing medical information, mainly affecting the storage of digital images. Today, interaction between workflow management and the legal rules that govern it is necessary to allow efficient control of medical technology and associated costs. Another topic that is growing in importance within the healthcare sector is compliance, which includes the retention of studies, information security, and patient privacy. Previously, we conducted a series of extensive analyses and measurements of pre-existing operating conditions. These studies and projects have been described in other papers. The first phase, hardware and software installation and initial tests, was completed in March 2006. The storage phase was built step by step until the PACS-INR was totally completed. Two important aspects were considered in the integration of components: (1) the reliability and performance of the system to transfer and display DICOM images, and (2) the availability of data backups for disaster recovery and downtime scenarios. This paper describes the high-availability model for a large-scale PACS to support the storage and retrieval of data using CAS and DAS technologies to provide an open storage platform. This solution offers a simple framework that integrates and automates the information at low cost and minimum risk. Likewise, the model allows an optimized use of the information infrastructure in the clinical environment. The tests of the model include massive data migration, openness, scalability, and standard compatibility to avoid locking data into a proprietary technology.

  11. A Reusable and Adaptable Software Architecture for Embedded Space Flight System: The Core Flight Software System (CFS)

    NASA Technical Reports Server (NTRS)

    Wilmot, Jonathan

    2005-01-01

    The contents include the following: high availability; hardware operating in a harsh environment; flight processors that vary widely due to power and weight constraints; software that must be remotely modifiable and still operate while changes are being made; many custom, one-of-a-kind interfaces for one-of-a-kind missions; sustaining engineering; and a high price of failure, tens to hundreds of millions of dollars.

  12. Executable Behavioral Modeling of System and Software Architecture Specifications to Inform Resourcing Decisions

    DTIC Science & Technology

    2016-09-01

    Excerpts: a precise architecture model that all users can interpret is used to communicate with a spectrum of stakeholders; a system's required behaviors can be modeled in MP to confirm that the requirements communicated by the stakeholders have been satisfied; different stakeholders may require a different view of the architecture or architecture model to communicate relevant information.

  13. SDN architecture for optical packet and circuit integrated networks

    NASA Astrophysics Data System (ADS)

    Furukawa, Hideaki; Miyazawa, Takaya

    2016-02-01

    We have been developing an optical packet and circuit integrated (OPCI) network, which realizes dynamic optical paths, high-density packet multiplexing, and flexible wavelength resource allocation. In OPCI networks, a best-effort service and a QoS-guaranteed service are provided by employing optical packet switching (OPS) and optical circuit switching (OCS), respectively, and users can select between these services. Different wavelength resources are assigned to the OPS and OCS links, and the amount of wavelength resources assigned to each is dynamically changed in accordance with service usage conditions. To apply OPCI networks to wide-area (core/metro) networks, we have developed an OPCI node with a distributed control mechanism. Moreover, our OPCI node works with a centralized control mechanism as well as a distributed one. It is therefore possible to realize SDN-based OPCI networks, where resource requests and centralized configuration are carried out. In this paper, we show our SDN architecture for an OPS system that configures mapping tables between IP addresses and optical packet addresses, as well as switching tables, according to requests from multiple users via a web interface, while the OpenFlow-based centralized control protocol is coming into widespread use, especially for single-administrative, small-area (LAN/data-center) networks. We also show an interworking mechanism between OpenFlow-based networks (OFNs) and the OPCI network for constructing a wide-area network, and a control method of wavelength resource selection to automatically transfer diversified flows from OFNs to the OPCI network.
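
    The table-configuration step described above can be sketched in a deliberately simplified, hypothetical form (the addresses, table layout and request fields are invented, not the authors' controller): a controller installs an IP-to-optical-packet-address mapping plus a switching entry for each user request.

      ip_to_opa = {}          # IP address -> optical packet address
      switch_table = {}       # optical packet address -> output port

      def handle_user_request(ip_addr, optical_packet_addr, out_port):
          """Install the entries a web-interface request would carry."""
          ip_to_opa[ip_addr] = optical_packet_addr
          switch_table[optical_packet_addr] = out_port

      handle_user_request("10.0.0.5", "OPA-17", out_port=3)
      print(ip_to_opa, switch_table)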

  14. Integrating silicon photonic interconnects with CMOS: Fabrication to architecture

    NASA Astrophysics Data System (ADS)

    Sherwood, Nicholas Ramsey

    While for many years the goal of microelectronics was to speed up our daily tasks, the focus of today's technological developments is heavily centered on electronic media. Anyone can share their thoughts as text, sound, images or full videos; they can even make phone calls and download full movies on their computers, tablets and phones. The impact of this upsurge in bandwidth falls directly on the infrastructure that carries this data. Long-distance telecom lines were long ago replaced by optical fibers; now shorter and shorter distance connections have moved to optical transmission to keep up with the bandwidth requirements. Yet the microprocessors that make up the switching nodes as well as the endpoints are not only stagnant in terms of processing speed, but also unlikely to continue Moore's transistor-doubling trend for much longer. Silicon photonics stands to make a technical leap in microprocessor technology by allowing monolithic communication speeds between arbitrarily spaced processing elements. The improvement in on-chip communication could reduce power and enable new improvements in this field. This work explores a few aspects involved in making such a leap practical in real life. The first part of the thesis develops process techniques and materials to make silicon photonics truly compatible with CMOS electronics, for two different stack layouts, including a glimpse into multilayered photonics. Following this is an evaluation of the limitations of integrated devices and a post-fabrication stabilizing solution using thermal index shifting. In the last parts we explore higher-level device design and architecture on the SOI platform.

  15. Increasing the Practical Impact of Formal Methods for Computer-Aided Software Development: Software Slicing, Merging and Integration

    DTIC Science & Technology

    1993-10-15

    Fragments: ...transformational development of software (e.g., in KIDS [Smith 90]). We obtain an integrated view of software development and evolution by considering what is... In CADE 6 (1982), Lecture Notes in Computer Science, Vol. 138, Springer-Verlag, pp. 172-193. [Smith 90] Smith, D. R. KIDS: A semiautomatic program...

  16. Integrated Software Health Management for Aircraft GN and C

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Mengshoel, Ole

    2011-01-01

    Modern aircraft rely heavily on dependable operation of many safety-critical software components. Despite careful design, verification and validation (V&V), on-board software can fail with disastrous consequences if it encounters problematic software/hardware interaction or must operate in an unexpected environment. We are using a Bayesian approach to monitor the software and its behavior during operation and provide up-to-date information about the health of the software and its components. The powerful reasoning mechanism provided by our model-based Bayesian approach makes reliable diagnosis of the root causes possible and minimizes the number of false alarms. Compilation of the Bayesian model into compact arithmetic circuits makes software health management (SWHM) feasible even on platforms with limited CPU power. We show initial results of SWHM on a small simulator of an embedded aircraft software system, where software and sensor faults can be injected.
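
    The core of the Bayesian monitoring idea can be conveyed with a toy posterior computation. The priors, likelihoods and fault names below are invented for illustration and are not the authors' model, which uses a full Bayesian network compiled into arithmetic circuits:

      PRIOR = {"software_fault": 0.01, "sensor_fault": 0.02, "nominal": 0.97}
      LIKELIHOOD = {                    # P(observing a "stale data" symptom | cause)
          "software_fault": 0.7,
          "sensor_fault": 0.9,
          "nominal": 0.001,
      }

      def posterior(symptom_observed=True):
          """Bayes rule over the candidate root causes of the observed symptom."""
          unnorm = {c: PRIOR[c] * (LIKELIHOOD[c] if symptom_observed else 1.0 - LIKELIHOOD[c])
                    for c in PRIOR}
          z = sum(unnorm.values())
          return {c: p / z for c, p in unnorm.items()}

      for cause, p in sorted(posterior().items(), key=lambda kv: -kv[1]):
          print(f"P({cause} | stale data) = {p:.3f}")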

  17. Performance evaluation of multi-stratum resources integration based on network function virtualization in software defined elastic data center optical interconnect.

    PubMed

    Yang, Hui; Zhang, Jie; Ji, Yuefeng; Tian, Rui; Han, Jianrui; Lee, Young

    2015-11-30

    Data center interconnect with elastic optical network is a promising scenario to meet the high burstiness and high-bandwidth requirements of data center services. In our previous work, we implemented multi-stratum resilience between IP and elastic optical networks, which allows data center services to be accommodated. This study extends that work to consider resource integration across the limits of individual network devices, which can enhance resource utilization. We propose a novel multi-stratum resources integration (MSRI) architecture based on network function virtualization in software defined elastic data center optical interconnect. A resource integrated mapping (RIM) scheme for MSRI is introduced in the proposed architecture. The MSRI can accommodate data center services with resource integration when a single function or resource is too scarce to provision the services, and enhances globally integrated optimization of optical network and application resources. The overall feasibility and efficiency of the proposed architecture are experimentally verified on the control plane of an OpenFlow-based enhanced software defined networking (eSDN) testbed. The performance of the RIM scheme under a heavy traffic load scenario is also quantitatively evaluated based on the MSRI architecture in terms of path blocking probability, provisioning latency and resource utilization, compared with other provisioning schemes.

  18. Integrated Functional and Executional Modelling of Software Using Web-Based Databases

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Marietta, Roberta

    1998-01-01

    NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and presents a solution based on integrated modelling of software, the use of automatic information extraction tools, web technology and databases.

  19. Integrated Functional and Executional Modelling of Software Using Web-Based Databases

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Marietta, Roberta

    1998-01-01

    NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and presents a solution based on integrated modelling of software, the use of automatic information extraction tools, web technology and databases. To appear in the Journal of Database Management.

  20. QUASAR: A Method for the Quality Assessment of Software-Intensive System Architectures

    DTIC Science & Technology

    2006-07-01

    Excerpts: ...architecture assessments are one way for engineering personnel in the PMO to obtain that visibility and leverage. Provide acquisition oversight of... architectures. To obtain an objective assessment, it is important that the assessment team is...

  1. An Approach for Detecting Inconsistencies between Behavioral Models of the Software Architecture and the Code

    SciTech Connect

    Ciraci, Selim; Sozer, Hasan; Tekinerdogan, Bedir

    2012-07-16

    In practice, inconsistencies between architectural documentation and the code might arise due to improper implementation of the architecture or the separate, uncontrolled evolution of the code. Several approaches have been proposed to detect inconsistencies between the architecture and the code, but these tend to be limited in capturing inconsistencies that might occur at runtime. We present a runtime verification approach for detecting inconsistencies between the dynamic behavior of the architecture and the actual code. The approach is supported by a set of tools that implement the architecture and the code patterns in Prolog, and support the automatic generation of runtime monitors for detecting inconsistencies. We illustrate the approach and the toolset for a Crisis Management System case study.
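
    A generated runtime monitor of the kind described above essentially checks observed events against the transitions allowed by the architectural behavioral model. The following is a hedged sketch of that idea only; the states and actions are invented, and the authors' tooling is Prolog-based rather than Python:

      ALLOWED = {                                      # architectural behavioral model
          ("Idle", "detect"):     "Assess",
          ("Assess", "dispatch"): "Respond",
          ("Respond", "close"):   "Idle",
      }

      class Monitor:
          def __init__(self, start="Idle"):
              self.state = start

          def observe(self, action):
              key = (self.state, action)
              if key not in ALLOWED:                   # the code deviated from the model
                  raise RuntimeError(f"inconsistency: '{action}' not allowed in state {self.state}")
              self.state = ALLOWED[key]

      m = Monitor()
      for action in ["detect", "dispatch", "close"]:
          m.observe(action)                            # conforms to the model
      try:
          m.observe("dispatch")                        # violates the model from state Idle
      except RuntimeError as err:
          print(err)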

  2. GLUE!: An Architecture for the Integration of External Tools in Virtual Learning Environments

    ERIC Educational Resources Information Center

    Alario-Hoyos, Carlos; Bote-Lorenzo, Miguel L.; Gomez-Sanchez, Eduardo; Asensio-Perez, Juan I.; Vega-Gorgojo, Guillermo; Ruiz-Calleja, Adolfo

    2013-01-01

    The integration of external tools in Virtual Learning Environments (VLEs) aims at enriching the learning activities that educational practitioners may design and enact. This paper presents GLUE!, an architecture that enables the lightweight integration of multiple existing external tools in multiple existing VLEs. GLUE! fosters this integration by…

  4. Network architectures and protocols for the integration of ACTS and ISDN

    NASA Technical Reports Server (NTRS)

    Chitre, D. M.; Lowry, P. A.

    1992-01-01

    A close integration of satellite networks and the integrated services digital network (ISDN) is essential for satellite networks to carry ISDN traffic effectively. This paper shows how a given (pre-ISDN) satellite network architecture can be enhanced to handle ISDN signaling and provide ISDN services. It also describes the functional architecture and high-level protocols that could be implemented in the NASA Advanced Communications Technology Satellite (ACTS) low burst rate communications system to provide ISDN services.

  5. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  6. MIDCA: A Metacognitive, Integrated Dual-Cycle Architecture for Self-Regulated Autonomy

    DTIC Science & Technology

    2013-09-01

    ONR Award #N000141210172; period of performance: 1 June 2012 through 31 May 2013. MIDCA: A Metacognitive, Integrated Dual-Cycle Architecture for Self-Regulated Autonomy. The project addresses metacognition in cognitive architectures and aims to demonstrate the underlying theory through implemented computational models. During the last year, the...

  7. A neuron-inspired computational architecture for spatiotemporal visual processing: real-time visual sensory integration for humanoid robots.

    PubMed

    Holzbach, Andreas; Cheng, Gordon

    2014-06-01

    In this article, we present a neurologically motivated computational architecture for visual information processing. The computational architecture's focus lies in multiple strategies: hierarchical processing, parallel and concurrent processing, and modularity. The architecture is modular and expandable in both hardware and software, so that it can also cope with multisensory integrations - making it an ideal tool for validating and applying computational neuroscience models in real time under real-world conditions. We apply our architecture in real time to validate a long-standing biologically inspired visual object recognition model, HMAX. In this context, the overall aim is to supply a humanoid robot with the ability to perceive and understand its environment with a focus on the active aspect of real-time spatiotemporal visual processing. We show that our approach is capable of simulating information processing in the visual cortex in real time and that our entropy-adaptive modification of HMAX has a higher efficiency and classification performance than the standard model (up to ∼+6%).

  8. Development of a modular integrated control architecture for flexible manipulators. Final report

    SciTech Connect

    Burks, B.L.; Battiston, G.

    1994-12-08

    In April 1994, ORNL and SPAR completed the joint development of a manipulator controls architecture for flexible structure controls under a CRADA between the two organizations. The CRADA project entailed design and development of a new architecture based upon the Modular Integrated Control Architecture (MICA) previously developed by ORNL. The new architecture, dubbed MICA-II, uses an object-oriented coding philosophy to provide a highly modular and expandable architecture for robotic manipulator control. This architecture can be readily ported to control of many different manipulator systems. The controller also provides a user friendly graphical operator interface and display of many forms of data including system diagnostics. The capabilities of MICA-II were demonstrated during oscillation damping experiments using the Flexible Beam Experimental Test Bed at Hanford.

  9. Integration and validation of a data grid software

    NASA Astrophysics Data System (ADS)

    Carenton-Madiec, Nicolas; Berger, Katharina; Cofino, Antonio

    2014-05-01

    The Earth System Grid Federation (ESGF) Peer-to-Peer (P2P) system is a software infrastructure for the management, dissemination, and analysis of model output and observational data. The ESGF grid is composed of several types of nodes with different roles. About 40 data nodes host model outputs and datasets using THREDDS catalogs. About 25 compute nodes offer remote visualization and analysis tools. About 15 index nodes crawl data node catalogs and implement faceted and federated search in a web interface. About 15 identity provider nodes manage accounts, authentication and authorization. Here we present a full-scale test federation spread across different institutes in different countries and a Python test suite, both started in December 2013. The first objective of the test suite is to provide a simple tool that helps to test and validate a single data node and its closest index, compute and identity provider peers. The next objective will be to run this test suite on every data node of the federation and therefore test and validate every single node of the whole federation. The suite already uses the nosetests, requests, myproxy-logon, subprocess, selenium and fabric Python libraries in order to test web front ends, back ends and security services. The goal of this project is to improve the quality of deliverables in the context of a small developer team. Developers are widely spread around the world, working collaboratively and without hierarchy. This working context highlighted the need for a federated integration test and validation process.
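
    As a hedged illustration of the kind of check such a suite performs (not the actual ESGF test code; the host name is hypothetical and the catalog path is the conventional THREDDS location), a data node's catalog endpoint can be probed with the requests library:

      import requests

      def check_thredds_catalog(host):
          """Verify that a data node's THREDDS catalog answers over HTTPS."""
          url = f"https://{host}/thredds/catalog.xml"          # conventional THREDDS path
          resp = requests.get(url, timeout=30)
          assert resp.status_code == 200, f"{url} returned {resp.status_code}"
          assert b"catalog" in resp.content, "response does not look like a THREDDS catalog"

      check_thredds_catalog("esgf-data.example.org")           # hypothetical data node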

  10. Experiences with Integrating Simulation into a Software Engineering Curriculum

    ERIC Educational Resources Information Center

    Bollin, Andreas; Hochmuller, Elke; Mittermeir, Roland; Samuelis, Ladislav

    2012-01-01

    Software Engineering education must account for a broad spectrum of knowledge and skills software engineers will be required to apply throughout their professional life. Covering all the topics in depth within a university setting is infeasible due to curricular constraints as well as due to the inherent differences between educational…

  11. Integrated Measurement and Analysis Framework for Software Security

    DTIC Science & Technology

    2010-09-01

    Excerpts: Open Web Application Security Project (OWASP) Software Assurance Maturity Model (SAMM): http://www.owasp.org/index.php... ...for software security include the following, taken from the OWASP SAMM, version 1.0 [OpenSAMM 2009]: >90 percent of applications and data assets...

  13. Project Integration Architecture: Distributed Lock Management, Deadlock Detection, and Set Iteration

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The migration of the Project Integration Architecture (PIA) to the distributed object environment of the Common Object Request Broker Architecture (CORBA) brings with it the nearly unavoidable requirements of multiaccessor, asynchronous operations. In order to maintain the integrity of data structures in such an environment, it is necessary to provide a locking mechanism capable of protecting the complex operations typical of the PIA architecture. This paper reports on the implementation of a locking mechanism to treat that need. Additionally, the ancillary features necessary to make the distributed lock mechanism work are discussed.

  14. Fault tolerant architectures for integrated aircraft electronics systems, task 2

    NASA Technical Reports Server (NTRS)

    Levitt, K. N.; Melliar-Smith, P. M.; Schwartz, R. L.

    1984-01-01

    The architectural basis for an advanced fault tolerant on-board computer to succeed the current generation of fault tolerant computers is examined. The network error tolerant system architecture is studied with particular attention to intercluster configurations and communication protocols, and to refined reliability estimates. The diagnosis of faults, so that appropriate choices for reconfiguration can be made, is discussed. The analysis relates particularly to the recognition of transient faults in a system with tasks at many levels of priority. The demand-driven data-flow architecture, which appears to have possible application in fault tolerant systems, is described, and work investigating the feasibility of automatic generation of aircraft flight control programs from abstract specifications is reported.

  15. Generic Vehicle Architecture for the integration and sharing of in-vehicle and extra-vehicle sensors

    NASA Astrophysics Data System (ADS)

    Bergamaschi, Flavio; Conway-Jones, Dave; Peach, Nicholas

    2010-04-01

    In this paper we present a Generic Vehicle Architecture (GVA), developed as part of the UK MOD GVA programme, that addresses the issues of dynamic platform re-role through modular capability integration and behaviour orchestration. The proposed architecture addresses the need for: a) easy integration with legacy and future systems and architectures; b) scalability from individual sensors, individual human users, vehicles and patrols to battle groups and brigades; c) rapid introduction of new capabilities in response to a changing operational scenario; d) independence from communications systems, devices, operating systems and computer platforms. The GVA leverages research output and tools developed by the International Technology Alliance (ITA) in Network and Information Science programme [1], in particular the ITA Sensor Fabric [2-4], developed to address the challenges of sensor identification, classification, interoperability, and sensor data sharing, dissemination and consumability commonly present in tactical ISR/ISTAR [5], and the Gaian Dynamic Distributed Federated Database (DDFD) [6-8], developed to address the challenges of accessing distributed sources of data in an ad-hoc environment where consumers do not know the location of the data within the network. The GVA also promotes the use of off-the-shelf hardware and software, which is advantageous in terms of ease of upgrading, lower cost of support and replacement, and speed of re-deploying platforms through a "fitted for but not with" approach. The GVA exploits the service oriented architecture (SOA) environment provided by the ITA Sensor Fabric to enhance the capability of legacy solutions and applications by enabling information exchange between them, for example by providing direct near real-time communication between legacy systems. A prototype implementation demonstrator of this architecture has demonstrated its utility for fusing, exploiting and sharing…
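
    The Fabric-based sensor sharing described above is, at its core, a publish/subscribe pattern. The following minimal in-process sketch illustrates that pattern only; the real ITA Sensor Fabric is a distributed, transport-agnostic middleware, and none of these class or method names come from its API.

    ```python
    # Minimal in-process publish/subscribe bus for sensor readings (illustrative).
    from collections import defaultdict
    from typing import Callable, Dict, List

    class SensorBus:
        def __init__(self) -> None:
            self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
            """Register a consumer for a sensor topic (e.g. 'vehicle/camera')."""
            self._subscribers[topic].append(handler)

        def publish(self, topic: str, reading: dict) -> None:
            """Deliver a reading to every consumer subscribed to the topic."""
            for handler in self._subscribers[topic]:
                handler(reading)

    bus = SensorBus()
    bus.subscribe("vehicle/camera", lambda r: print("frame at", r["timestamp"]))
    bus.publish("vehicle/camera", {"timestamp": "2010-04-01T12:00:00Z", "frame_id": 42})
    ```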

  16. PICNIC Architecture.

    PubMed

    Saranummi, Niilo

    2005-01-01

    The PICNIC architecture aims at supporting inter-enterprise integration and the facilitation of collaboration between healthcare organisations. The concept of a Regional Health Economy (RHE) is introduced to illustrate the varying nature of inter-enterprise collaboration between healthcare organisations collaborating in providing health services to citizens and patients in a regional setting. The PICNIC architecture comprises a number of PICNIC IT Services and the interfaces between them, and presents a way to assemble these into a functioning Regional Health Care Network meeting the needs and concerns of its stakeholders. The PICNIC architecture is presented through a number of views relevant to different stakeholder groups. The stakeholders of the first view are national and regional health authorities and policy makers. The view describes how the architecture enables the implementation of national and regional health policies, strategies and organisational structures. The stakeholders of the second view, the service viewpoint, are the care providers, health professionals, patients and citizens. The view describes how the architecture supports and enables regional care delivery and process management including continuity of care (shared care) and citizen-centred health services. The stakeholders of the third view, the engineering view, are those that design, build and implement the RHCN. The view comprises four sub views: software engineering, IT services engineering, security and data. The proposed architecture is grounded in the mainstream evolution of distributed computing environments. The architecture is realised using the web services approach. A number of well-established technology platforms and generic standards exist that can be used to implement the software components. The software components specified in PICNIC are implemented as Open Source.

  17. Computational architecture for integrated controls and structures design

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith; Park, K. C.

    1989-01-01

    To facilitate the development of control structure interaction (CSI) design methodology, a computational architecture for interdisciplinary design of active structures is presented. The emphasis of the computational procedure is to exploit existing sparse matrix structural analysis techniques, in-core data transfer with control synthesis programs, and versatility in the optimization methodology to avoid unnecessary structural or control calculations. The architecture is designed such that all required structure, control and optimization analyses are performed within one program. Hence, the optimization strategy is not unduly constrained by cold starts of existing structural analysis and control synthesis packages.
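
    As a rough illustration of the in-core coupling idea, the toy sketch below keeps a (pretend) structural analysis, control synthesis and optimizer in one process, caching analysis results so that unchanged design variables do not trigger redundant calculations. All function names and formulas are invented for illustration and are not taken from the paper.

    ```python
    # Toy illustration of keeping structural analysis, control synthesis, and
    # optimization in one process with in-memory data exchange and caching,
    # so unchanged design variables do not trigger redundant analyses.
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def structural_analysis(stiffness: float) -> float:
        """Pretend modal analysis: return a first natural frequency."""
        return (stiffness / 10.0) ** 0.5

    @lru_cache(maxsize=None)
    def control_synthesis(frequency: float, gain: float) -> float:
        """Pretend control cost: penalize low bandwidth and high gain."""
        return 1.0 / frequency + 0.05 * gain ** 2

    def combined_objective(stiffness: float, gain: float) -> float:
        """Structural mass proxy plus control cost, evaluated in-core."""
        freq = structural_analysis(stiffness)      # reused when stiffness repeats
        return 0.01 * stiffness + control_synthesis(freq, gain)

    # Crude grid search standing in for the optimizer.
    best = min(((s, g) for s in range(10, 101, 10) for g in range(1, 11)),
               key=lambda sg: combined_objective(float(sg[0]), float(sg[1])))
    print("best stiffness/gain:", best)
    ```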

  18. An Integration Architecture of Virtual Campuses with External e-Learning Tools

    ERIC Educational Resources Information Center

    Navarro, Antonio; Cigarran, Juan; Huertas, Francisco; Rodriguez-Artacho, Miguel; Cogolludo, Alberto

    2014-01-01

    Technology enhanced learning relies on a variety of software architectures and platforms to provide different kinds of management service and enhanced instructional interaction. As e-learning support has become more complex, there is a need for virtual campuses that combine learning management systems with the services demanded by educational…

  20. Opening up Architectures of Software-Intensive Systems: A Functional Decomposition to Support System Comprehension

    DTIC Science & Technology

    2007-10-01

    as on a state-of-the-art survey on system architecture recovery and comprehension. Following the conception of this functional decomposition, a...consideration the findings contained in a state-of-the-art survey on system architecture recovery and comprehension that was carried out as a previous phase...constituent parts. Synthesis: provides the capability to combine several facts that have been extracted to form a new whole at a level of abstraction

  1. An analysis of integrated science and language arts themes in software at the elementary school level

    NASA Astrophysics Data System (ADS)

    Libidinsky, Lisa Jill

    2002-09-01

    There are many demands on the elementary classroom teacher today, and teachers often do not have the time and resources to instruct in a meaningful manner that produces effective, real learning. Subjects are often taught in a disjointed way and lose their significance. When teachers instruct using an integrated approach, students learn more efficiently because they see connections between the subjects. Science and language arts, when combined in an integrated approach, show positive associations that can help students make real-life connections. In addition, with the onset of technology and the increased use of technological programs in schools, teachers can use technology to support an integrated curriculum. When teachers use a combined instructional focus of science, language arts, and technology to produce lessons, students are able to gain the knowledge of concepts and skills necessary for appropriate academic growth and development. Given that many software programs are available to teachers for classroom use, it is imperative that quality software is used for instruction. Using criteria based upon an intensive literature review of integrated instruction in the areas of science and language arts, this study examines science and language arts software programs to determine whether integrated science and language arts themes are present in the software analyzed. The study also examines whether more integrated themes are present in science or in language arts software programs. Overall, this study finds a significant difference between language arts software and science software with respect to integrated themes: science software exhibits integrated themes with language arts more often than language arts software does with science. The findings in this study can serve as a reference point for educators when selecting software that is meaningful and effective in the elementary classroom. Based on this study, it is…

  2. Data Centric Integration and Analysis of Information Technology Architectures

    DTIC Science & Technology

    2007-09-01

    Acronym-list excerpt: TEMP, Test & Evaluation Master Plan; TTV, Target Technical View; TV, Technical Standards View; UJTL, Universal...and how to apply the Target Technical View (TTV), in which the architect identifies current and emerging standards and technologies considered

  3. The Integration of Interior Architecture Education with Digital Design Approaches

    ERIC Educational Resources Information Center

    Yazicioglu, Deniz Ayse

    2011-01-01

    It is inevitable that as a result of progress in technology and the changes in the ways with which design is conceived, interior architecture schools should be updated according to these requirements and that new educational processes should be tried out. It is for this reason that the scope and aim of this study have been determined as being the…

  4. Integrated network architecture for sustained human and robotic exploration

    NASA Technical Reports Server (NTRS)

    Noreen, Gary K.; Cesarone, Robert; Deutsch, Leslie; Edwards, Charlie; Soloff, Jason; Ely, Todd; Cook, Brian; Morabito, David; Hemmati, Hamid; Piazzolla, Sabino

    2005-01-01

    The National Aeronautics and Space Administration (NASA) Exploration Systems Mission Directorate is planning a series of human and robotic missions to the Earth's moon and to Mars. These missions will require telecommunication and navigation services. This paper sets forth presumed requirements for such services and presents strawman lunar and Mars telecommunications network architectures to satisfy the presumed requirements.

  5. Human Symbol Manipulation within an Integrated Cognitive Architecture

    ERIC Educational Resources Information Center

    Anderson, John R.

    2005-01-01

    This article describes the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture (Anderson et al., 2004; Anderson & Lebiere, 1998) and its detailed application to the learning of algebraic symbol manipulation. The theory is applied to modeling the data from a study by Qin, Anderson, Silk, Stenger, & Carter (2004) in which children…

  7. Integrated network architecture for sustained human and robotic exploration

    NASA Technical Reports Server (NTRS)

    Noreen, Gary K.; Cesarone, Robert; Deutsch, Leslie; Edwards, Charlie; Soloff, Jason; Ely, Todd; Cook, Brian; Morabito, David; Hemmati, Hamid; Piazzolla, Sabino; Hastrup, Rolf; Abraham, Douglas

    2005-01-01

    The National Aeronautics and Space Administration (NASA) Exploration Systems Mission Directorate is planning a series of human and robotic missions to the Earth's moon and to Mars. These missions will require telecommunication and navigation services. This paper sets forth presumed requirements for such services and presents strawman lunar and Mars telecommunications network architectures to satisfy the presumed requirements.

  8. Software Tool Integrating Data Flow Diagrams and Petri Nets

    NASA Technical Reports Server (NTRS)

    Thronesbery, Carroll; Tavana, Madjid

    2010-01-01

    Data Flow Diagram - Petri Net (DFPN) is a software tool for analyzing other software to be developed. The full name of this program reflects its design, which combines the benefit of data-flow diagrams (which are typically favored by software analysts) with the power and precision of Petri-net models, without requiring specialized Petri-net training. (A Petri net is a particular type of directed graph, a description of which would exceed the scope of this article.) DFPN assists a software analyst in drawing and specifying a data-flow diagram, then translates the diagram into a Petri net, then enables graphical tracing of execution paths through the Petri net for verification, by the end user, of the properties of the software to be developed. In comparison with prior means of verifying the properties of software to be developed, DFPN makes verification by the end user more nearly certain, thereby making it easier to identify and correct misconceptions earlier in the development process, when correction is less expensive. After the verification by the end user, DFPN generates a printable system specification in the form of descriptions of processes and data.
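
    For readers unfamiliar with Petri nets, the sketch below shows the basic execution rule such tracing relies on: a transition is enabled when every input place holds a token, and firing it moves tokens from input places to output places. This is a generic, hypothetical illustration, not DFPN's internal representation.

    ```python
    # Minimal Petri-net execution sketch: places hold token counts, a transition
    # fires only when all of its input places have at least one token.

    class PetriNet:
        def __init__(self, marking):
            self.marking = dict(marking)            # place -> token count
            self.transitions = {}                   # name -> (inputs, outputs)

        def add_transition(self, name, inputs, outputs):
            self.transitions[name] = (inputs, outputs)

        def enabled(self, name):
            inputs, _ = self.transitions[name]
            return all(self.marking.get(p, 0) >= 1 for p in inputs)

        def fire(self, name):
            if not self.enabled(name):
                raise RuntimeError(f"transition {name!r} is not enabled")
            inputs, outputs = self.transitions[name]
            for p in inputs:
                self.marking[p] -= 1
            for p in outputs:
                self.marking[p] = self.marking.get(p, 0) + 1

    # Example: a request token flows from 'queued' to 'done' via 'process'.
    net = PetriNet({"queued": 1, "done": 0})
    net.add_transition("process", inputs=["queued"], outputs=["done"])
    net.fire("process")
    print(net.marking)   # {'queued': 0, 'done': 1}
    ```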

  9. Future Integrated Architecture (FIA): A Proposed Space Internetworking Architecture for Future Operations

    DTIC Science & Technology

    2008-09-01

    to share the information they need, when they need it, in a form they can understand and act on with confidence [JCS, 2005].” A specific definition of...architecture and primary node for all DSN networks. The Moon and Mars will act as critical nodes and will support Local Area Networks (LAN) with cross-links...communications satellite acting as a point of presence in space. In this example, the IEEE 802.16 standard dynamically connects the sister satellite

  10. Progress on Ultra-Dense Quantum Communication Using Integrated Photonic Architecture

    DTIC Science & Technology

    2012-05-09

    Keywords: quantum information, integrated optics, photonic integrated chip. Authors: Dirk Englund, Karl Berggren, Jeffrey Shapiro, Chee Wei Wong, Franco Wong, and Gregory Wornell (dated May 9, 2012). Contents include an ultrahigh-flux entangled photon source, time-energy entanglement, d-dimensional QKD, and a waveguide-integrated SNSPD.

  11. An e-consent-based shared EHR system architecture for integrated healthcare networks.

    PubMed

    Bergmann, Joachim; Bott, Oliver J; Pretschner, Dietrich P; Haux, Reinhold

    2007-01-01

    Virtual integration of distributed patient data promises advantages over a consolidated health record, but raises questions mainly about practicability and authorization concepts. Our work aims at the specification and development of a virtual shared health record architecture using a patient-centred integration and authorization model. A literature survey summarizes considerations of current architectural approaches. Complemented by a methodical analysis in two regional settings, a formal architecture model was specified and implemented. Results presented in this paper are a survey of architectural approaches for shared health records and an architecture model for a virtual shared EHR, which combines a patient-centred integration policy with provider-oriented document management. An electronic consent system ensures that access to the shared record remains under the patient's control. A corresponding system prototype has been developed and is currently being introduced and evaluated in a regional setting. The proposed architecture is capable of partly replacing message-based communications. Operating highly available provider repositories for the virtual shared EHR requires advanced technology and probably means additional costs for care providers. Acceptance of the proposed architecture depends on transparently embedding document validation and digital signature into the work processes. The paradigm shift from paper-based messaging to a "pull model" needs further evaluation.
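
    The patient-centred authorization idea can be illustrated with a simple consent check placed in front of the shared record: a requesting provider only retrieves documents for which the patient has recorded consent. The sketch below is hypothetical and does not reproduce the paper's actual model or interfaces.

    ```python
    # Sketch of a patient-controlled consent check in front of a shared record
    # (illustrative only; names and structures are invented for this example).
    from dataclasses import dataclass, field

    @dataclass
    class ConsentRegistry:
        # (patient_id, provider_id, document_category) tuples the patient has granted
        grants: set = field(default_factory=set)

        def grant(self, patient, provider, category):
            self.grants.add((patient, provider, category))

        def revoke(self, patient, provider, category):
            self.grants.discard((patient, provider, category))

        def permits(self, patient, provider, category):
            return (patient, provider, category) in self.grants

    def fetch_document(registry, repository, patient, provider, category, doc_id):
        """Pull model: the provider only sees documents the patient consented to."""
        if not registry.permits(patient, provider, category):
            raise PermissionError("no e-consent on file for this provider/category")
        return repository[(patient, category, doc_id)]

    registry = ConsentRegistry()
    registry.grant("patient-1", "gp-clinic", "lab-results")
    repo = {("patient-1", "lab-results", "doc-9"): "HbA1c: 5.4%"}
    print(fetch_document(registry, repo, "patient-1", "gp-clinic", "lab-results", "doc-9"))
    ```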

  12. Software Integration in Multi-scale Simulations: the PUPIL System

    NASA Astrophysics Data System (ADS)

    Torras, J.; Deumens, E.; Trickey, S. B.

    2006-10-01

    The state of the art for computational tools in both computational chemistry and computational materials physics includes many algorithms and functionalities which are implemented again and again. Several projects aim to reduce, eliminate, or avoid this problem. Most such efforts seem to be focused within a particular specialty, either quantum chemistry or materials physics. Multi-scale simulations, by their very nature, however, cannot respect that specialization. In simulation of fracture, for example, the energy gradients that drive the molecular dynamics (MD) come from a quantum mechanical treatment that most often derives from quantum chemistry. That “QM” region is linked to a surrounding “CM” region in which potentials yield the forces. The approach therefore requires the integration or at least inter-operation of quantum chemistry and materials physics algorithms. The same problem occurs in “QM/MM” simulations in computational biology. The challenge grows if pattern recognition or other analysis codes of some kind must be used as well. The most common mode of inter-operation is user intervention: codes are modified as needed and data files are managed “by hand” by the user (interactively and via shell scripts). User intervention is, however, inefficient by nature, difficult to transfer to the community, and prone to error. Some progress (e.g., Sethna’s work at Cornell [C.R. Myers et al., Mat. Res. Soc. Symp. Proc., 538 (1999) 509; C.-S. Chen et al., poster presented at the Material Research Society Meeting (2000)]) has been made on using Python scripts to achieve a more efficient level of interoperation. In this communication we present an alternative approach to merging current working packages without the necessity of major recoding and with only a relatively light wrapper interface. The scheme supports communication among the different components required for a given multi-scale calculation and access to the functionalities of those components…

  13. The Rational Unified Process and the Capability Maturity Model - Integrated Systems/Software Engineering

    DTIC Science & Technology

    2001-01-01

    Excerpt consists of title-page and report-documentation fragments: "The Rational Unified Process and the Capability Maturity Model - Integrated Systems/Software Engineering," a tutorial copyright 2001 by Carnegie Mellon University; CMMI and CMM Integration are service marks, and Capability Maturity Model is a registered trademark, of Carnegie Mellon University.

  14. The IXV Avionics & Software Architecture for the On-Board Management of the Autonomous Re-Entry Vehicle

    NASA Astrophysics Data System (ADS)

    Malocchi, Giovanni; Angelini, Roberto; Dussy, Stephane

    2016-08-01

    This paper focuses on the Intermediate eXperimental Vehicle (IXV), the first European glider to successfully perform an autonomous atmospheric re-entry from a suborbital LEO trajectory. An introduction to the mission objectives is provided, describing the selected trajectory envelope and the main spacecraft features. The core of the paper is the presentation of the Avionics and Software architecture, analyzed through its constituent subsystems and equipment, from the perspective of the mission's autonomy needs and constraints. The launch campaign activities involving the IXV Avionics and Software are presented, including the specific flight preparation tasks. Finally, the paper provides some highlights of the main mission results based on the interpretation of the data received via telemetry and retrieved from the flight recorders. The paper then gives a preliminary outline of the IXV follow-on, introducing the objectives of the Innovative Space Vehicle and the necessary improvements, to be developed in the frame of ISV-PRIDE.

  15. Integrated Sensor Architecture (ISA) for Live Virtual Constructive (LVC) Environments

    DTIC Science & Technology

    2014-03-01

    of information between sensors and systems in a dynamic tactical environment. The ISA created a Service Oriented Architecture (SOA) that identifies...interoperability were defined and implemented, and these levels were tested at many events. Extensible data models and capabilities that are scalable...uniformly considered essential in DoD programs, they are rarely implemented while a system is being designed. Adding such measures afterwards can be a

  16. Ultra-Dense Quantum Communication Using Integrated Photonic Architecture

    DTIC Science & Technology

    2012-02-03

    last month. Membrane fabrication: We intend to fabricate membrane-SNSPDs by employing a wet etch process (TMAH) that selectively etches the...Nanowire fabrication. Figure 8: Current membrane under-cut process. Figure 9: Etch stop features. this issue by patterning NbN-free regions...Gaussian, or Laguerre-Gaussian modes are employed by the transmitter. We are also beginning to analyze the architecture needed for the channel tracking

  17. Fast packet switch architectures for broadband integrated services digital networks

    NASA Technical Reports Server (NTRS)

    Tobagi, Fouad A.

    1990-01-01

    Background information on networking and switching is provided, and the various architectures that have been considered for fast packet switches are described. The focus is solely on switches designed to be implemented electronically. A set of definitions and a brief description of the functionality required of fast packet switches are given. Three basic types of packet switches are identified: the shared-memory, shared-medium, and space-division types. Each of these is described, and examples are given.
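
    To make the shared-memory type concrete, the toy model below lets all output queues draw packet buffers from one common pool, so a heavily loaded port can use memory that idle ports do not need, and a packet is dropped only when the shared pool is exhausted. This is an illustrative sketch, not a design taken from the paper.

    ```python
    # Toy model of a shared-memory packet switch: every output queue draws its
    # buffer cells from a single pool shared by all ports.
    from collections import deque

    class SharedMemorySwitch:
        def __init__(self, num_ports, buffer_cells):
            self.queues = [deque() for _ in range(num_ports)]
            self.free_cells = buffer_cells          # shared across all output queues

        def enqueue(self, out_port, packet):
            if self.free_cells == 0:
                return False                        # shared buffer exhausted: drop
            self.free_cells -= 1
            self.queues[out_port].append(packet)
            return True

        def time_slot(self):
            """Each output port transmits at most one packet per slot."""
            sent = []
            for port, q in enumerate(self.queues):
                if q:
                    sent.append((port, q.popleft()))
                    self.free_cells += 1
            return sent

    sw = SharedMemorySwitch(num_ports=4, buffer_cells=8)
    sw.enqueue(2, "pkt-A")
    sw.enqueue(2, "pkt-B")
    print(sw.time_slot())   # [(2, 'pkt-A')]
    ```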

  18. Autonomy Software Architecture for LORAX (Life On ice Robotic Antarctic eXplorer)

    NASA Technical Reports Server (NTRS)

    Jonsson, Ari; McGann, Conor; Pedersen, Liam; Iatauro, Michael; Rajagopalan, Srikanth

    2005-01-01

    LORAX is a robotic astrobiological study of the ice field surrounding the Carapace Nunatak near the Allan Hills in Antarctica. The study culminates in a 100 km traverse, sampling the ice at various depths (from the surface to 10 cm) at over 100 sites to survey microbial ecology and to record environmental parameters. The autonomy requirements of LORAX are shared by many robotic exploration tasks. Consequently, the LORAX autonomy architecture is a general architecture for on-board planning and execution in environments where science return is to be maximized against resource limitations and other constraints.

  19. PUPIL: A systematic approach to software integration in multi-scale simulations

    NASA Astrophysics Data System (ADS)

    Torras, Juan; He, Yao; Cao, Chao; Muralidharan, Krishna; Deumens, E.; Cheng, H.-P.; Trickey, S. B.

    2007-08-01

    We present a relatively straightforward way to integrate existing software packages into a full multi-scale simulation package in which each application runs in its own address space and there is no run-time intervention by the researcher. The PUPIL (Program for User Package Interfacing and Linking) architectural concept is to provide a simulation Supervisor, implemented as a Manager and various Workers, which consist of small wrapper interfaces written and installed within each application package, together with various communication services. The different, autonomous packages ("Calculation Units") are plugged into the PUPIL system, which one then operates as a software driver for them. Well-defined protocols are provided for communication between the different Calculation Units and the PUPIL system. The CORBA communication protocol is used to exchange information between running processes. All simulation directives from the user are stored in an XML file that is interpreted by the PUPIL Manager and Workers. An initial version has been designed using the Object Oriented (OO) paradigm and implemented in Java as a fast prototyping language. Tests of implementation ease and of operational correctness (on toy physical systems) have been carried out. In the former category, we document how interfaces to both DL_POLY and SIESTA were done relatively straightforwardly. In the latter category, the most demanding test was the joining of three different packages to do an MD calculation with pattern recognition to identify the QM-forces region and an external QM force calculation. The results show that PUPIL provides ease of operation and maintenance with little overhead.
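
    The Manager/Worker split with thin wrappers can be sketched as follows. The example is illustrative only: PUPIL itself is written in Java and communicates over CORBA, whereas this sketch is a single-process Python toy whose class and element names do not come from PUPIL.

    ```python
    # Thin wrapper-interface sketch in the spirit of a Manager/Worker split:
    # the Manager reads XML directives and drives Workers that wrap external codes.
    import xml.etree.ElementTree as ET

    class Worker:
        """Wraps one external package behind a uniform 'run step' call."""
        def run_step(self, params: dict) -> dict:
            raise NotImplementedError

    class MDWorker(Worker):
        def run_step(self, params):
            # A real wrapper would invoke the MD package here (e.g. via its
            # input files); this stub just echoes the directive to show control flow.
            return {"forces_file": f"forces_{params['step']}.dat"}

    class Manager:
        """Reads simulation directives from XML and drives the registered workers."""
        def __init__(self, workers: dict):
            self.workers = workers

        def run(self, xml_text: str):
            for step in ET.fromstring(xml_text).findall("step"):
                worker = self.workers[step.get("worker")]
                result = worker.run_step({"step": step.get("id")})
                print(step.get("id"), "->", result)

    directives = """<simulation>
      <step id="1" worker="md"/>
      <step id="2" worker="md"/>
    </simulation>"""
    Manager({"md": MDWorker()}).run(directives)
    ```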

  20. Continuous integration and quality control for scientific software

    NASA Astrophysics Data System (ADS)

    Neidhardt, A.; Ettl, M.; Brisken, W.; Dassing, R.

    2013-08-01

    Modern software has to be stable, portable, fast and reliable. This is becoming more and more important for scientific software as well. But it requires a sophisticated way to inspect, check and evaluate the quality of source code with a suitable, automated infrastructure. A centralized server with a software repository and a version control system is one essential part, used to manage the code base and to control the different development versions. While each project can be compiled separately, the whole code base can also be compiled with one central “Makefile”. This is used to create automated, nightly builds. Additionally, all sources are inspected automatically with static code analysis and inspection tools, which check for well-known error situations, memory and resource leaks, performance issues, or style issues. In combination with an automatic documentation generator it is possible to create the developer documentation directly from the code and the inline comments. All reports and generated information are presented as HTML pages on a Web server. Because this environment has increased the stability and quality of the software of the Geodetic Observatory Wettzell tremendously, it is now also available to scientific communities. One regular customer is already the developer group of the DiFX software correlator project.
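
    A nightly-build driver of the kind described here can be sketched in a few lines: build from the central Makefile, run a static analyzer, generate documentation, and write an HTML summary for the web server. The specific tools invoked below (make, cppcheck, doxygen) are assumptions for illustration, not necessarily the ones used at Wettzell.

    ```python
    # Sketch of a nightly-build driver: build, static analysis, docs, HTML report.
    import subprocess
    import datetime
    import pathlib

    def run(name, cmd):
        """Run one stage, capture its output, and report pass/fail."""
        try:
            proc = subprocess.run(cmd, capture_output=True, text=True)
            return name, proc.returncode == 0, proc.stdout + proc.stderr
        except FileNotFoundError as exc:
            return name, False, str(exc)   # tool not installed on this machine

    def nightly(report_dir="reports"):
        stages = [
            ("build", ["make", "-k", "all"]),              # central Makefile
            ("static-analysis", ["cppcheck", "--enable=all", "src/"]),
            ("docs", ["doxygen", "Doxyfile"]),
        ]
        results = [run(name, cmd) for name, cmd in stages]

        rows = "".join(
            f"<tr><td>{name}</td><td>{'OK' if ok else 'FAILED'}</td></tr>"
            for name, ok, _ in results
        )
        html = (f"<h1>Nightly build {datetime.date.today()}</h1>"
                f"<table>{rows}</table>")
        out = pathlib.Path(report_dir)
        out.mkdir(exist_ok=True)
        (out / "index.html").write_text(html)   # served by the web server

    if __name__ == "__main__":
        nightly()
    ```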