Sample records for unified software development

  1. What Does CALL Have to Offer Computer Science and What Does Computer Science Have to Offer CALL?

    ERIC Educational Resources Information Center

    Cushion, Steve

    2006-01-01

    We will argue that CALL can usefully be viewed as a subset of computer software engineering and can profit from adopting some of the recent progress in software development theory. The unified modelling language has become the industry standard modelling technique and the accompanying unified process is rapidly gaining acceptance. The manner in…

  2. Data Centric Development Methodology

    ERIC Educational Resources Information Center

    Khoury, Fadi E.

    2012-01-01

    Data centric applications, an important area of software development in large organizations, have mostly adopted a software methodology, such as waterfall or the Rational Unified Process, as the framework for their development. These methodologies can work for structural, procedural, or object-oriented applications, but fail to capture…

  3. A Unified Algebraic and Logic-Based Framework Towards Safe Routing Implementations

    DTIC Science & Technology

    2015-08-13

    Software-defined Networks (SDN). We developed a declarative platform for implementing SDN protocols using declarative networking … and debugging several SDN applications. Example-based SDN synthesis: the recent emergence of software-defined networks offers an opportunity to design … in the domain of Software-defined Networks (SDN).

  4. Study of a unified hardware and software fault-tolerant architecture

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan; Alger, Linda; Friend, Steven; Greeley, Gregory; Sacco, Stephen; Adams, Stuart

    1989-01-01

    A unified architectural concept, called the Fault Tolerant Processor Attached Processor (FTP-AP), that can tolerate hardware as well as software faults is proposed for applications requiring ultrareliable computation capability. An emulation of the FTP-AP architecture, consisting of a breadboard Motorola 68010-based quadruply redundant Fault Tolerant Processor, four VAX 750s as attached processors, and four versions of a transport aircraft yaw damper control law, is used as a testbed in the AIRLAB to examine a number of critical issues. Solutions of several basic problems associated with N-Version software are proposed and implemented on the testbed. This includes a confidence voter to resolve coincident errors in N-Version software. A reliability model of N-Version software that is based upon the recent understanding of software failure mechanisms is also developed. The basic FTP-AP architectural concept appears suitable for hosting N-Version application software while at the same time tolerating hardware failures. Architectural enhancements for greater efficiency, software reliability modeling, and N-Version issues that merit further research are identified.
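
    The confidence voter mentioned above can be pictured with a small, assumed sketch: it is not the FTP-AP algorithm, and the weighting rule, tolerance, and names below are invented purely for illustration.

        # Illustrative confidence voter for N-Version outputs (assumed scheme, not
        # the FTP-AP implementation): coincident outputs are grouped, the group
        # backed by the most accumulated confidence wins, and dissenting versions
        # lose confidence for the next vote.
        from collections import defaultdict

        def confidence_vote(outputs, confidence, tol=1e-6, decay=0.9, reward=0.05):
            """outputs: one value per version; confidence: running weight per version."""
            groups = defaultdict(list)
            for i, y in enumerate(outputs):
                for key in groups:                      # group numerically coincident outputs
                    if abs(key - y) <= tol:
                        groups[key].append(i)
                        break
                else:
                    groups[y].append(i)
            winner = max(groups, key=lambda k: (sum(confidence[i] for i in groups[k]),
                                                len(groups[k])))
            for i in range(len(outputs)):               # reward agreement, decay dissent
                if i in groups[winner]:
                    confidence[i] = min(1.0, confidence[i] + reward)
                else:
                    confidence[i] *= decay
            return winner, confidence

        value, weights = confidence_vote([0.31, 0.31, 0.29, 0.31], [1.0, 1.0, 1.0, 1.0])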

  5. Workstation-Based Simulation for Rapid Prototyping and Piloted Evaluation of Control System Designs

    NASA Technical Reports Server (NTRS)

    Mansur, M. Hossein; Colbourne, Jason D.; Chang, Yu-Kuang; Aiken, Edwin W. (Technical Monitor)

    1998-01-01

    The development and optimization of flight control systems for modern fixed- and rotary-wing aircraft consume a significant portion of the overall time and cost of aircraft development. Substantial savings can be achieved if the time and cost required to develop and flight test the control system are reduced. To bring about such reductions, software tools such as Matlab/Simulink are being used to readily implement block diagrams and rapidly evaluate the expected responses of the completed system. Moreover, tools such as CONDUIT (CONtrol Designer's Unified InTerface) have been developed that enable controls engineers to optimize their control laws and ensure that all the relevant quantitative criteria are satisfied, all within a fully interactive, user-friendly, unified software environment.

  6. Modelling and Implementation of Catalogue Cards Using FreeMarker

    ERIC Educational Resources Information Center

    Radjenovic, Jelen; Milosavljevic, Branko; Surla, Dusan

    2009-01-01

    Purpose: The purpose of this paper is to report on a study involving the specification (using Unified Modelling Language (UML) 2.0) of information requirements and implementation of the software components for generating catalogue cards. The implementation in a Java environment is developed using the FreeMarker software.…

  7. FORMED: Bringing Formal Methods to the Engineering Desktop

    DTIC Science & Technology

    2016-02-01

    integrates formal verification into software design and development by precisely defining semantics for a restricted subset of the Unified Modeling … input-output contract satisfaction and absence of null pointer dereferences. Subject terms: Formal Methods, Software Verification, Model-Based … Domain-specific languages (DSLs) drive both implementation and formal verification

  8. A unified approach to VLSI layout automation and algorithm mapping on processor arrays

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Pattabiraman, S.; Srinivasan, Vinoo N.

    1993-01-01

    Development of software tools for designing supercomputing systems is highly complex and cost ineffective. To tackle this, a special-purpose PAcube silicon compiler, which integrates different design levels from cells to processor arrays, has been proposed. As part of this, we present in this paper a novel methodology that unifies the problems of Layout Automation and Algorithm Mapping.

  9. An investigation of difficulties experienced by students developing unified modelling language (UML) class and sequence diagrams

    NASA Astrophysics Data System (ADS)

    Sien, Ven Yu

    2011-12-01

    Object-oriented analysis and design (OOAD) is not an easy subject to learn. There are many challenges confronting students when studying OOAD. Students have particular difficulty abstracting real-world problems within the context of OOAD. They are unable to effectively build object-oriented (OO) models from the problem domain because they essentially do not know "what" to model. This article investigates the difficulties and misconceptions undergraduate students have with analysing systems using unified modelling language analysis class and sequence diagrams. These models were chosen because they represent important static and dynamic aspects of the software system under development. The results of this study will help students produce effective OO models, and help software engineering lecturers design learning materials and approaches for introductory OOAD courses.

  10. Software techniques for a distributed real-time processing system. [for spacecraft

    NASA Technical Reports Server (NTRS)

    Lesh, F.; Lecoq, P.

    1976-01-01

    The paper describes software techniques developed for the Unified Data System (UDS), a distributed processor network for control and data handling onboard a planetary spacecraft. These techniques include a structured language for specifying the programs contained in each module, and a small executive program in each module which performs scheduling and implements the module task.
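
    The per-module executive described above can be pictured with a minimal cooperative scheduler; the sketch below is generic Python, not the UDS executive, and all names and periods are assumptions.

        # Minimal cooperative executive (illustrative only): each module registers
        # periodic tasks, and the executive releases whichever task is due next.
        import heapq, itertools, time

        class ModuleExecutive:
            def __init__(self):
                self._queue = []                  # entries: (release_time, seq, period, task)
                self._seq = itertools.count()     # tie-breaker so callables are never compared

            def register(self, task, period_s):
                heapq.heappush(self._queue, (time.monotonic(), next(self._seq), period_s, task))

            def run(self, duration_s):
                deadline = time.monotonic() + duration_s
                while self._queue and time.monotonic() < deadline:
                    release, _, period, task = heapq.heappop(self._queue)
                    now = time.monotonic()
                    if release > now:
                        time.sleep(release - now)
                    task()                        # perform the module task
                    heapq.heappush(self._queue, (release + period, next(self._seq), period, task))

        executive = ModuleExecutive()
        executive.register(lambda: print("telemetry frame"), period_s=0.5)
        executive.register(lambda: print("command poll"), period_s=1.0)
        executive.run(duration_s=2.0)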

  11. Basic analysis of reflectometry data software package for the analysis of multilayered structures according to reflectometry data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Astaf'ev, S. B., E-mail: bard@ns.crys.ras.ru; Shchedrin, B. M.; Yanusova, L. G.

    2012-01-15

    The main principles of developing the Basic Analysis of Reflectometry Data (BARD) software package, which is aimed at obtaining a unified (standardized) tool for analyzing the structure of thin multilayer films and nanostructures of different nature based on reflectometry data, are considered. This software package contains both traditionally used procedures for processing reflectometry data and the authors' original developments on the basis of new methods for carrying out and analyzing reflectometry experiments. The structure of the package, its functional possibilities, examples of application, and prospects of development are reviewed.

  12. A unified software framework for deriving, visualizing, and exploring abstraction networks for ontologies

    PubMed Central

    Ochs, Christopher; Geller, James; Perl, Yehoshua; Musen, Mark A.

    2016-01-01

    Software tools play a critical role in the development and maintenance of biomedical ontologies. One important task that is difficult without software tools is ontology quality assurance. In previous work, we have introduced different kinds of abstraction networks to provide a theoretical foundation for ontology quality assurance tools. Abstraction networks summarize the structure and content of ontologies. One kind of abstraction network that we have used repeatedly to support ontology quality assurance is the partial-area taxonomy. It summarizes structurally and semantically similar concepts within an ontology. However, the use of partial-area taxonomies was ad hoc and not generalizable. In this paper, we describe the Ontology Abstraction Framework (OAF), a unified framework and software system for deriving, visualizing, and exploring partial-area taxonomy abstraction networks. The OAF includes support for various ontology representations (e.g., OWL and SNOMED CT's relational format). A Protégé plugin for deriving “live partial-area taxonomies” is demonstrated. PMID:27345947
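
    To make the partial-area taxonomy idea concrete, the toy sketch below groups concepts by the exact set of relationship types they define (the "areas" of such a taxonomy). It is an assumed simplification for illustration, not OAF code, and the miniature ontology is invented.

        # Toy "area" grouping for a partial-area taxonomy sketch (not the OAF implementation).
        from collections import defaultdict

        # Invented miniature ontology: concept -> relationship types it defines.
        concept_relationships = {
            "ClinicalFinding":  frozenset(),
            "Infection":        frozenset({"causative-agent"}),
            "ViralInfection":   frozenset({"causative-agent"}),
            "Fracture":         frozenset({"finding-site"}),
            "FractureOfFemur":  frozenset({"finding-site", "severity"}),
        }

        def build_areas(concepts):
            """Group concepts that share the same set of relationship types."""
            areas = defaultdict(list)
            for concept, rels in concepts.items():
                areas[rels].append(concept)
            return areas

        for rel_set, members in build_areas(concept_relationships).items():
            print(sorted(rel_set), "->", members)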

  13. A unified software framework for deriving, visualizing, and exploring abstraction networks for ontologies.

    PubMed

    Ochs, Christopher; Geller, James; Perl, Yehoshua; Musen, Mark A

    2016-08-01

    Software tools play a critical role in the development and maintenance of biomedical ontologies. One important task that is difficult without software tools is ontology quality assurance. In previous work, we have introduced different kinds of abstraction networks to provide a theoretical foundation for ontology quality assurance tools. Abstraction networks summarize the structure and content of ontologies. One kind of abstraction network that we have used repeatedly to support ontology quality assurance is the partial-area taxonomy. It summarizes structurally and semantically similar concepts within an ontology. However, the use of partial-area taxonomies was ad hoc and not generalizable. In this paper, we describe the Ontology Abstraction Framework (OAF), a unified framework and software system for deriving, visualizing, and exploring partial-area taxonomy abstraction networks. The OAF includes support for various ontology representations (e.g., OWL and SNOMED CT's relational format). A Protégé plugin for deriving "live partial-area taxonomies" is demonstrated. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Techniques for Unifying Disparate Elements in an EOS Instrument's Product Generation System Development Environment

    NASA Technical Reports Server (NTRS)

    Murray, Alex; Eng, Bjorn; Leff, Craig; Schwarz, Arnold

    1997-01-01

    In the development environment for the ASTER Level II product generation system, techniques have been incorporated to allow automated information sharing among all system elements and to enable the use of sound software engineering practices in the scripting languages.

  15. Standardized development of computer software. Part 1: Methods

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come in contact with it, and to facilitate operations and alterations of the program as requirements on the program environment change.

  16. DICOM static and dynamic representation through unified modeling language

    NASA Astrophysics Data System (ADS)

    Martinez-Martinez, Alfonso; Jimenez-Alaniz, Juan R.; Gonzalez-Marquez, A.; Chavez-Avelar, N.

    2004-04-01

    The DICOM standard, as all standards, specifies in a generic way the management of digital medical images and their related information in network and storage media environments. However, understanding the specifications for a particular implementation is not a trivial task. Thus, this work is about understanding and modelling parts of the DICOM standard using object-oriented methodologies, as part of software development processes. This has offered different static and dynamic views, in accordance with the standard specifications, and the resulting models have been represented through the Unified Modelling Language (UML). The modelled parts are related to the network conformance claim: Network Communication Support for Message Exchange, Message Exchange, Information Object Definitions, Service Class Specifications, Data Structures and Encoding, and Data Dictionary. The resulting models have given a better understanding of the DICOM parts and have opened the possibility of creating a software library to develop DICOM-conformant PACS applications.

  17. The Unified Plant Growth Model (UPGM): software framework overview and model application

    USDA-ARS?s Scientific Manuscript database

    Since the Environmental Policy Integrated Climate (EPIC) model was developed in 1989, the EPIC plant growth component has been incorporated into other erosion and crop management models (e.g., WEPS, WEPP, SWAT, ALMANAC, and APEX) and modified to meet model developer research objectives. This has re...

  18. One approach for evaluating the Distributed Computing Design System (DCDS)

    NASA Technical Reports Server (NTRS)

    Ellis, J. T.

    1985-01-01

    The Distributed Computer Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  19. Customer-experienced rapid prototyping

    NASA Astrophysics Data System (ADS)

    Zhang, Lijuan; Zhang, Fu; Li, Anbo

    2008-12-01

    To describe GIS requirements accurately and comprehend them quickly, this article integrates the ideas of QFD (Quality Function Deployment) and UML (Unified Modeling Language), analyzes the deficiencies of the prototype development model, and proposes the Customer-Experienced Rapid Prototyping (CE-RP), describing its process and framework in detail from the perspective of the characteristics of modern GIS. The CE-RP is mainly composed of Customer Tool-Sets (CTS), Developer Tool-Sets (DTS) and a Barrier-Free Semantic Interpreter (BF-SI), and is performed by two roles: customer and developer. The main purpose of the CE-RP is to produce unified and authorized requirements data models shared between the customer and the software developer.

  20. Unified Approach to Modeling and Simulation of Space Communication Networks and Systems

    NASA Technical Reports Server (NTRS)

    Barritt, Brian; Bhasin, Kul; Eddy, Wesley; Matthews, Seth

    2010-01-01

    Network simulator software tools are often used to model the behaviors and interactions of applications, protocols, packets, and data links in terrestrial communication networks. Other software tools that model the physics, orbital dynamics, and RF characteristics of space systems have matured to allow for rapid, detailed analysis of space communication links. However, the absence of a unified toolset that integrates the two modeling approaches has encumbered the systems engineers tasked with the design, architecture, and analysis of complex space communication networks and systems. This paper presents the unified approach and describes the motivation, challenges, and our solution - the customization of the network simulator to integrate with astronautical analysis software tools for high-fidelity end-to-end simulation. Keywords: space; communication; systems; networking; simulation; modeling; QualNet; STK; integration; space networks

  1. An XML-based method for astronomy software designing

    NASA Astrophysics Data System (ADS)

    Liao, Mingxue; Aili, Yusupu; Zhang, Jin

    An XML-based method for standardizing software design is introduced, analyzed, and successfully applied to renovating the hardware and software of the digital clock at Urumqi Astronomical Station. The basic strategy for eliciting time information from the new FT206 digital clock in the antenna control program is introduced. With FT206, the need to compute with sophisticated formulas how many centuries have passed since a certain day is eliminated, and it is no longer necessary to set the correct UT time on the computer controlling the antenna, because the year, month, and day are all deduced from the Julian day stored in FT206 rather than from the computer clock. With the XML-based method and standard for software design, various existing design methods are unified, communication and collaboration between developers are facilitated, and an Internet-based mode of software development becomes possible. The development trend of the XML-based design method is predicted.
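
    The point that calendar information can be deduced directly from a Julian day number is easy to illustrate with the standard Fliegel-Van Flandern integer conversion; the sketch below is a generic example, not the FT206 or antenna-control code.

        def julian_day_to_gregorian(jdn: int):
            """Convert an integer Julian Day Number to a Gregorian (year, month, day)
            using the Fliegel-Van Flandern integer algorithm (illustrative only)."""
            l = jdn + 68569
            n = 4 * l // 146097
            l = l - (146097 * n + 3) // 4
            i = 4000 * (l + 1) // 1461001
            l = l - 1461 * i // 4 + 31
            j = 80 * l // 2447
            day = l - 2447 * j // 80
            l = j // 11
            month = j + 2 - 12 * l
            year = 100 * (n - 49) + i + l
            return year, month, day

        assert julian_day_to_gregorian(2451545) == (2000, 1, 1)   # JDN of 2000-01-01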

  2. Unified Geophysical Cloud Platform (UGCP) for Seismic Monitoring and other Geophysical Applications.

    NASA Astrophysics Data System (ADS)

    Synytsky, R.; Starovoit, Y. O.; Henadiy, S.; Lobzakov, V.; Kolesnikov, L.

    2016-12-01

    We present the Unified Geophysical Cloud Platform (UGCP), or UniGeoCloud, an innovative approach to geophysical data processing in the cloud with the ability to run any type of data processing software in an isolated environment within a single cloud platform. We have developed a simple and quick method for installing several widely known open-source seismic software packages (SeisComp3, Earthworm, Geotool, MSNoise) that does not require knowledge of system administration, configuration, OS compatibility issues, and other often tedious details that waste time on system configuration. The installation process is reduced to a mouse click on the selected software package in the cloud marketplace. The main objective of the developed capability is a set of software tools with which users can quickly design and install their own highly reliable and highly available virtual IT infrastructure for organizing seismic (and, in future, other geophysical) data processing for either research or monitoring purposes. These tools provide access to any seismic station data openly available over IP from networks affiliated with different institutions and organizations. Users can also set up their own network by selecting either regionally deployed stations or a worldwide network of stations from the global map. The processing software, products, and research results can be easily monitored from anywhere using a variety of devices, from desktop computers to mobile gadgets. Current efforts of the development team are directed at achieving Scalability, Reliability and Sustainability (SRS) of the proposed solutions, allowing any user to run their applications with confidence of no data loss and no failure of the monitoring or research software components. The system is suitable for quick rollout of the NDC-in-Box software package developed for State Signatories and aimed at promoting the processing of data collected by the IMS Network.

  3. Unified Engineering Software System

    NASA Technical Reports Server (NTRS)

    Purves, L. R.; Gordon, S.; Peltzman, A.; Dube, M.

    1989-01-01

    Collection of computer programs performs diverse functions in prototype engineering. NEXUS, NASA Engineering Extendible Unified Software system, is research set of computer programs designed to support full sequence of activities encountered in NASA engineering projects. Sequence spans preliminary design, design analysis, detailed design, manufacturing, assembly, and testing. Primarily addresses process of prototype engineering, task of getting single or small number of copies of product to work. Written in FORTRAN 77 and PROLOG.

  4. CoCoNUT: an efficient system for the comparison and analysis of genomes

    PubMed Central

    2008-01-01

    Background Comparative genomics is the analysis and comparison of genomes from different species. This area of research is driven by the large number of sequenced genomes and heavily relies on efficient algorithms and software to perform pairwise and multiple genome comparisons. Results Most of the software tools available are tailored for one specific task. In contrast, we have developed a novel system CoCoNUT (Computational Comparative geNomics Utility Toolkit) that allows solving several different tasks in a unified framework: (1) finding regions of high similarity among multiple genomic sequences and aligning them, (2) comparing two draft or multi-chromosomal genomes, (3) locating large segmental duplications in large genomic sequences, and (4) mapping cDNA/EST to genomic sequences. Conclusion CoCoNUT is competitive with other software tools w.r.t. the quality of the results. The use of state of the art algorithms and data structures allows CoCoNUT to solve comparative genomics tasks more efficiently than previous tools. With the improved user interface (including an interactive visualization component), CoCoNUT provides a unified, versatile, and easy-to-use software tool for large scale studies in comparative genomics. PMID:19014477

  5. Automatic extraction and visualization of object-oriented software design metrics

    NASA Astrophysics Data System (ADS)

    Lakshminarayana, Anuradha; Newman, Timothy S.; Li, Wei; Talburt, John

    2000-02-01

    Software visualization is a graphical representation of software characteristics and behavior. Certain modes of software visualization can be useful in isolating problems and identifying unanticipated behavior. In this paper we present a new approach to aid understanding of object-oriented software through 3D visualization of software metrics that can be extracted from the design phase of software development. The focus of the paper is a metric extraction method and a new collection of glyphs for multi-dimensional metric visualization. Our approach utilizes the extensibility interface of a popular CASE tool to access and automatically extract the metrics from Unified Modeling Language class diagrams. Following the extraction of the design metrics, 3D visualizations of these metrics are generated for each class in the design, utilizing intuitively meaningful 3D glyphs that are representative of the ensemble of metrics. Extraction and visualization of design metrics can aid software developers in the early study and understanding of design complexity.

  6. Colaborated Architechture Framework for Composition UML 2.0 in Zachman Framework

    NASA Astrophysics Data System (ADS)

    Hermawan; Hastarista, Fika

    2016-01-01

    Zachman Framework (ZF) is the enterprise architecture framework most widely adopted in Enterprise Information System (EIS) development. In this study, a Colaborated Architechture Framework (CAF) has been developed to collaborate ZF with Unified Modeling Language (UML) 2.0 modeling. The CAF provides a composition of the ZF matrix in which each cell consists of Model Driven Architecture (MDA) artifacts drawn from various UML models and Software Requirement Specification (SRS) documents. This modeling is used to develop an Enterprise Resource Planning (ERP) system. Because the ERP covers a large number of applications with complex relations, the Agile Model Driven Design (AMDD) approach is used as an advanced method to transform the MDA into application module components efficiently and accurately. Finally, the use of the CAF satisfies the needs of all stakeholders involved in the overall stages of the Rational Unified Process (RUP) and achieves high satisfaction with the functional features of the ERP software at PT. Iglas (Persero) Gresik.

  7. A Clustering-Based Approach to Enriching Code Foraging Environment.

    PubMed

    Niu, Nan; Jin, Xiaoyu; Niu, Zhendong; Cheng, Jing-Ru C; Li, Ling; Kataev, Mikhail Yu

    2016-09-01

    Developers often spend valuable time navigating and seeking relevant code in software maintenance. Currently, there is a lack of theoretical foundations to guide tool design and evaluation to best shape the code base to developers. This paper contributes a unified code navigation theory in light of the optimal food-foraging principles. We further develop a novel framework for automatically assessing the foraging mechanisms in the context of program investigation. We use the framework to examine to what extent the clustering of software entities affects code foraging. Our quantitative analysis of long-lived open-source projects suggests that clustering enriches the software environment and improves foraging efficiency. Our qualitative inquiry reveals concrete insights into real developers' behavior. Our research opens the avenue toward building a new set of ecologically valid code navigation tools.

  8. Pedagogical Issues in Object Orientation.

    ERIC Educational Resources Information Center

    Nerur, Sridhar; Ramanujan, Sam; Kesh, Someswar

    2002-01-01

    Discusses the need for people with object-oriented (OO) skills, explains benefits of OO in software development, and addresses some of the difficulties in teaching OO. Topics include the evolution of programming languages; differences between OO and traditional approaches; differences from data modeling; and Unified Modeling Language (UML) and…

  9. A Unified Framework for Periodic, On-Demand, and User-Specified Software Information

    NASA Technical Reports Server (NTRS)

    Kolano, Paul Z.

    2004-01-01

    Although grid computing can increase the number of resources available to a user, not all resources on the grid may have a software environment suitable for running a given application. To provide users with the necessary assistance for selecting resources with compatible software environments and/or for automatically establishing such environments, it is necessary to have an accurate source of information about the software installed across the grid. This paper presents a new OGSI-compliant software information service that has been implemented as part of NASA's Information Power Grid project. This service is built on top of a general framework for reconciling information from periodic, on-demand, and user-specified sources. Information is retrieved using standard XPath queries over a single unified namespace independent of the information's source. Two consumers of the provided software information, the IPG Resource Broker and the IPG Neutralization Service, are briefly described.
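
    The "standard XPath queries over a single unified namespace" idea can be sketched as follows; the XML layout, element names, and values are invented for illustration and are not the IPG service's actual schema.

        # Query a hypothetical unified software-information document with XPath (lxml).
        from lxml import etree

        doc = etree.fromstring(b"""
        <software-info>
          <resource name="node-a">
            <package name="python" version="2.2" source="periodic-scan"/>
            <package name="mpich"  version="1.2" source="on-demand-probe"/>
          </resource>
          <resource name="node-b">
            <package name="python" version="2.1" source="user-specified"/>
          </resource>
        </software-info>
        """)

        # Which resources report python, regardless of how that fact was collected?
        hosts = doc.xpath('//resource[package/@name="python"]/@name')
        # Which python version does node-a report?
        version = doc.xpath('//resource[@name="node-a"]/package[@name="python"]/@version')
        print(hosts, version)        # ['node-a', 'node-b'] ['2.2']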

  10. Design and implementation of a unified certification management system based on seismic business

    NASA Astrophysics Data System (ADS)

    Tang, Hongliang

    2018-04-01

    Much of the business software for seismic systems is web based: users simply open a browser and enter an IP address. To achieve unified management and security management of these many IP addresses, this paper introduces a design concept based on seismic business and builds a unified authentication management system using ASP technology.

  11. Integrated design optimization research and development in an industrial environment

    NASA Astrophysics Data System (ADS)

    Kumar, V.; German, Marjorie D.; Lee, S.-J.

    1989-04-01

    An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues and also on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system DESIGN-OPT has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module SHAPE-OPT has been developed for shape optimization. Details of these software packages together with their applications to some 2- and 3-dimensional design problems are described.

  12. Integrated design optimization research and development in an industrial environment

    NASA Technical Reports Server (NTRS)

    Kumar, V.; German, Marjorie D.; Lee, S.-J.

    1989-01-01

    An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues and also on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system DESIGN-OPT has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module SHAPE-OPT has been developed for shape optimization. Details of these software packages together with their applications to some 2- and 3-dimensional design problems are described.

  13. Use of Unified Modeling Language (UML) in Model-Based Development (MBD) For Safety-Critical Applications

    DTIC Science & Technology

    2014-12-01

    appears that UML is becoming the de facto MBD language. OMG® states the following on the MDA® FAQ page: “Although not formally required [for MBD], UML...a known limitation [42], so UML users should plan accordingly, especially for safety-critical programs. For example, “models are not used to...description of the MBD tool chain can be produced. That description could be resident in a Plan for Software Aspects of Certification (PSAC) or Software

  14. CASE tools and UML: state of the ART.

    PubMed

    Agarwal, S

    2001-05-01

    With the increasing need for automated tools to assist complex systems development, software design methods are becoming popular. This article analyzes the state of the art in computer-aided software engineering (CASE) tools and the unified modeling language (UML), focusing on their evolution, merits, and industry usage. It identifies managerial issues for the tools' adoption and recommends an action plan to select and implement them. While CASE and UML offer inherent advantages like cheaper, shorter, and more efficient development cycles, they suffer from poor user satisfaction. The critical success factors for their implementation include, among others, management and staff commitment, proper corporate infrastructure, and user training.

  15. Control System Architectures, Technologies and Concepts for Near Term and Future Human Exploration of Space

    NASA Technical Reports Server (NTRS)

    Boulanger, Richard; Overland, David

    2004-01-01

    Technologies that facilitate the design and control of complex, hybrid, and resource-constrained systems are examined. This paper focuses on design methodologies and system architectures, not on specific control methods that may be applied to life support subsystems. Honeywell and Boeing have estimated that 60-80% of the effort in developing complex control systems is software development, and only 20-40% is control system development. It has also been shown that large software projects have failure rates as high as 50-65%. Concepts discussed include the Unified Modeling Language (UML) and design patterns with the goal of creating a self-improving, self-documenting system design process. Successful architectures for control must not only facilitate hardware to software integration, but must also reconcile continuously changing software with much less frequently changing hardware. These architectures rely on software modules or components to facilitate change. Architecting such systems for change leverages the interfaces between these modules or components.

  16. libdrdc: software standards library

    NASA Astrophysics Data System (ADS)

    Erickson, David; Peng, Tie

    2008-04-01

    This paper presents the libdrdc software standards library including internal nomenclature, definitions, units of measure, coordinate reference frames, and representations for use in autonomous systems research. This library is a configurable, portable C-function wrapped C++ / Object Oriented C library developed to be independent of software middleware, system architecture, processor, or operating system. It is designed to use the automatically-tuned linear algebra suite (ATLAS) and Basic Linear Algebra Suite (BLAS) and port to firmware and software. The library goal is to unify data collection and representation for various microcontrollers and Central Processing Unit (CPU) cores and to provide a common Application Binary Interface (ABI) for research projects at all scales. The library supports multi-platform development and currently works on Windows, Unix, GNU/Linux, and Real-Time Executive for Multiprocessor Systems (RTEMS). This library is made available under LGPL version 2.1 license.

  17. Analysis of a hardware and software fault tolerant processor for critical applications

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne B.

    1993-01-01

    Computer systems for critical applications must be designed to tolerate software faults as well as hardware faults. A unified approach to tolerating hardware and software faults is characterized by classifying faults in terms of duration (transient or permanent) rather than source (hardware or software). Errors arising from transient faults can be handled through masking or voting, but errors arising from permanent faults require system reconfiguration to bypass the failed component. Most errors which are caused by software faults can be considered transient, in that they are input-dependent. Software faults are triggered by a particular set of inputs. Quantitative dependability analysis of systems which exhibit a unified approach to fault tolerance can be performed by a hierarchical combination of fault tree and Markov models. A methodology for analyzing hardware and software fault tolerant systems is applied to the analysis of a hypothetical system, loosely based on the Fault Tolerant Parallel Processor. The models consider both transient and permanent faults, hardware and software faults, independent and related software faults, automatic recovery, and reconfiguration.
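
    A hedged numerical illustration of the combined Markov treatment (not the paper's actual model): the small continuous-time Markov chain below tracks a triplex processor in which covered permanent faults lead to reconfiguration and uncovered ones to failure; all states and rates are invented.

        # Toy CTMC: state 0 = 3 good channels, state 1 = 2 good channels after
        # reconfiguration, state 2 = system failed.  Rates are assumed values.
        import numpy as np
        from scipy.linalg import expm

        lam = 1e-4        # permanent fault rate per channel, per hour (assumed)
        c = 0.99          # coverage: probability a fault is handled by reconfiguration

        Q = np.array([
            [-3 * lam,  3 * lam * c,  3 * lam * (1 - c)],
            [ 0.0,     -2 * lam,      2 * lam          ],
            [ 0.0,      0.0,          0.0              ],
        ])

        p0 = np.array([1.0, 0.0, 0.0])      # start with all three channels healthy
        t = 10.0                             # mission time, hours
        p_t = p0 @ expm(Q * t)               # state probabilities at time t
        print("P(system failed by t) =", p_t[2])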

  18. [Computers in nursing: development of free software application with care and management].

    PubMed

    dos Santos, Sérgio Ribeiro

    2010-06-01

    This study aimed at developing a nursing information system implementing both nursing care and management of the service. SisEnf, the Information System in Nursing, is a free software system whose care module comprises the nursing history, clinical examination, and care plan; the management module consists of service shifts, personnel management, hospital indicators, and other elements. The system was implemented at the Medical Clinic of the Lauro Wanderley University Hospital, Universidade Federal da Paraiba. Given the need to bring users and developers closer together, and the constantly changing functional requirements during the interactive process, the Unified Process method was used. SisEnf was developed on a web platform using free software. The work thus aimed at assisting the nursing work process, which now has the opportunity to incorporate information technology into its routine.

  19. A Software Hub for High Assurance Model-Driven Development and Analysis

    DTIC Science & Technology

    2007-01-23

    verification of UML models in TLPVS. In Thomas Baar, Alfred Strohmeier, Ana Moreira, and Stephen J. Mellor, editors, UML 2004 - The Unified Modeling...volume 3785 of Lecture Notes in Computer Science, pages 52–65, Manchester, UK, Nov 2005. Springer. [GH04] Günter Graw and Peter Herrmann. Transformation

  20. A support architecture for reliable distributed computing systems

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.

    1988-01-01

    The Clouds project is well underway toward its goal of building a unified distributed operating system supporting the object model. The operating system design uses the object concept of structuring software at all levels of the system. The basic operating system has been developed, and work is in progress to build a usable system.

  1. Cognitive Foundry v. 3.0 (OSS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basilico, Justin; Dixon, Kevin; McClain, Jonathan

    2009-11-18

    The Cognitive Foundry is a unified collection of tools designed for research and applications that use cognitive modeling, machine learning, or pattern recognition. The software library contains design patterns, interface definitions, and default implementations of reusable software components and algorithms designed to support a wide variety of research and development needs. The library contains three main software packages: the Common package that contains basic utilities and linear algebraic methods, the Cognitive Framework package that contains tools to assist in implementing and analyzing theories of cognition, and the Machine Learning package that provides general algorithms and methods for populating Cognitive Framework components from domain-relevant data.

  2. Sharing Research Models: Using Software Engineering Practices for Facilitation

    PubMed Central

    Bryant, Stephanie P.; Solano, Eric; Cantor, Susanna; Cooley, Philip C.; Wagener, Diane K.

    2011-01-01

    Increasingly, researchers are turning to computational models to understand the interplay of important variables on systems’ behaviors. Although researchers may develop models that meet the needs of their investigation, application limitations—such as nonintuitive user interface features and data input specifications—may limit the sharing of these tools with other research groups. By removing these barriers, other research groups that perform related work can leverage these work products to expedite their own investigations. The use of software engineering practices can enable managed application production and shared research artifacts among multiple research groups by promoting consistent models, reducing redundant effort, encouraging rigorous peer review, and facilitating research collaborations that are supported by a common toolset. This report discusses three established software engineering practices— the iterative software development process, object-oriented methodology, and Unified Modeling Language—and the applicability of these practices to computational model development. Our efforts to modify the MIDAS TranStat application to make it more user-friendly are presented as an example of how computational models that are based on research and developed using software engineering practices can benefit a broader audience of researchers. PMID:21687780

  3. minimUML: A Minimalist Approach to UML Diagramming for Early Computer Science Education

    ERIC Educational Resources Information Center

    Turner, Scott A.; Perez-Quinones, Manuel A.; Edwards, Stephen H.

    2005-01-01

    In introductory computer science courses, the Unified Modeling Language (UML) is commonly used to teach basic object-oriented design. However, there appears to be a lack of suitable software to support this task. Many of the available programs that support UML focus on developing code and not on enhancing learning. Programs designed for…

  4. A unified approach to computer analysis and modeling of spacecraft environmental interactions

    NASA Technical Reports Server (NTRS)

    Katz, I.; Mandell, M. J.; Cassidy, J. J.

    1986-01-01

    A new, coordinated, unified approach to the development of spacecraft plasma interaction models is proposed. The objective is to eliminate unnecessary duplicative work in order to allow researchers to concentrate on the scientific aspects. By streamlining the developmental process, the interchange between theorists and experimentalists is enhanced, and the transfer of technology to the spacecraft engineering community is faster. This approach is called the UNIfied Spacecraft Interaction Model (UNISIM). UNISIM is a coordinated system of software, hardware, and specifications. It is a tool for modeling and analyzing spacecraft interactions. It will be used to design experiments, to interpret results of experiments, and to aid in future spacecraft design. It breaks a Spacecraft Interaction analysis into several modules. Each module will perform an analysis for some physical process, using phenomenology and algorithms which are well documented and have been subject to review. This system and its characteristics are discussed.

  5. [The planning of resource support of secondary medical care in hospital].

    PubMed

    Kungurov, N V; Zil'berberg, N V

    2010-01-01

    The Ural Institute of Dermatovenerology and Immunopathology developed and implemented software for the personalized, comprehensive recording of medical services and pharmaceuticals. The Institute also presents related software: a listing of medical services; a software module for calculating the financial costs of implementing the full standards of secondary medical care for chronic dermatopathy; a reference book of standards for direct specific costs of laboratory and physiotherapy services; and a reference book of pharmaceuticals, testing systems, and consumables. The unified information system for management recording is a sound technique for substantiating the costs of implementing standards of medical care, including high-tech care, taking into account the results of the comprehensive accounting of provided medical services.

  6. A Unified Approach to Model-Based Planning and Execution

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Norvig, Peter (Technical Monitor)

    2000-01-01

    Writing autonomous software is complex, requiring the coordination of functionally and technologically diverse software modules. System and mission engineers must rely on specialists familiar with the different software modules to translate requirements into application software. Also, each module often encodes the same requirement in different forms. The results are high costs and reduced reliability due to the difficulty of tracking discrepancies in these encodings. In this paper we describe a unified approach to planning and execution that we believe provides a unified representational and computational framework for an autonomous agent. We identify the four main components whose interplay provides the basis for the agent's autonomous behavior: the domain model, the plan database, the plan running module, and the planner modules. This representational and problem solving approach can be applied at all levels of the architecture of a complex agent, such as Remote Agent. In the rest of the paper we briefly describe the Remote Agent architecture. The new agent architecture proposed here aims at achieving the full Remote Agent functionality. We then give the fundamental ideas behind the new agent architecture and point out some implication of the structure of the architecture, mainly in the area of reactivity and interaction between reactive and deliberative decision making. We conclude with related work and current status.
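
    As an assumed sketch of how the four components can share one representation (this is not the Remote Agent code, and every class and action name below is invented), a planner module and a plan-running module can cooperate through a common plan database built against a domain model:

        # Invented sketch: planner and plan runner share a plan database.
        from dataclasses import dataclass, field

        @dataclass
        class DomainModel:
            actions: dict                               # action name -> callable

        @dataclass
        class PlanDatabase:
            steps: list = field(default_factory=list)   # ordered action names

        class Planner:
            def __init__(self, domain, plan_db):
                self.domain, self.plan_db = domain, plan_db
            def expand(self, goals):
                # A real planner would search over the domain model; here we only
                # record the goal steps the domain knows how to execute.
                self.plan_db.steps.extend(g for g in goals if g in self.domain.actions)

        class PlanRunner:
            def __init__(self, domain, plan_db):
                self.domain, self.plan_db = domain, plan_db
            def run(self):
                for step in self.plan_db.steps:         # execute against the shared plan
                    self.domain.actions[step]()

        domain = DomainModel(actions={"warm_up_camera": lambda: print("camera on"),
                                      "take_image":     lambda: print("image taken")})
        plan_db = PlanDatabase()
        Planner(domain, plan_db).expand(["warm_up_camera", "take_image"])
        PlanRunner(domain, plan_db).run()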

  7. Lessons learned in creating spacecraft computer systems: Implications for using Ada (R) for the space station

    NASA Technical Reports Server (NTRS)

    Tomayko, James E.

    1986-01-01

    Twenty-five years of spacecraft onboard computer development have resulted in a better understanding of the requirements for effective, efficient, and fault tolerant flight computer systems. Lessons from eight flight programs (Gemini, Apollo, Skylab, Shuttle, Mariner, Voyager, and Galileo) and three research programs (digital fly-by-wire, STAR, and the Unified Data System) are useful in projecting the computer hardware configuration of the Space Station and the ways in which the Ada programming language will enhance the development of the necessary software. The evolution of hardware technology, fault protection methods, and software architectures used in space flight is reviewed in order to provide insight into the pending development of such items for the Space Station.

  8. The design of aircraft using the decision support problem technique

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Marinopoulos, Stergios; Jackson, David M.; Shupe, Jon A.

    1988-01-01

    The Decision Support Problem Technique for unified design, manufacturing and maintenance is being developed at the Systems Design Laboratory at the University of Houston. This involves the development of a domain-independent method (and the associated software) that can be used to process domain-dependent information and thereby provide support for human judgment. In a computer assisted environment, this support is provided in the form of optimal solutions to Decision Support Problems.

  9. Modeling a Nursing Guideline with Standard Terminology and Unified Modeling Language for a Nursing Decision Support System: A Case Study.

    PubMed

    Choi, Jeeyae; Jansen, Kay; Coenen, Amy

    In recent years, Decision Support Systems (DSSs) have been developed and used to achieve "meaningful use". One approach to developing DSSs is to translate clinical guidelines into a computer-interpretable format. However, there is no specific guideline modeling approach to translate nursing guidelines to computer-interpretable guidelines. This results in limited use of DSSs in nursing. Unified modeling language (UML) is a software writing language known to accurately represent the end-users' perspective, due to its expressive characteristics. Furthermore, standard terminology enabled DSSs have been shown to smoothly integrate into existing health information systems. In order to facilitate development of nursing DSSs, the UML was used to represent a guideline for medication management for older adults encoded with the International Classification for Nursing Practice (ICNP®). The UML was found to be a useful and sufficient tool to model a nursing guideline for a DSS.

  10. Modeling a Nursing Guideline with Standard Terminology and Unified Modeling Language for a Nursing Decision Support System: A Case Study

    PubMed Central

    Choi, Jeeyae; Jansen, Kay; Coenen, Amy

    2015-01-01

    In recent years, Decision Support Systems (DSSs) have been developed and used to achieve “meaningful use”. One approach to developing DSSs is to translate clinical guidelines into a computer-interpretable format. However, there is no specific guideline modeling approach to translate nursing guidelines to computer-interpretable guidelines. This results in limited use of DSSs in nursing. Unified modeling language (UML) is a software writing language known to accurately represent the end-users’ perspective, due to its expressive characteristics. Furthermore, standard terminology enabled DSSs have been shown to smoothly integrate into existing health information systems. In order to facilitate development of nursing DSSs, the UML was used to represent a guideline for medication management for older adults encoded with the International Classification for Nursing Practice (ICNP®). The UML was found to be a useful and sufficient tool to model a nursing guideline for a DSS. PMID:26958174

  11. Validating agent oriented methodology (AOM) for netlogo modelling and simulation

    NASA Astrophysics Data System (ADS)

    WaiShiang, Cheah; Nissom, Shane; YeeWai, Sim; Sharbini, Hamizan

    2017-10-01

    AOM (Agent Oriented Modeling) is a comprehensive and unified agent methodology for agent-oriented software development. The AOM methodology was proposed to aid developers by introducing techniques, terminology, notation, and guidelines for agent system development. Although the AOM methodology is claimed to be capable of developing complex real-world systems, its potential is yet to be realized and recognized by the mainstream software community, and its adoption is still in its infancy. Among the reasons is that there are few case studies or success stories of AOM. This paper presents two case studies on the adoption of AOM for individual-based modelling and simulation, demonstrating how AOM is useful for epidemiology and ecology studies. Hence, it further validates AOM in a qualitative manner.

  12. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory A.; Ingham, Michel D.; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    Innovative systems and software engineering solutions are required to meet the increasingly challenging demands of deep-space robotic missions. While recent advances in the development of an integrated systems and software engineering approach have begun to address some of these issues, they are still at the core highly manual and, therefore, error-prone. This paper describes a task aimed at infusing MIT's model-based executive, Titan, into JPL's Mission Data System (MDS), a unified state-based architecture, systems engineering process, and supporting software framework. Results of the task are presented, including a discussion of the benefits and challenges associated with integrating mature model-based programming techniques and technologies into a rigorously-defined domain specific architecture.

  13. Probabilistic Graphical Model Representation in Phylogenetics

    PubMed Central

    Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.

    2014-01-01

    Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559
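
    The core idea of breaking a complex model into conditionally independent pieces is the standard directed-graphical-model factorization, stated here only to make the abstract's point explicit:

        p(x_1, \dots, x_n) \;=\; \prod_{i=1}^{n} p\!\left( x_i \mid \mathrm{pa}(x_i) \right),

    where pa(x_i) denotes the parents of node x_i in the directed acyclic graph; in a phylogenetic setting the factors typically include, for example, a prior on the tree, priors on branch lengths and substitution parameters, and the likelihood of the sequence data given all of these.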

  14. Modifications to Langley 0.3-m TCT adaptive wall software for heavy gas test medium, phase 1 studies

    NASA Technical Reports Server (NTRS)

    Murthy, A. V.

    1992-01-01

    The scheme for two-dimensional wall adaptation with sulfur hexafluoride (SF6) as test gas in the NASA Langley Research Center 0.3-m Transonic Cryogenic Tunnel (0.3-m TCT) is presented. A unified version of the wall adaptation software has been developed to function in a dual gas operation mode (nitrogen or SF6). The feature of ideal gas calculations for nitrogen operation is retained. For SF6 operation, real gas properties have been computed using the departure function technique. Installation of the software on the 0.3-m TCT ModComp-A computer and preliminary validation with nitrogen operation were found to be satisfactory. Further validation and improvements to the software will be undertaken when the 0.3-m TCT is ready for operation with SF6 gas.
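
    For readers unfamiliar with the departure-function technique mentioned above, the general relation (stated generically here, not as the specific formulation used in the 0.3-m TCT software) expresses a real-gas property as the ideal-gas value plus a correction evaluated from an equation of state; for enthalpy, at constant temperature,

        h(T, P) \;=\; h^{\mathrm{ig}}(T) \;+\; \int_{0}^{P} \left[ v - T \left( \frac{\partial v}{\partial T} \right)_{P} \right] dP ,

    where the integral is the enthalpy departure evaluated from an equation of state for the real gas (here SF6).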

  15. Adapting Rational Unified Process (RUP) approach in designing a secure e-Tendering model

    NASA Astrophysics Data System (ADS)

    Mohd, Haslina; Robie, Muhammad Afdhal Muhammad; Baharom, Fauziah; Darus, Norida Muhd; Saip, Mohamed Ali; Yasin, Azman

    2016-08-01

    e-Tendering is the electronic processing of tender documents via the internet, allowing tenderers to publish, communicate, access, receive, and submit all tender-related information and documentation online. This study aims to design an e-Tendering system using the Rational Unified Process (RUP) approach. RUP provides a disciplined approach to assigning tasks and responsibilities within the software development process, and its four phases help researchers adjust to the requirements of projects of different scope, problem domain, and size. RUP is characterized as a use-case-driven, architecture-centred, iterative and incremental process model. However, the scope of this study covers only the Inception and Elaboration phases as steps to develop the model, and performs only three of the nine workflows (business modeling, requirements, and analysis and design). RUP has a strong focus on documents, and the activities in the Inception and Elaboration phases mainly concern the creation of diagrams and the writing of textual descriptions. UML notation and the Star UML software are used to support the design of the e-Tendering system. The e-Tendering design based on the RUP approach can benefit e-Tendering developers and researchers in the e-Tendering domain. In addition, this study also shows that RUP is one of the best system development methodologies and can be used as a research methodology in the Software Engineering domain for the secure design of an observed application. The methodology has been tested in various domains, such as simulation-based decision support, security requirement engineering, business modeling, and secure system requirements. In conclusion, these studies show that RUP is a good research methodology that can be adapted in any Software Engineering (SE) research domain that requires artifacts such as use case models, misuse case models, activity diagrams, and an initial class diagram to be generated from a list of requirements identified earlier by the SE researchers.

  16. Programming model for distributed intelligent systems

    NASA Technical Reports Server (NTRS)

    Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.

    1988-01-01

    A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.

  17. A Study of Factors Affecting the Adoption of E-Learning Systems Enabled with Cultural Contextual Features by Instructors in Jamaican Tertiary Institutions

    ERIC Educational Resources Information Center

    Rhoden, Niccardo S.

    2014-01-01

    Understanding the factors affecting the acceptance of E-Learning Systems Enabled with Cultural Contextual Features by Instructors in Jamaican Tertiary Institutions is an important topic that is relevant not only to educational institutions, but also to developers of software for online learning. The use of the unified theory of acceptance and use of…

  18. ELSI: A unified software interface for Kohn–Sham electronic structure solvers

    DOE PAGES

    Yu, Victor Wen-zhe; Corsetti, Fabiano; Garcia, Alberto; ...

    2017-09-15

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. As a result, comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.
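
    Reading note: the real ELSI API is not reproduced here, but the general pattern of a unified solver interface (one entry point, per-solver defaults, interchangeable backends) can be sketched in a few lines. All names below are invented for illustration, and the dense backend assumes SciPy is installed.

        # Hypothetical sketch of a unified eigensolver interface (not the ELSI API).
        import numpy as np

        def _dense_eigensolver(h, s, **kwargs):
            from scipy.linalg import eigh            # assumes SciPy is available
            return eigh(h, s, **kwargs)              # generalized problem H c = e S c

        _SOLVERS = {"dense": _dense_eigensolver}     # further backends could register here

        def solve_kohn_sham(h, s, solver="dense", **params):
            """Dispatch the eigenproblem to the chosen backend with its defaults."""
            try:
                backend = _SOLVERS[solver]
            except KeyError:
                raise ValueError(f"unknown solver '{solver}'")
            return backend(h, s, **params)

        # toy usage: a 2x2 generalized eigenproblem with S = identity
        h = np.array([[1.0, 0.2], [0.2, 2.0]])
        s = np.eye(2)
        eigenvalues, eigenvectors = solve_kohn_sham(h, s)
        print(eigenvalues)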

  19. ELSI: A unified software interface for Kohn-Sham electronic structure solvers

    NASA Astrophysics Data System (ADS)

    Yu, Victor Wen-zhe; Corsetti, Fabiano; García, Alberto; Huhn, William P.; Jacquelin, Mathias; Jia, Weile; Lange, Björn; Lin, Lin; Lu, Jianfeng; Mi, Wenhui; Seifitokaldani, Ali; Vázquez-Mayagoitia, Álvaro; Yang, Chao; Yang, Haizhao; Blum, Volker

    2018-01-01

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. Comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.

  20. ELSI: A unified software interface for Kohn–Sham electronic structure solvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Victor Wen-zhe; Corsetti, Fabiano; Garcia, Alberto

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. As a result, comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hsien-Hsin S

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, and introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implications for security, (5) Digital rights management, and (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  2. Mapping modern software process engineering techniques onto an HEP development environment

    NASA Astrophysics Data System (ADS)

    Wellisch, J. P.

    2003-04-01

    One of the most challenging issues faced in HEP in recent years is the question of how to capitalise on software development and maintenance experience in a continuous manner. To capitalise means in our context to evaluate and apply new process technologies as they arise, and to further evolve technologies already widely in use. It also implies the definition and adoption of standards. The CMS off-line software improvement effort aims at continual software quality improvement and continual improvement in the efficiency of the working environment, with the goal of facilitating great new physics. To achieve this, we followed a process improvement program based on ISO-15504 and the Rational Unified Process. This experiment in software process improvement in HEP has been progressing now for a period of 3 years. Taking previous experience from ATLAS and SPIDER into account, we used a soft approach of continuous change within the limits of the current culture to create de facto software process standards within the CMS off-line community, as the only viable route to a successful software process improvement program in HEP. We will present the CMS approach to software process improvement in this process R&D, and describe lessons learned and mistakes made. We will demonstrate the benefits gained and the current status of the software processes established in the CMS off-line software.

  3. Clinical, information and business process modeling to promote development of safe and flexible software.

    PubMed

    Liaw, Siaw-Teng; Deveny, Elizabeth; Morrison, Iain; Lewis, Bryn

    2006-09-01

    Using a factorial vignette survey and modeling methodology, we developed clinical and information models - incorporating evidence base, key concepts, relevant terms, decision-making and workflow needed to practice safely and effectively - to guide the development of an integrated rule-based knowledge module to support prescribing decisions in asthma. We identified workflows, decision-making factors, factor use, and clinician information requirements. The Unified Modeling Language (UML) and public domain software and knowledge engineering tools (e.g. Protégé) were used, with the Australian GP Data Model as the starting point for expressing information needs. A Web Services service-oriented architecture approach was adopted within which to express functional needs, and clinical processes and workflows were expressed in the Business Process Execution Language (BPEL). This formal analysis and modeling methodology to define and capture the process and logic of prescribing best practice in a reference implementation is fundamental to tackling deficiencies in prescribing decision support software.

  4. Comparison of BrainTool to other UML modeling and model transformation tools

    NASA Astrophysics Data System (ADS)

    Nikiforova, Oksana; Gusarovs, Konstantins

    2017-07-01

    Over the last 30 years, numerous model-based software generation systems have been offered to address problems with development productivity and the resulting software quality. CASE tools developed to date are advertised as having "complete code-generation capabilities". Nowadays the Object Management Group (OMG) makes similar claims with regard to Unified Modeling Language (UML) models at different levels of abstraction, asserting that software development automation using CASE tools enables a significant level of automation. Today's CASE tools usually offer a combination of several features, starting with a model editor and a model repository for the traditional ones, and ending, for the most advanced ones, with a code generator (which may use a scripting or domain-specific language (DSL)), a transformation tool to produce new artifacts from the manually created ones, and a transformation definition editor to define new transformations. The present paper contains the results of a comparison of CASE tools (mainly UML editors) against the level of automation they offer.

  5. Software for Data Analysis with Graphical Models

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.; Roy, H. Scott

    1994-01-01

    Probabilistic graphical models are being used widely in artificial intelligence and statistics, for instance, in diagnosis and expert systems, as a framework for representing and reasoning with probabilities and independencies. They come with corresponding algorithms for performing statistical inference. This offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper illustrates the framework with an example and then presents some basic techniques for the task: problem decomposition and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.
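
    To make one of the listed techniques concrete, the sketch below runs a tiny Gibbs sampler for a normal model with semi-conjugate priors; it is only an illustration of the kind of algorithm such a framework can generate from a graphical specification, not the software described above.

        # Gibbs sampler for y_i ~ Normal(mu, 1/tau), with priors mu ~ Normal(0, 1/tau0)
        # and tau ~ Gamma(a0, b0) (standard semi-conjugate conditionals; illustration only).
        import random

        def gibbs(y, iters=2000, a0=1.0, b0=1.0, tau0=1e-2):
            n, ybar = len(y), sum(y) / len(y)
            mu, tau = ybar, 1.0
            draws = []
            for _ in range(iters):
                prec = tau0 + n * tau                          # posterior precision of mu
                mu = random.gauss(n * tau * ybar / prec, prec ** -0.5)
                shape = a0 + 0.5 * n                           # conditional for tau
                rate = b0 + 0.5 * sum((v - mu) ** 2 for v in y)
                tau = random.gammavariate(shape, 1.0 / rate)   # scale = 1/rate
                draws.append((mu, tau))
            return draws

        draws = gibbs([1.2, 0.8, 1.5, 1.1, 0.9])
        print(sum(m for m, _ in draws) / len(draws))           # posterior mean of mu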

  6. Software for Fermat's Principle and Lenses

    ERIC Educational Resources Information Center

    Mihas, Pavlos

    2012-01-01

    Fermat's principle is considered as a unifying concept. It is usually presented erroneously as a "least time principle". In this paper we present some software that shows cases of maxima and minima and the application of Fermat's principle to the problem of focusing in lenses. (Contains 12 figures.)

  7. Representing nursing guideline with unified modeling language to facilitate development of a computer system: a case study.

    PubMed

    Choi, Jeeyae; Choi, Jeungok E

    2014-01-01

    To provide the best recommendations at the point of care, guidelines have been implemented in computer systems. As a prerequisite, guidelines are translated into a computer-interpretable guideline format. Since there are no specific tools to translate nursing guidelines, only a few nursing guidelines have been translated and implemented in computer systems. The Unified Modeling Language (UML) is a software modeling language known to represent end-users' perspectives well and accurately, owing to its expressive characteristics. In order to facilitate the development of computer systems for nurses' use, the UML was used to translate a paper-based nursing guideline, and its ease of use and usefulness were tested through a case study of a genetic counseling guideline. The UML was found to be a useful tool for nurse informaticians and a sufficient tool to model a guideline in a computer program.

  8. Object-oriented software design in semiautomatic building extraction

    NASA Astrophysics Data System (ADS)

    Guelch, Eberhard; Mueller, Hardo

    1997-08-01

    Developing a system for semiautomatic building acquisition is a complex process that requires constant integration and updating of software modules and user interfaces. To facilitate these processes we apply an object-oriented design not only to the data but also to the software involved. We use the Unified Modeling Language (UML) to describe the object-oriented modeling of the system at different levels of detail. We distinguish between use cases from the user's point of view, which represent a sequence of actions yielding an observable result, and use cases for programmers, who can use the system as a class library to integrate the acquisition modules into their own software. The structure of the system is based on the model-view-controller (MVC) design pattern. An example from the integration of automated texture extraction for the visualization of results demonstrates the feasibility of this approach.

  9. Orthographic Software Modelling: A Novel Approach to View-Based Software Engineering

    NASA Astrophysics Data System (ADS)

    Atkinson, Colin

    The need to support multiple views of complex software architectures, each capturing a different aspect of the system under development, has been recognized for a long time. Even the very first object-oriented analysis/design methods such as the Booch method and OMT supported a number of different diagram types (e.g. structural, behavioral, operational), and subsequent methods such as Fusion, Kruchten's 4+1 views and the Rational Unified Process (RUP) have added many more views over time. Today's leading modeling languages, such as the UML and SysML, are also oriented towards supporting different views (i.e. diagram types), each able to portray a different facet of a system's architecture. More recently, so-called enterprise architecture frameworks such as the Zachman Framework, TOGAF and RM-ODP have become popular. These add a whole set of new non-functional views to the views typically emphasized in traditional software engineering environments.

  10. Software for integrated manufacturing systems, part 2

    NASA Technical Reports Server (NTRS)

    Volz, R. A.; Naylor, A. W.

    1987-01-01

    Part 1 presented an overview of the unified approach to manufacturing software. The specific characteristics of the approach that allow it to realize the goals of reduced cost, increased reliability and increased flexibility are considered. Why the blending of a components view, distributed languages, generics and formal models is important, why each individual part of this approach is essential, and why each component will typically have each of these parts are examined. An example of a specification for a real material handling system is presented using the approach and compared with the standard interface specification given by the manufacturer. Use of the component in a distributed manufacturing system is then compared with use of the traditional specification with a more traditional approach to designing the system. An overview is also provided of the underlying mechanisms used for implementing distributed manufacturing systems using the unified software/hardware component approach.

  11. Using Docker Compose for the Simple Deployment of an Integrated Drug Target Screening Platform.

    PubMed

    List, Markus

    2017-06-10

    Docker virtualization allows software tools to be executed in an isolated and controlled environment referred to as a container. In Docker containers, dependencies are provided exactly as intended by the developer and, consequently, they simplify the distribution of scientific software and foster reproducible research. The Docker paradigm is that each container encapsulates one particular software tool. However, to analyze complex biomedical data sets, it is often necessary to combine several software tools into elaborate workflows. To address this challenge, several Docker containers need to be instantiated and properly integrated, which complicates the software deployment process unnecessarily. Here, we demonstrate how an extension to Docker, Docker Compose, can be used to mitigate these problems by providing a unified setup routine that deploys several tools in an integrated fashion. We demonstrate the power of this approach by the example of a Docker Compose setup for a drug target screening platform consisting of five integrated web applications and shared infrastructure, deployable in just two lines of code.

  12. Design Considerations of a Virtual Laboratory for Advanced X-ray Sources

    NASA Astrophysics Data System (ADS)

    Luginsland, J. W.; Frese, M. H.; Frese, S. D.; Watrous, J. J.; Heileman, G. L.

    2004-11-01

    The field of scientific computation has greatly advanced in the last few years, resulting in the ability to perform complex computer simulations that can predict the performance of real-world experiments in a number of fields of study. Among the forces driving this new computational capability is the advent of parallel algorithms, allowing calculations in three-dimensional space with realistic time scales. Electromagnetic radiation sources driven by high-voltage, high-current electron beams offer an area to further push the state-of-the-art in high fidelity, first-principles simulation tools. The physics of these x-ray sources combines kinetic plasma physics (electron beams) with dense fluid-like plasma physics (anode plasmas) and x-ray generation (bremsstrahlung). There are a number of mature techniques and software packages for dealing with the individual aspects of these sources, such as Particle-In-Cell (PIC), Magneto-Hydrodynamics (MHD), and radiation transport codes. The current effort is focused on developing an object-oriented software environment using the Rational Unified Process and the Unified Modeling Language (UML) to provide a framework where multiple 3D parallel physics packages, such as a PIC code (ICEPIC), an MHD code (MACH), and an x-ray transport code (ITS), can co-exist in a system-of-systems approach to modeling advanced x-ray sources. Initial software design and assessments of the various physics algorithms' fidelity will be presented.

  13. Monitoring the performance of the next Climate Forecast System version 3, throughout its development stage at EMC/NCEP

    NASA Astrophysics Data System (ADS)

    Peña, M.; Saha, S.; Wu, X.; Wang, J.; Tripp, P.; Moorthi, S.; Bhattacharjee, P.

    2016-12-01

    The next version of the operational Climate Forecast System (version 3, CFSv3) will be a fully coupled six-component system with diverse applications to earth system modeling, including weather and climate predictions. This system will couple the earth's atmosphere, land, ocean, sea-ice, waves and aerosols for both data assimilation and modeling. It will also use the NOAA Environmental Modeling System (NEMS) software superstructure to couple these components. The CFSv3 is part of the next Unified Global Coupled System (UGCS), which will unify the global prediction systems that are now operational at NCEP. The UGCS is being developed through the efforts of dedicated research and engineering teams and through coordination across many CPO/MAPP and NGGPS groups. During this development phase, the UGCS is being tested for seasonal purposes and undergoes frequent revisions. Each new revision is evaluated to quickly discover, isolate and solve problems that negatively impact its performance. In the UGCS-seasonal model, components (e.g., ocean, sea-ice, atmosphere, etc.) are coupled through a NEMS-based "mediator". In this numerical infrastructure, model diagnostics and forecast validation are carried out, both component by component and as a whole. The next stage, model optimization, will require enhanced performance diagnostics tools to help prioritize areas of numerical improvement. After the technical development of the UGCS-seasonal is completed, it will become the first realization of the CFSv3. All future development of this system will be carried out by the climate team at NCEP, in scientific collaboration with the groups that developed the individual components, as well as the climate community. A unique challenge in evaluating this unified weather-climate system is the large number of variables, which evolve over a wide range of temporal and spatial scales. A small set of performance measures and scorecard displays are being created, and collaboration and software contributions from research and operational centers are being incorporated. A status of the CFSv3/UGCS-seasonal development and examples of its performance and measuring tools will be presented.

  14. Modeling stroke rehabilitation processes using the Unified Modeling Language (UML).

    PubMed

    Ferrante, Simona; Bonacina, Stefano; Pinciroli, Francesco

    2013-10-01

    In organising and providing rehabilitation procedures for stroke patients, the usual need for many refinements makes it inappropriate to attempt rigid standardisation, but greater detail is required concerning workflow. The aim of this study was to build a model of the post-stroke rehabilitation process. The model, implemented in the Unified Modeling Language, was grounded on international guidelines and refined following the clinical pathway adopted at local level by a specialized rehabilitation centre. The model describes the organisation of the rehabilitation delivery and it facilitates the monitoring of recovery during the process. Indeed, system software was developed and tested to support clinicians in the digital administration of clinical scales. The model's flexibility assures easy updating after process evolution. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. BGen: A UML Behavior Network Generator Tool

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry; Reder, Leonard J.; Balian, Harry

    2010-01-01

    BGen software was designed for autogeneration of code based on a graphical representation of a behavior network used for controlling automatic vehicles. A common format used for describing a behavior network, such as that used in the JPL-developed behavior-based control system, CARACaS ["Control Architecture for Robotic Agent Command and Sensing" (NPO-43635), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 40] includes a graph with sensory inputs flowing through the behaviors in order to generate the signals for the actuators that drive and steer the vehicle. A computer program to translate Unified Modeling Language (UML) Freeform Implementation Diagrams into a legacy C implementation of Behavior Network has been developed in order to simplify the development of C-code for behavior-based control systems. UML is a popular standard developed by the Object Management Group (OMG) to model software architectures graphically. The C implementation of a Behavior Network is functioning as a decision tree.

  16. Development of a General Principle Solution for ISOagriNET Compliant Networking System Components in Animal Husbandry

    NASA Astrophysics Data System (ADS)

    Kuhlmann, Arne; Herd, Daniel; Röβler, Benjamin; Gallmann, Eva; Jungbluth, Thomas

    In pig production, software and electronic systems are widely used for process control and management. Unfortunately, most devices on farms are proprietary solutions working autonomously. To unify data communication of devices in agricultural husbandry, the international standard ISOagriNET (ISO 17532:2007) was developed. It defines data formats and exchange protocols to link up devices like climate controls, feeding systems and sensors, but also management software. The aim of the research project "Information and Data Collection in Livestock Systems" is to develop an ISOagriNET compliant IT system, a so-called Farming Cell. It integrates all electronic components to acquire the available data and information for pig fattening. In this way, an additional benefit to humans, animals and the environment regarding process control and documentation can be generated. Developing the Farming Cell is very complex; in particular, it is difficult and time-consuming to integrate hardware and software from various vendors into an ISOagriNET compliant IT system. This ISOagriNET prototype shows, as a test environment, the potential of this new standard.

  17. Application of NX Siemens PLM software in educational process in preparing students of engineering branch

    NASA Astrophysics Data System (ADS)

    Sadchikova, G. M.

    2017-01-01

    This article discusses the results of the introduction of the computer-aided design system NX by Siemens PLM Software into the classes of a higher education institution. The necessity of applying modern information technologies in teaching engineering students, and the selection of a software product, are substantiated. The author describes the stages of studying the software modules in relation to specific courses and considers the features of the NX software which require the creation of standard and unified product databases. The article also gives examples of research carried out by students with the various software modules.

  18. Automatic Debugging Support for UML Designs

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Swanson, Keith (Technical Monitor)

    2001-01-01

    Design of large software systems requires rigorous application of software engineering methods covering all phases of the software process. Debugging during the early design phases is extremely important, because late bug-fixes are expensive. In this paper, we describe an approach which facilitates debugging of UML requirements and designs. The Unified Modeling Language (UML) is a set of notations for object-oriented design of a software system. We have developed an algorithm which translates requirement specifications in the form of annotated sequence diagrams into structured statecharts. This algorithm detects conflicts between sequence diagrams and inconsistencies in the domain knowledge. After synthesizing statecharts from sequence diagrams, these statecharts usually are subject to manual modification and refinement. By using the "backward" direction of our synthesis algorithm, we are able to map modifications made to the statechart back into the requirements (sequence diagrams) and check for conflicts there. Fed back to the user, conflicts detected by our algorithm are the basis for deductive debugging of requirements and domain theory in very early development stages. Our approach allows us to generate explanations of why there is a conflict and which parts of the specifications are affected.

  19. ProteoWizard: open source software for rapid proteomics tools development.

    PubMed

    Kessner, Darren; Chambers, Matt; Burke, Robert; Agus, David; Mallick, Parag

    2008-11-01

    The ProteoWizard software project provides a modular and extensible set of open-source, cross-platform tools and libraries. The tools perform proteomics data analyses; the libraries enable rapid tool creation by providing a robust, pluggable development framework that simplifies and unifies data file access, and performs standard proteomics and LCMS dataset computations. The library contains readers and writers of the mzML data format, which has been written using modern C++ techniques and design principles and supports a variety of platforms with native compilers. The software has been specifically released under the Apache v2 license to ensure it can be used in both academic and commercial projects. In addition to the library, we also introduce a rapidly growing set of companion tools whose implementation helps to illustrate the simplicity of developing applications on top of the ProteoWizard library. Cross-platform software that compiles using native compilers (i.e. GCC on Linux, MSVC on Windows and XCode on OSX) is available for download free of charge, at http://proteowizard.sourceforge.net. This website also provides code examples, and documentation. It is our hope the ProteoWizard project will become a standard platform for proteomics development; consequently, code use, contribution and further development are strongly encouraged.

  20. Theoretical Foundation of Copernicus: A Unified System for Trajectory Design and Optimization

    NASA Technical Reports Server (NTRS)

    Ocampo, Cesar; Senent, Juan S.; Williams, Jacob

    2010-01-01

    The fundamental methods are described for the general spacecraft trajectory design and optimization software system called Copernicus. The methods rely on a unified framework that is used to model, design, and optimize spacecraft trajectories that may operate in complex gravitational force fields, use multiple propulsion systems, and involve multiple spacecraft. The trajectory model, with its associated equations of motion and maneuver models, are discussed.

  1. Software system architecture for corporate user support

    NASA Astrophysics Data System (ADS)

    Sukhopluyeva, V. S.; Kuznetsov, D. Y.

    2017-01-01

    In this article, several existing ready-to-use HelpDesk solutions are reviewed and their advantages and disadvantages are identified. The architecture of a software solution for a corporate user support system is presented in the form of use case, state, and component diagrams described using the Unified Modeling Language (UML).

  2. Negotiating Software Agreements: Avoid Contractual Mishaps and Get the Biggest Bang for Your Buck

    ERIC Educational Resources Information Center

    Riley, Sheila

    2006-01-01

    Purchasing software license and service agreements can be daunting for any district. Greg Lindner, director of information and technology services for the Elk Grove Unified School District in California, and Steve Midgley, program manager at the Stupski Foundation, provided several tips on contract negotiation. This article presents the tips…

  3. Enhancing the Portability of GBT Data

    NASA Astrophysics Data System (ADS)

    Cowan, A. W.; Radziwill, N.; Fleming, D.; Sessoms, E.

    2003-12-01

    The Green Bank Telescope currently produces its raw data as a suite of FITS files, which are then consolidated and pre-processed before being packaged into a Measurement Set (the data structure understood by AIPS++). The separation of data adds to the complexity of data analysis, and we would like to reduce the artificial complexity involved in reading the data. Also, in order to support a broader cross-section of observers' backgrounds and interests, we would like to begin supporting data reduction packages in addition to AIPS++. Therefore, GBT data must be readily accessible to IDL, CLASS, and other data reduction packages, as well as any software that observers write themselves. In pursuit of this goal, we are currently developing a unified FITS data product that contains the entirety of the data and can be readily assimilated into multiple software packages. During the summer of 2003, prototyping exercises were initiated based on the SDFITS convention, which have led to an alpha-test period now in progress. This poster discusses the process of generating the unified FITS data product and details the current status of the project. Thanks to the National Science Foundation REU program for their financial support.

  4. Software architecture of the Magdalena Ridge Observatory Interferometer

    NASA Astrophysics Data System (ADS)

    Farris, Allen; Klinglesmith, Dan; Seamons, John; Torres, Nicolas; Buscher, David; Young, John

    2010-07-01

    Merging software from 36 independent work packages into a coherent, unified software system with a lifespan of twenty years is the challenge faced by the Magdalena Ridge Observatory Interferometer (MROI). We solve this problem by using standardized interface software automatically generated from simple highlevel descriptions of these systems, relying only on Linux, GNU, and POSIX without complex software such as CORBA. This approach, based on gigabit Ethernet with a TCP/IP protocol, provides the flexibility to integrate and manage diverse, independent systems using a centralized supervisory system that provides a database manager, data collectors, fault handling, and an operator interface.

  5. Indiva: a middleware for managing distributed media environment

    NASA Astrophysics Data System (ADS)

    Ooi, Wei-Tsang; Pletcher, Peter; Rowe, Lawrence A.

    2003-12-01

    This paper presents a unified set of abstractions and operations for hardware devices, software processes, and media data in a distributed audio and video environment. These abstractions, which are provided through a middleware layer called Indiva, use a file system metaphor to access resources and high-level commands to simplify the development of Internet webcast and distributed collaboration control applications. The design and implementation of Indiva are described and examples are presented to illustrate the usefulness of the abstractions.

  6. A Unified Overset Grid Generation Graphical Interface and New Concepts on Automatic Gridding Around Surface Discontinuities

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Akien, Edwin (Technical Monitor)

    2002-01-01

    For many years, generation of overset grids for complex configurations has required the use of a number of different independently developed software utilities. Results created by each step were then visualized using a separate visualization tool before moving on to the next. A new software tool called OVERGRID was developed which allows the user to perform all the grid generation steps and visualization under one environment. OVERGRID provides grid diagnostic functions such as surface tangent and normal checks as well as grid manipulation functions such as extraction, extrapolation, concatenation, redistribution, smoothing, and projection. Moreover, it also contains hyperbolic surface and volume grid generation modules that are specifically suited for overset grid generation. It is the first time that such a unified interface existed for the creation of overset grids for complex geometries. New concepts on automatic overset surface grid generation around surface discontinuities will also be briefly presented. Special control curves on the surface such as intersection curves, sharp edges, open boundaries, are called seam curves. The seam curves are first automatically extracted from a multiple panel network description of the surface. Points where three or more seam curves meet are automatically identified and are called seam corners. Seam corner surface grids are automatically generated using a singular axis topology. Hyperbolic surface grids are then grown from the seam curves that are automatically trimmed away from the seam corners.

  7. Using a Foundational Ontology for Reengineering a Software Enterprise Ontology

    NASA Astrophysics Data System (ADS)

    Perini Barcellos, Monalessa; de Almeida Falbo, Ricardo

    The knowledge about software organizations is considerably relevant to software engineers. The use of a common vocabulary for representing the useful knowledge about software organizations involved in software projects is important for several reasons, such as to support knowledge reuse and to allow communication and interoperability between tools. Domain ontologies can be used to define a common vocabulary for sharing and reuse of knowledge about some domain. Foundational ontologies can be used for evaluating and re-designing domain ontologies, giving them real-world semantics. This paper presents an evaluation of a Software Enterprise Ontology that was reengineered using the Unified Foundational Ontology (UFO) as a basis.

  8. Integrating Software-Architecture-Centric Methods into the Rational Unified Process

    DTIC Science & Technology

    2004-07-01

    Discusses how scenarios produced in a Quality Attribute Workshop (QAW) can be used by a software architecture design method, placing the QAW in a life-cycle context, and covers architecture design with the Attribute-Driven Design (ADD) method (CMU/SEI-2004-TR-011).

  9. A development framework for semantically interoperable health information systems.

    PubMed

    Lopez, Diego M; Blobel, Bernd G M E

    2009-02-01

    Semantic interoperability is a basic challenge to be met for new generations of distributed, communicating and co-operating health information systems (HIS) enabling shared care and e-Health. Analysis, design, implementation and maintenance of such systems and intrinsic architectures have to follow a unified development methodology. The Generic Component Model (GCM) is used as a framework for modeling any system to evaluate and harmonize state of the art architecture development approaches and standards for health information systems as well as to derive a coherent architecture development framework for sustainable, semantically interoperable HIS and their components. The proposed methodology is based on the Rational Unified Process (RUP), taking advantage of its flexibility to be configured for integrating other architectural approaches such as Service-Oriented Architecture (SOA), Model-Driven Architecture (MDA), ISO 10746, and HL7 Development Framework (HDF). Existing architectural approaches have been analyzed, compared and finally harmonized towards an architecture development framework for advanced health information systems. Starting with the requirements for semantic interoperability derived from paradigm changes for health information systems, and supported in formal software process engineering methods, an appropriate development framework for semantically interoperable HIS has been provided. The usability of the framework has been exemplified in a public health scenario.

  10. From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Jong, Wibe A.; Walker, Andrew M.; Hanwell, Marcus D.

    Background: Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper the generation of semantically rich data from the NWChem computational chemistry software is discussed within the Chemical Markup Language (CML) framework. Results: The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files used by the computational chemistry software. Conclusions: The production of CML compliant XML files for the computational chemistry software NWChem can be relatively easily accomplished using the FoX library. A unified computational chemistry or CompChem convention and dictionary needs to be developed through a community-based effort. The long-term goal is to enable a researcher to do Google-style chemistry and physics searches.
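
    For orientation, the fragment below sketches what semantically tagged, CML-style output can look like, using only the Python standard library. It is not the FoX library described above, and the dictionary references shown are invented placeholders rather than terms from an agreed CompChem convention.

        # Illustrative CML-style XML emission (placeholder dictRef terms).
        import xml.etree.ElementTree as ET

        module = ET.Element("module", {"title": "scf-energy"})
        prop = ET.SubElement(module, "property",
                             {"dictRef": "compchem:totalEnergy"})   # hypothetical term
        scalar = ET.SubElement(prop, "scalar",
                               {"units": "units:hartree", "dataType": "xsd:double"})
        scalar.text = "-76.026760"

        print(ET.tostring(module, encoding="unicode"))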

  11. miRMaid: a unified programming interface for microRNA data resources

    PubMed Central

    2010-01-01

    Background: MicroRNAs (miRNAs) are endogenous small RNAs that play a key role in post-transcriptional regulation of gene expression in animals and plants. The number of known miRNAs has increased rapidly over the years. The current release (version 14.0) of miRBase, the central online repository for miRNA annotation, comprises over 10,000 miRNA precursors from 115 different species. Furthermore, a large number of decentralized online resources are now available, each contributing important miRNA annotation and information. Results: We have developed a software framework, designated here as miRMaid, with the goal of integrating miRNA data resources in a uniform web service interface that can be accessed and queried by researchers and, most importantly, by computers. miRMaid is built around data from miRBase and is designed to follow the official miRBase data releases. It exposes miRBase data as inter-connected web services. Third-party miRNA data resources can be modularly integrated as miRMaid plugins or they can loosely couple with miRMaid as individual entities in the World Wide Web. miRMaid is available as a public web service but is also easily installed as a local application. The software framework is freely available under the LGPL open source license for academic and commercial use. Conclusion: miRMaid is an intuitive and modular software platform designed to unify miRBase and independent miRNA data resources. It enables miRNA researchers to computationally address complex questions involving the multitude of miRNA data resources. Furthermore, miRMaid constitutes a basic framework for further programming in which microRNA-interested bioinformaticians can readily develop their own tools and data sources. PMID:20074352
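
    The pattern of "web services queried by computers" described above can be illustrated with a generic REST client; the host, path and response fields below are placeholders, since the live miRMaid endpoints may differ or no longer be available.

        # Generic sketch of querying a REST-style miRNA resource (placeholder URL).
        import json
        import urllib.request

        def fetch_json(url):
            with urllib.request.urlopen(url, timeout=10) as response:
                return json.loads(response.read().decode("utf-8"))

        if __name__ == "__main__":
            url = "http://example.org/precursors/hsa-mir-21.json"   # placeholder resource
            try:
                record = fetch_json(url)
                print(record.get("name"), record.get("sequence"))   # assumed field names
            except OSError as err:
                print("request failed:", err)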

  12. Executive control systems in the engineering design environment. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hurst, P. W.

    1985-01-01

    An executive control system (ECS) is a software structure for unifying various application codes into a comprehensive system. It provides a library of applications, a uniform access method through a central user interface, and a data management facility. A survey of twenty-four executive control systems designed to unify various CAD/CAE applications for use in diverse engineering design environments within government and industry was conducted. The goals of this research were to establish system requirements, to survey state-of-the-art architectural design approaches, and to provide an overview of the historical evolution of these systems. Foundations for design are presented and include environmental settings, system requirements, major architectural components, and a system classification scheme based on knowledge of the supported engineering domain(s). An overview of the design approaches used in developing the major architectural components of an ECS is presented, with examples taken from the surveyed systems. Attention is drawn to four major areas of ECS development: interdisciplinary usage; standardization; knowledge utilization; and computer science technology transfer.

  13. Preface to FP-UML 2009

    NASA Astrophysics Data System (ADS)

    Trujillo, Juan; Kim, Dae-Kyoo

    The Unified Modeling Language (UML) has been widely accepted as the standard object-oriented (OO) modeling language for modeling various aspects of software and information systems. The UML is an extensible language, in the sense that it provides mechanisms to introduce new elements for specific domains if necessary, such as web applications, database applications, business modeling, software development processes, and data warehouses. Furthermore, the latest version, UML 2.0, has become even bigger and more complicated, with more diagrams added for good reasons. Although UML provides different diagrams for modeling different aspects of a software system, not all of them need to be applied in most cases. Therefore, heuristics, design guidelines, and lessons learned from experience are extremely important for the effective use of UML 2.0 and for avoiding unnecessary complication. Approaches are also needed to better manage UML 2.0 and its extensions so that they do not become too complex to manage.

  14. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software, for solving large-scale acoustic problems arising from the unified framework of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective processor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetrical and unsymmetrical) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.
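
    As a toy serial illustration of the domain decomposition idea (splitting the unknowns into subdomains and solving local problems to drive a global iteration), the sketch below applies a two-block Jacobi iteration to a 1D Poisson matrix. It is not the parallel finite element software described above.

        # Two-subdomain block-Jacobi (non-overlapping additive Schwarz) iteration on
        # a 1D Poisson matrix; illustration of the DD idea only.
        import numpy as np

        n = 40
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # 1D Laplacian
        b = np.ones(n)
        blocks = [slice(0, n // 2), slice(n // 2, n)]           # two "subdomains"

        x = np.zeros(n)
        for _ in range(500):
            r = b - A @ x                                       # global residual
            for blk in blocks:
                x[blk] += np.linalg.solve(A[blk, blk], r[blk])  # local subdomain solves
            if np.linalg.norm(b - A @ x) < 1e-8:
                break

        print("final residual:", np.linalg.norm(b - A @ x))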

  15. Measuring health care process quality with software quality measures.

    PubMed

    Yildiz, Ozkan; Demirörs, Onur

    2012-01-01

    Existing quality models focus on specific diseases, clinics or clinical areas. Although they contain structure, process, or output type measures, there is no model that measures the quality of health care processes comprehensively. In addition, because overall process quality is not measured, hospitals cannot compare the quality of their processes internally and externally. To address these problems, a new model was developed from software quality measures. We adopted the ISO/IEC 9126 software quality standard for health care processes. Then, JCIAS (Joint Commission International Accreditation Standards for Hospitals) measurable elements were added to the model scope to unify functional requirements. Measurement results for the assessment (diagnosing) process are provided in this paper. After the application, it was concluded that the model determines weak and strong aspects of the processes, gives a more detailed picture of process quality, and provides quantifiable information that hospitals can use to compare their processes with those of multiple organizations.

  16. Unified web-based network management based on distributed object orientated software agents

    NASA Astrophysics Data System (ADS)

    Djalalian, Amir; Mukhtar, Rami; Zukerman, Moshe

    2002-09-01

    This paper presents an architecture that provides a unified web interface to managed network devices that support CORBA, OSI or Internet-based network management protocols. A client gains access to managed devices through a web browser, which is used to issue management operations and receive event notifications. The proposed architecture is compatible with both the OSI Management Reference Model and CORBA. The steps required for designing the building blocks of such an architecture are identified.

  17. Statistical Theory for the "RCT-YES" Software: Design-Based Causal Inference for RCTs. NCEE 2015-4011

    ERIC Educational Resources Information Center

    Schochet, Peter Z.

    2015-01-01

    This report presents the statistical theory underlying the "RCT-YES" software that estimates and reports impacts for RCTs for a wide range of designs used in social policy research. The report discusses a unified, non-parametric design-based approach for impact estimation using the building blocks of the Neyman-Rubin-Holland causal…
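
    The basic design-based building block underlying such estimators (a difference in means with the conservative Neyman variance) is easy to state in code; the sketch below is only an illustration of that building block, not the RCT-YES software.

        # Difference-in-means impact estimate with the Neyman standard error.
        def impact_estimate(y_treat, y_control):
            def mean(v):
                return sum(v) / len(v)
            def var(v):                                   # sample variance
                m = mean(v)
                return sum((x - m) ** 2 for x in v) / (len(v) - 1)
            effect = mean(y_treat) - mean(y_control)
            se = (var(y_treat) / len(y_treat) + var(y_control) / len(y_control)) ** 0.5
            return effect, se

        effect, se = impact_estimate([2.1, 2.5, 1.9, 2.8], [1.7, 2.0, 1.6, 1.9])
        print(f"impact = {effect:.3f}, std. error = {se:.3f}")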

  18. MISSION: Mission and Safety Critical Support Environment. Executive overview

    NASA Technical Reports Server (NTRS)

    Mckay, Charles; Atkinson, Colin

    1992-01-01

    For mission and safety critical systems it is necessary to: improve definition, evolution and sustenance techniques; lower development and maintenance costs; support safe, timely and affordable system modifications; and support fault tolerance and survivability. The goal of the MISSION project is to lay the foundation for a new generation of integrated systems software providing a unified infrastructure for mission and safety critical applications and systems. This will involve the definition of a common, modular target architecture and a supporting infrastructure.

  19. Designing Control System Application Software for Change

    NASA Technical Reports Server (NTRS)

    Boulanger, Richard

    2001-01-01

    The Unified Modeling Language (UML) was used to design the Environmental Systems Test Stand (ESTS) control system software. The UML was chosen for its ability to facilitate a clear dialog between software designer and customer, from which requirements are discovered and documented in a manner that transposes directly to program objects. Applying the UML to control system software design has resulted in a baseline set of documents from which change, and the effort of that change, can be accurately measured. As the Environmental Systems Test Stand evolves, accurate estimates of the time and effort required to change the control system software will be made. Accurate quantification of the cost of software change can be made before implementation, improving schedule and budget accuracy.

  20. OpenSQUID: A Flexible Open-Source Software Framework for the Control of SQUID Electronics

    DOE PAGES

    Jaeckel, Felix T.; Lafler, Randy J.; Boyd, S. T. P.

    2013-02-06

    Commercially available computer-controlled SQUID electronics are usually delivered with software providing a basic user interface for adjustment of SQUID tuning parameters, such as bias current, flux offset, and feedback loop settings. However, in a research context it would often be useful to be able to modify this code and/or to have full control over all these parameters from researcher-written software. In the case of the STAR Cryoelectronics PCI/PFL family of SQUID control electronics, the supplied software contains modules for automatic tuning and noise characterization, but does not provide an interface for user code. On the other hand, the Magnicon SQUIDViewer software package includes a public application programming interface (API), but lacks auto-tuning and noise characterization features. To overcome these and other limitations, we are developing an "open-source" framework for controlling SQUID electronics which should provide maximal interoperability with user software, a unified user interface for electronics from different manufacturers, and a flexible platform for the rapid development of customized SQUID auto-tuning and other advanced features. We have completed a first implementation for the STAR Cryoelectronics hardware and have made the source code for this ongoing project available to the research community on SourceForge (http://opensquid.sourceforge.net) under the GNU public license.

  1. Development of Matlab GUI educational software to assist a laboratory of physical optics

    NASA Astrophysics Data System (ADS)

    Fernández, Elena; Fuentes, Rosa; García, Celia; Pascual, Inmaculada

    2014-07-01

    Physical optics is one of the subjects in the Degree in Optics and Optometry in Spanish universities. The students who come to this degree often have difficulties understanding subjects related to physics. For this reason, the aim of this work is to develop optics simulation software that provides a virtual laboratory for studying different aspects of physical optics phenomena. This software lets optics undergraduates simulate many optical systems for a better understanding of the practical competences associated with the theoretical concepts studied in class. This interactive environment unifies the information provided in the laboratory manual, offers visualization of the physical phenomena, and allows users to vary the values of the parameters that come into play in order to check their effect. This virtual tool is therefore the perfect complement for learning more about the practices developed in the laboratory. The software is developed using Matlab's facilities for generating Graphical User Interfaces (GUIs). A set of knobs, buttons and handles is included in the GUIs in order to control the parameters of the different physical phenomena, and graphics are inserted in the GUIs to show the behavior of those phenomena. Specifically, by using this software, the student is able to analyze the behaviour of the transmittance and reflectance of the TE and TM modes, of polarized light through Malus's law, and of the degree of polarization.
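
    One of the phenomena mentioned above, Malus's law (I = I0 cos^2 θ), is simple enough to illustrate outside the Matlab GUIs; the short sketch below is a plain-script illustration only, not part of the software described.

        # Malus's law: transmitted intensity through an ideal polarizer.
        import math

        def malus(intensity_in, theta_deg):
            theta = math.radians(theta_deg)
            return intensity_in * math.cos(theta) ** 2

        for angle in (0, 30, 45, 60, 90):
            print(f"{angle:3d} deg -> transmitted intensity {malus(1.0, angle):.3f}")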

  2. A complete solution classification and unified algorithmic treatment for the one- and two-step asymmetric S-transverse mass event scale statistic

    NASA Astrophysics Data System (ADS)

    Walker, Joel W.

    2014-08-01

    The MT2, or "s-transverse mass", statistic was developed to associate a parent mass scale to a missing transverse energy signature, given that escaping particles are generally expected in pairs, while collider experiments are sensitive to just a single transverse momentum vector sum. This document focuses on the generalized extension of that statistic to asymmetric one- and two-step decay chains, with arbitrary child particle masses and upstream missing transverse momentum. It provides a unified theoretical formulation, complete solution classification, taxonomy of critical points, and technical algorithmic prescription for treatment of the event scale. An implementation of the described algorithm is available for download, and is also a deployable component of the author's selection cut software package AEACuS (Algorithmic Event Arbiter and Cut Selector). Appendices address combinatoric event assembly, algorithm validation, and a complete pseudocode.
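
    For reference, the symmetric form of the statistic (a common trial child mass; the asymmetric generalizations discussed above let these ingredients differ between the two decay chains) can be written as

        $$ M_{T2}^2 \;=\; \min_{\vec{q}_{T}^{\,(1)} + \vec{q}_{T}^{\,(2)} \,=\, \vec{p}_{T}^{\,\mathrm{miss}}} \; \max\Big\{ M_T^2\big(\vec{p}_{T}^{\,(1)}, \vec{q}_{T}^{\,(1)}\big),\; M_T^2\big(\vec{p}_{T}^{\,(2)}, \vec{q}_{T}^{\,(2)}\big) \Big\}, \qquad
        M_T^2\big(\vec{p}_{T}, \vec{q}_{T}\big) \;=\; m_p^2 + m_\chi^2 + 2\big(E_T^{p} E_T^{q} - \vec{p}_{T}\cdot\vec{q}_{T}\big), $$

    where p labels a visible decay product, q is the trial transverse momentum assigned to the corresponding invisible particle of mass m_chi, and E_T = sqrt(|p_T|^2 + m^2).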

  3. ArrayNinja: An Open Source Platform for Unified Planning and Analysis of Microarray Experiments.

    PubMed

    Dickson, B M; Cornett, E M; Ramjan, Z; Rothbart, S B

    2016-01-01

    Microarray-based proteomic platforms have emerged as valuable tools for studying various aspects of protein function, particularly in the field of chromatin biochemistry. Microarray technology itself is largely unrestricted in regard to printable material and platform design, and efficient multidimensional optimization of assay parameters requires fluidity in the design and analysis of custom print layouts. This motivates the need for streamlined software infrastructure that facilitates the combined planning and analysis of custom microarray experiments. To this end, we have developed ArrayNinja as a portable, open source, and interactive application that unifies the planning and visualization of microarray experiments and provides maximum flexibility to end users. Array experiments can be planned, stored to a private database, and merged with the imaged results for a level of data interaction and centralization that is not currently attainable with available microarray informatics tools. © 2016 Elsevier Inc. All rights reserved.

  4. GCS component development cycle

    NASA Astrophysics Data System (ADS)

    Rodríguez, Jose A.; Macias, Rosa; Molgo, Jordi; Guerra, Dailos; Pi, Marti

    2012-09-01

    The GTC is an optical-infrared 10-meter segmented-mirror telescope at the ORM observatory in the Canary Islands (Spain). First light was on 13 July 2007, and it has been in the operation phase since then. The GTC Control System (GCS) is a distributed object- and component-oriented system based on RT-CORBA, and it is responsible for the management and operation of the telescope, including its instrumentation. GCS has used the Rational Unified Process (RUP) in its development. RUP is an iterative software development process framework. After analysing (use cases) and designing (UML) any of the GCS subsystems, an initial description of its component interface is obtained, and from that information a component specification is written. In order to improve code productivity, GCS has adopted code generation to transform this component specification into the skeleton of component classes based on a software framework called the Device Component Framework. Using the GCS development tools, based on javadoc and gcc, in only one step the component is generated, compiled and deployed to be tested for the first time through our GUI inspector. The main advantages of this approach are the following: it reduces the learning curve of new developers and the development error rate, allows a systematic use of design patterns in the development and software reuse, speeds up delivery of the software product while markedly improving the timescale, design consistency and design quality, and eliminates future refactoring of the code.
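
    The essence of that generation step is mechanical: a declarative component specification is expanded into class skeletons that developers then fill with behaviour. A toy sketch of the idea (ours, in Python; the Device Component Framework itself is not shown and the component name below is invented):

```python
# Toy code generator: expand a component specification into a class skeleton.
SKELETON = '''class {name}(DeviceComponent):       # base framework class assumed
    """Auto-generated skeleton for the {name} component."""
{methods}'''

METHOD = '''    def {name}(self{args}):
        """{doc}"""
        raise NotImplementedError                  # to be filled in by the developer
'''

def generate_component(spec):
    methods = "".join(
        METHOD.format(name=m["name"],
                      args="".join(", " + a for a in m.get("args", [])),
                      doc=m.get("doc", "TODO"))
        for m in spec["interface"])
    return SKELETON.format(name=spec["component"], methods=methods)

spec = {"component": "M2Positioner",               # hypothetical subsystem component
        "interface": [{"name": "init", "doc": "Bring the device to a known state."},
                      {"name": "move", "args": ["position"],
                       "doc": "Move to the requested position."}]}
print(generate_component(spec))
```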

  5. Subunit mass analysis for monitoring antibody oxidation.

    PubMed

    Sokolowska, Izabela; Mo, Jingjie; Dong, Jia; Lewis, Michael J; Hu, Ping

    2017-04-01

    Methionine oxidation is a common posttranslational modification (PTM) of monoclonal antibodies (mAbs). Oxidation can reduce the in-vivo half-life, efficacy and stability of the product. Peptide mapping is commonly used to monitor the levels of oxidation, but this is a relatively time-consuming method. A high-throughput, automated subunit mass analysis method was developed to monitor antibody methionine oxidation. In this method, samples were treated with IdeS, EndoS and dithiothreitol to generate three individual IgG subunits (light chain, Fd' and single chain Fc). These subunits were analyzed by reversed phase-ultra performance liquid chromatography coupled with an online quadrupole time-of-flight mass spectrometer and the levels of oxidation on each subunit were quantitated based on the deconvoluted mass spectra using the UNIFI software. The oxidation results obtained by subunit mass analysis correlated well with the results obtained by peptide mapping. Method qualification demonstrated that this subunit method had excellent repeatability and intermediate precision. In addition, UNIFI software used in this application allows automated data acquisition and processing, which makes this method suitable for high-throughput process monitoring and product characterization. Finally, subunit mass analysis revealed the different patterns of Fc methionine oxidation induced by chemical and photo stress, which makes it attractive for investigating the root cause of oxidation.

  6. An Integrated Strategy for Global Qualitative and Quantitative Profiling of Traditional Chinese Medicine Formulas: Baoyuan Decoction as a Case

    NASA Astrophysics Data System (ADS)

    Ma, Xiaoli; Guo, Xiaoyu; Song, Yuelin; Qiao, Lirui; Wang, Wenguang; Zhao, Mingbo; Tu, Pengfei; Jiang, Yong

    2016-12-01

    Clarification of the chemical composition of traditional Chinese medicine formulas (TCMFs) is a challenge due to the variety of structures and the complexity of plant matrices. Herein, an integrated strategy was developed by hyphenating ultra-performance liquid chromatography (UPLC), quadrupole time-of-flight (Q-TOF), hybrid triple quadrupole-linear ion trap mass spectrometry (Qtrap-MS), and the novel post-acquisition data processing software UNIFI to achieve automatic, rapid, accurate, and comprehensive qualitative and quantitative analysis of the chemical components in TCMFs. As a proof-of-concept, chemical profiling was performed on Baoyuan decoction (BYD), an ancient TCMF that is clinically used for the treatment of coronary heart disease and consists of Ginseng Radix et Rhizoma, Astragali Radix, Glycyrrhizae Radix et Rhizoma Praeparata Cum Melle, and Cinnamomi Cortex. As many as 236 compounds were plausibly or unambiguously identified, and 175 compounds were quantified or relatively quantified by the scheduled multiple reaction monitoring (sMRM) method. The findings demonstrate that the strategy integrating the rapidity of UNIFI software, the efficiency of UPLC, the accuracy of Q-TOF-MS, and the sensitivity and quantitation ability of Qtrap-MS provides a method for the efficient and comprehensive chemome characterization and quality control of complex TCMFs.

  7. The virtual digital nuclear power plant: A modern tool for supporting the lifecycle of VVER-based nuclear power units

    NASA Astrophysics Data System (ADS)

    Arkadov, G. V.; Zhukavin, A. P.; Kroshilin, A. E.; Parshikov, I. A.; Solov'ev, S. L.; Shishov, A. V.

    2014-10-01

    The article describes the "Virtual Digital VVER-Based Nuclear Power Plant" computerized system, comprising a body of verified initial data (sets of input data for models describing the behavior of nuclear power plant (NPP) systems in design and emergency operating modes) and a unified system of new-generation computation codes intended for carrying out coordinated computation of the variety of physical processes in the reactor core and NPP equipment. Experiments with the demonstration version of the "Virtual Digital VVER-Based NPP" computerized system have shown that it is in principle possible to set up a unified system of computation codes in a common software environment for carrying out interconnected calculations of various physical phenomena at NPPs constructed according to the standard AES-2006 project. With the full-scale version of the "Virtual Digital VVER-Based NPP" computerized system put in operation, the engineering, design, construction, and operating organizations concerned will have access to all necessary information relating to the NPP power unit project throughout its entire lifecycle. The domestically developed commercial-grade software product, operating as an independent application alongside the project, will bring about additional competitive advantages in the modern market of nuclear power technologies.

  8. A Comparison and Evaluation of Real-Time Software Systems Modeling Languages

    NASA Technical Reports Server (NTRS)

    Evensen, Kenneth D.; Weiss, Kathryn Anne

    2010-01-01

    A model-driven approach to real-time software systems development enables the conceptualization of software, fostering a more thorough understanding of its often complex architecture and behavior while promoting the documentation and analysis of concerns common to real-time embedded systems such as scheduling, resource allocation, and performance. Several modeling languages have been developed to assist in the model-driven software engineering effort for real-time systems, and these languages are beginning to gain traction with practitioners throughout the aerospace industry. This paper presents a survey of several real-time software system modeling languages, namely the Architectural Analysis and Design Language (AADL), the Unified Modeling Language (UML), Systems Modeling Language (SysML), the Modeling and Analysis of Real-Time Embedded Systems (MARTE) UML profile, and the AADL for UML profile. Each language has its advantages and disadvantages, and in order to adequately describe a real-time software system's architecture, a complementary use of multiple languages is almost certainly necessary. This paper aims to explore these languages in the context of understanding the value each brings to the model-driven software engineering effort and to determine if it is feasible and practical to combine aspects of the various modeling languages to achieve more complete coverage in architectural descriptions. To this end, each language is evaluated with respect to a set of criteria such as scope, formalisms, and architectural coverage. An example is used to help illustrate the capabilities of the various languages.

  9. Wearable and low-stress ambulatory blood pressure monitoring technology for hypertension diagnosis.

    PubMed

    Altintas, Ersin; Takoh, Kimiyasu; Ohno, Yuji; Abe, Katsumi; Akagawa, Takeshi; Ariyama, Tetsuri; Kubo, Masahiro; Tsuda, Kenichiro; Tochikubo, Osamu

    2015-01-01

    We propose a highly wearable, upper-arm, oscillometric blood pressure monitoring technology with low stress. The low stress is realized by new developments in the hardware and software design. In the hardware design, the conventional armband (cuff) is almost halved in volume thanks to a flexible plastic core and a liquid bag, which enhance the fit and the pressure uniformity over the arm. The reduced air-bag volume enables a smaller motor pump and battery, leading to a thinner, more compact and more wearable unified device. In the software design, a new prediction algorithm makes it possible to apply less stress (and less pain) to the patient's arm. Proof-of-concept experiments on volunteers show high accuracy for both technologies. This paper mainly introduces the hardware developments. The system is promising for less painful and less stressful 24-hour blood pressure monitoring in hypertension management and related healthcare solutions.

  10. A Tale of Two Observing Systems: Interoperability in the World of Microsoft Windows

    NASA Astrophysics Data System (ADS)

    Babin, B. L.; Hu, L.

    2008-12-01

    The Louisiana Universities Marine Consortium's (LUMCON) and Dauphin Island Sea Lab's (DISL) environmental monitoring systems provide a unified coastal ocean observing system. These two systems are mirrored to maintain autonomy while offering an integrated data-sharing environment. Both systems collect data via Campbell Scientific data loggers, store the data in Microsoft SQL servers, and disseminate the data in real time on the World Wide Web via Microsoft Internet Information Servers and Active Server Pages (ASP). The utilization of Microsoft Windows technologies has presented many challenges to these observing systems as open source tools for interoperability grow, because the current open source tools often require the installation of additional software. In order to make data available in common standards-based formats, "home grown" software has been developed; one example is software that generates XML files for transmission to the National Data Buoy Center (NDBC). OOSTethys partners develop, test and implement easy-to-use, open-source, OGC-compliant software, and have created a working prototype of networked, semantically interoperable, real-time data systems. Partnering with OOSTethys, we are developing a cookbook for implementing OGC web services. The implementation will be written in ASP, will run in a Microsoft operating system environment, and will serve data via Sensor Observation Services (SOS). This cookbook will give observing systems running Microsoft Windows the tools to easily participate in the Open Geospatial Consortium (OGC) Oceans Interoperability Experiment (OCEANS IE).
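
    As a flavour of what such "home grown" serialization amounts to, the sketch below builds a small observation document with nothing but the Python standard library. The element names and station identifier are illustrative only; they are not NDBC's or SOS's actual schema.

```python
# Minimal sketch: serialize data-logger readings to XML for transmission.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def observation_xml(station_id, readings):
    root = ET.Element("message")
    ET.SubElement(root, "station").text = station_id
    ET.SubElement(root, "time").text = datetime.now(timezone.utc).isoformat()
    obs = ET.SubElement(root, "observations")
    for name, (value, units) in readings.items():
        ET.SubElement(obs, "observation", name=name, units=units).text = str(value)
    return ET.tostring(root, encoding="unicode")

print(observation_xml("LUMCON-01",                       # hypothetical station id
                      {"water_temp": (28.4, "degC"), "salinity": (17.2, "psu")}))
```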

  11. IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.

    PubMed

    Huang, Lihan

    2017-12-04

    The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for determination of kinetic parameters in predictive microbiology. The algorithm is incorporated into user-friendly graphical user interfaces (GUIs) to form a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide users through the data analysis process and to help them properly select the initial parameters for different combinations of mathematical models. The software performs one-step kinetic analysis, directly constructing tertiary models by minimizing the global error between the experimental observations and the mathematical models. The current version of the software is specifically designed for constructing tertiary models with time and temperature as the independent model parameters. The software was tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results of data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve the experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.
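
    The principle of one-step global fitting is that the primary (growth) and secondary (temperature) models are fitted simultaneously by minimizing a single residual over every isothermal curve. The sketch below illustrates that principle only; it is not the IPMP-Global Fit code, and the two models chosen (a two-phase linear primary model and a Ratkowsky square-root secondary model) are our own illustrative assumptions, as are the toy data.

```python
# One-step global (tertiary) fitting: all isothermal curves share one parameter set.
import numpy as np
from scipy.optimize import least_squares

def log_count(t, T, y0, ymax, b, T0):
    mu = (b * (T - T0)) ** 2                  # Ratkowsky: sqrt(mu) = b * (T - T0)
    return np.minimum(y0 + mu * t, ymax)      # two-phase linear growth, no lag

def global_residuals(params, datasets):
    y0, ymax, b, T0 = params
    return np.concatenate([y - log_count(t, T, y0, ymax, b, T0)
                           for T, t, y in datasets])

# each dataset: (temperature_C, times_h, log10 counts) from one isothermal experiment
datasets = [(10.0, np.array([0., 24., 48., 72.]), np.array([3.0, 3.4, 3.9, 4.5])),
            (25.0, np.array([0., 6., 12., 24.]),  np.array([3.0, 4.6, 6.1, 8.0]))]

fit = least_squares(global_residuals, x0=[3.0, 9.0, 0.05, 2.0], args=(datasets,))
print(dict(zip(["y0", "ymax", "b", "T0"], fit.x)))
```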

  12. An integrated radar model solution for mission level performance and cost trades

    NASA Astrophysics Data System (ADS)

    Hodge, John; Duncan, Kerron; Zimmerman, Madeline; Drupp, Rob; Manno, Mike; Barrett, Donald; Smith, Amelia

    2017-05-01

    A fully integrated Mission-Level Radar model is in development as part of a multi-year effort under the Northrop Grumman Mission Systems (NGMS) sector's Model Based Engineering (MBE) initiative to digitally interconnect and unify previously separate performance and cost models. In 2016, an NGMS internal research and development (IR&D) funded multidisciplinary team integrated radio frequency (RF), power, control, size, weight, thermal, and cost models using the commercial off-the-shelf software ModelCenter for an Active Electronically Scanned Array (AESA) radar system. Each represented model was digitally connected with standard interfaces and unified to allow end-to-end mission system optimization and trade studies. The radar model was then linked to the Air Force's own mission modeling framework (AFSIM). The team first had to identify the necessary models and, with the aid of subject matter experts (SMEs), understand and document the inputs, outputs, and behaviors of the component models. This agile development process and collaboration enabled rapid integration of disparate models and the validation of their combined system performance. This MBE framework will allow NGMS to design systems more efficiently and affordably, optimize architectures, and provide increased value to the customer. The model integrates detailed component models that validate cost and performance at the physics level with high-level models that provide visualization of a platform mission. This connectivity of component to mission models allows hardware and software design solutions to be better optimized to meet mission needs, creating cost-optimal solutions for the customer, while reducing design cycle time through risk mitigation and early validation of design decisions.

  13. Embracing Open Software Development in Solar Physics

    NASA Astrophysics Data System (ADS)

    Hughitt, V. K.; Ireland, J.; Christe, S.; Mueller, D.

    2012-12-01

    We discuss two ongoing software projects in solar physics that have adopted best practices of the open source software community. The first, the Helioviewer Project, is a powerful data visualization tool which includes online and Java interfaces inspired by Google Maps (tm). This effort allows users to find solar features and events of interest, and download the corresponding data. Having found data of interest, the user now has to analyze it. The dominant solar data analysis platform is an open-source library called SolarSoft (SSW). Although SSW itself is open-source, the programming language used is IDL, a proprietary language with licensing costs that are prohibitive for many institutions and individuals. SSW is composed of a collection of related scripts written by missions and individuals for solar data processing and analysis, without any consistent data structures or common interfaces. Further, at the time when SSW was initially developed, many of the best software development processes of today (mirrored and distributed version control, unit testing, continuous integration, etc.) were not standard, and have not since been adopted. The challenges inherent in developing SolarSoft led to a second software project known as SunPy. SunPy is an open-source Python-based library which seeks to create a unified solar data analysis environment including a number of core datatypes such as Maps, Lightcurves, and Spectra which have consistent interfaces and behaviors. By taking advantage of the large and sophisticated body of scientific software already available in Python (e.g. SciPy, NumPy, Matplotlib), and by adopting many of the best practices refined in open-source software development, SunPy has been able to develop at a very rapid pace while still ensuring a high level of reliability. The Helioviewer Project and SunPy represent two pioneering technologies in solar physics - simple yet flexible data visualization and a powerful, new data analysis environment. We discuss the development of both these efforts and how they are beginning to influence the solar physics community.

  14. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    PubMed Central

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data. PMID:22163811

  15. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    PubMed

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data.
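
    The instrumentation idea behind the RTM is easy to picture: application calls are wrapped so the monitor can accumulate run-time behaviour and hand a summary to whatever optimizes the computing configuration. The sketch below is ours, not the authors' system, and it stands in for both the library instrumentation and the hardware performance counters mentioned above.

```python
# Toy run-time monitor: collect per-function call counts and wall time.
import time
from collections import defaultdict

class RunTimeMonitor:
    def __init__(self):
        self.stats = defaultdict(lambda: {"calls": 0, "seconds": 0.0})

    def instrument(self, fn):                          # library instrumentation point
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                rec = self.stats[fn.__name__]
                rec["calls"] += 1
                rec["seconds"] += time.perf_counter() - start
        return wrapper

    def analyze(self):
        # A real monitor would also read hardware counters and drive a scheduler;
        # here we simply report the most expensive functions first.
        return sorted(self.stats.items(), key=lambda kv: -kv[1]["seconds"])

rtm = RunTimeMonitor()

@rtm.instrument
def workload(n):
    return sum(i * i for i in range(n))

workload(100_000)
print(rtm.analyze())
```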

  16. An object-oriented class library for medical software development.

    PubMed

    O'Kane, K C; McColligan, E E

    1996-12-01

    The objective of this research is the development of a Medical Object Library (MOL) consisting of reusable, inheritable, portable, extendable C++ classes that facilitate rapid development of medical software at reduced cost and increased functionality. The result of this research is a library of class objects that range in function from string and hierarchical file handling entities to high level, procedural agents that perform increasingly complex, integrated tasks. A system built upon these classes is compatible with any other system similarly constructed with respect to data definitions, semantics, data organization and storage. As new objects are built, they can be added to the class library for subsequent use. The MOL is a toolkit of software objects intended to support a common file access methodology, a unified medical record structure, consistent message processing, standard graphical display facilities and uniform data collection procedures. This work emphasizes the relationship that potentially exists between the structure of a hierarchical medical record and procedural language components by means of a hierarchical class library and tree structured file access facility. In doing so, it attempts to establish interest in and demonstrate the practicality of the hierarchical medical record model in the modern context of object oriented programming.
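
    To make the hierarchical-record idea concrete, the sketch below (in Python rather than the library's C++, with invented class and path names) shows the kind of uniform, path-based access to a tree-structured record that the paragraph describes.

```python
# Hierarchical medical record: every element is a node reached by a path.
class RecordNode:
    def __init__(self, name, value=None):
        self.name, self.value, self.children = name, value, {}

    def add(self, path, value=None):
        """Create or descend to the node at the given slash-separated path."""
        node = self
        for part in path.split("/"):
            node = node.children.setdefault(part, RecordNode(part))
        node.value = value
        return node

    def get(self, path):
        node = self
        for part in path.split("/"):
            node = node.children[part]
        return node.value

record = RecordNode("patient")
record.add("demographics/name", "DOE, JANE")
record.add("visits/2024-01-10/vitals/bp", "118/76")
print(record.get("visits/2024-01-10/vitals/bp"))
```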

  17. A mass spectrometry proteomics data management platform.

    PubMed

    Sharma, Vagisha; Eng, Jimmy K; Maccoss, Michael J; Riffle, Michael

    2012-09-01

    Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are "organically" distributed across laboratory file systems in an ad hoc manner, (3) files formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/.

  18. Study on Network Error Analysis and Locating based on Integrated Information Decision System

    NASA Astrophysics Data System (ADS)

    Yang, F.; Dong, Z. H.

    2017-10-01

    The Integrated Information Decision System (IIDS) integrates multiple sub-systems developed by many facilities, comprising almost a hundred kinds of software, and provides various services such as email, short messages, drawing, and sharing. Because the underlying protocols differ and user standards are not unified, many errors occur during setup, configuration, and operation, which seriously affects usage. Because these errors are varied and may occur in different operation phases and stages, in different TCP/IP protocol layers, and in different sub-system software, it is necessary to design a network error analysis and locating tool for IIDS to solve the above problems. This paper studies network error analysis and locating based on IIDS, providing strong theoretical and technological support for the running and communication of IIDS.

  19. Towards a general object-oriented software development methodology

    NASA Technical Reports Server (NTRS)

    Seidewitz, ED; Stark, Mike

    1986-01-01

    Object diagrams were used to design a 5000-statement team training exercise and to design the entire dynamics simulator. The object diagrams are also being used to design another 50,000-statement Ada system and a personal computer based system that will be written in Modula-2. The design methodology evolves out of these experiences as well as the limitations of other methods that were studied. Object diagrams, abstraction analysis, and associated principles provide a unified framework which encompasses concepts from Yourdon, Booch, and Cherry. This general object-oriented approach handles high level system design, possibly with concurrency, through object-oriented decomposition down to a completely functional level. How object-oriented concepts can be used in other phases of the software life-cycle, such as specification and testing, is being studied concurrently.

  20. Development of an automated large-scale protein-crystallization and monitoring system for high-throughput protein-structure analyses.

    PubMed

    Hiraki, Masahiko; Kato, Ryuichi; Nagai, Minoru; Satoh, Tadashi; Hirano, Satoshi; Ihara, Kentaro; Kudo, Norio; Nagae, Masamichi; Kobayashi, Masanori; Inoue, Michio; Uejima, Tamami; Oda, Shunichiro; Chavas, Leonard M G; Akutsu, Masato; Yamada, Yusuke; Kawasaki, Masato; Matsugaki, Naohiro; Igarashi, Noriyuki; Suzuki, Mamoru; Wakatsuki, Soichi

    2006-09-01

    Protein crystallization remains one of the bottlenecks in crystallographic analysis of macromolecules. An automated large-scale protein-crystallization system named PXS has been developed consisting of the following subsystems, which proceed in parallel under unified control software: dispensing precipitants and protein solutions, sealing crystallization plates, carrying robot, incubators, observation system and image-storage server. A sitting-drop crystallization plate specialized for PXS has also been designed and developed. PXS can set up 7680 drops for vapour diffusion per hour, which includes time for replenishing supplies such as disposable tips and crystallization plates. Images of the crystallization drops are automatically recorded according to a preprogrammed schedule and can be viewed by users remotely using web-based browser software. A number of protein crystals were successfully produced and several protein structures could be determined directly from crystals grown by PXS. In other cases, X-ray quality crystals were obtained by further optimization by manual screening based on the conditions found by PXS.

  1. Design and Development of a Flight Route Modification, Logging, and Communication Network

    NASA Technical Reports Server (NTRS)

    Merlino, Daniel K.; Wilson, C. Logan; Carboneau, Lindsey M.; Wilder, Andrew J.; Underwood, Matthew C.

    2016-01-01

    There is an overwhelming desire to create and enhance communication mechanisms between entities that operate within the National Airspace System. Furthermore, airlines are always extremely interested in increasing the efficiency of their flights. An innovative system prototype was developed and tested that improves collaborative decision making without modifying existing infrastructure or operational procedures within the current Air Traffic Management System. This system enables collaboration between flight crew and airline dispatchers to share and assess optimized flight routes through an Internet connection. Using a sophisticated medium-fidelity flight simulation environment, a rapid-prototyping development, and a unified modeling language, the software was designed to ensure reliability and scalability for future growth and applications. Ensuring safety and security were primary design goals, therefore the software does not interact or interfere with major flight control or safety systems. The system prototype demonstrated an unprecedented use of in-flight Internet to facilitate effective communication with Airline Operations Centers, which may contribute to increased flight efficiency for airlines.

  2. A Vision on the Status and Evolution of HEP Physics Software Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canal, P.; Elvira, D.; Hatcher, R.

    2013-07-28

    This paper represents the vision of the members of the Fermilab Scientific Computing Division's Computational Physics Department (SCD-CPD) on the status and the evolution of various HEP software tools such as the Geant4 detector simulation toolkit, the Pythia and GENIE physics generators, and the ROOT data analysis framework. The goal of this paper is to contribute ideas to the Snowmass 2013 process toward the composition of a unified document on the current status and potential evolution of the physics software tools which are essential to HEP.

  3. Research on Spoken Dialogue Systems

    NASA Technical Reports Server (NTRS)

    Aist, Gregory; Hieronymus, James; Dowding, John; Hockey, Beth Ann; Rayner, Manny; Chatzichrisafis, Nikos; Farrell, Kim; Renders, Jean-Michel

    2010-01-01

    Research in the field of spoken dialogue systems has been performed with the goal of making such systems more robust and easier to use in demanding situations. The term "spoken dialogue systems" signifies unified software systems containing speech-recognition, speech-synthesis, dialogue management, and ancillary components that enable human users to communicate, using natural spoken language or nearly natural prescribed spoken language, with other software systems that provide information and/or services.

  4. Do You Need ERP? In the Business World, Enterprise Resource Planning Software Keeps Costs down and Productivity up. Should Districts Follow Suit?

    ERIC Educational Resources Information Center

    Careless, James

    2007-01-01

    Enterprise resource planning (ERP) software does what school leaders have always wanted their computer systems to do: It sees all. By integrating every IT application an organization has--from purchasing and inventory control to payroll--ERPs create a single unified system. Not only does this give IT managers a holistic view to what is happening…

  5. Do You Need ERP? In the Business World, Enterprise Resource Planning Software Keeps Costs down and Productivity up. Should Districts Follow Suit?

    ERIC Educational Resources Information Center

    Careless, James

    2007-01-01

    Enterprise resource planning software does what school leaders have always wanted their computer systems to do: It sees all. By integrating every IT application an organization has--from purchasing and inventory control to payroll--ERPs create a single unified system. Not only does this give IT managers a holistic view to what is happening in the…

  6. A unified structural/terminological interoperability framework based on LexEVS: application to TRANSFoRm.

    PubMed

    Ethier, Jean-François; Dameron, Olivier; Curcin, Vasa; McGilchrist, Mark M; Verheij, Robert A; Arvanitis, Theodoros N; Taweel, Adel; Delaney, Brendan C; Burgun, Anita

    2013-01-01

    Biomedical research increasingly relies on the integration of information from multiple heterogeneous data sources. Despite the fact that structural and terminological aspects of interoperability are interdependent and rely on a common set of requirements, current efforts typically address them in isolation. We propose a unified ontology-based knowledge framework to facilitate interoperability between heterogeneous sources, and investigate if using the LexEVS terminology server is a viable implementation method. We developed a framework based on an ontology, the general information model (GIM), to unify structural models and terminologies, together with relevant mapping sets. This allowed a uniform access to these resources within LexEVS to facilitate interoperability by various components and data sources from implementing architectures. Our unified framework has been tested in the context of the EU Framework Program 7 TRANSFoRm project, where it was used to achieve data integration in a retrospective diabetes cohort study. The GIM was successfully instantiated in TRANSFoRm as the clinical data integration model, and necessary mappings were created to support effective information retrieval for software tools in the project. We present a novel, unifying approach to address interoperability challenges in heterogeneous data sources, by representing structural and semantic models in one framework. Systems using this architecture can rely solely on the GIM that abstracts over both the structure and coding. Information models, terminologies and mappings are all stored in LexEVS and can be accessed in a uniform manner (implementing the HL7 CTS2 service functional model). The system is flexible and should reduce the effort needed from data sources personnel for implementing and managing the integration.

  7. A unified structural/terminological interoperability framework based on LexEVS: application to TRANSFoRm

    PubMed Central

    Ethier, Jean-François; Dameron, Olivier; Curcin, Vasa; McGilchrist, Mark M; Verheij, Robert A; Arvanitis, Theodoros N; Taweel, Adel; Delaney, Brendan C; Burgun, Anita

    2013-01-01

    Objective: Biomedical research increasingly relies on the integration of information from multiple heterogeneous data sources. Despite the fact that structural and terminological aspects of interoperability are interdependent and rely on a common set of requirements, current efforts typically address them in isolation. We propose a unified ontology-based knowledge framework to facilitate interoperability between heterogeneous sources, and investigate if using the LexEVS terminology server is a viable implementation method. Materials and methods: We developed a framework based on an ontology, the general information model (GIM), to unify structural models and terminologies, together with relevant mapping sets. This allowed a uniform access to these resources within LexEVS to facilitate interoperability by various components and data sources from implementing architectures. Results: Our unified framework has been tested in the context of the EU Framework Program 7 TRANSFoRm project, where it was used to achieve data integration in a retrospective diabetes cohort study. The GIM was successfully instantiated in TRANSFoRm as the clinical data integration model, and necessary mappings were created to support effective information retrieval for software tools in the project. Conclusions: We present a novel, unifying approach to address interoperability challenges in heterogeneous data sources, by representing structural and semantic models in one framework. Systems using this architecture can rely solely on the GIM that abstracts over both the structure and coding. Information models, terminologies and mappings are all stored in LexEVS and can be accessed in a uniform manner (implementing the HL7 CTS2 service functional model). The system is flexible and should reduce the effort needed from data sources personnel for implementing and managing the integration. PMID:23571850

  8. Benchmarking and Evaluating Unified Memory for OpenMP GPU Offloading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Alok; Li, Lingda; Kong, Martin

    Here, the latest OpenMP standard offers automatic device offloading capabilities which facilitate GPU programming. Despite this, there remain many challenges. One of these is the unified memory feature introduced in recent GPUs. GPUs in current and future HPC systems have enhanced support for unified memory space. In such systems, CPU and GPU can access each other's memory transparently, that is, the data movement is managed automatically by the underlying system software and hardware. Memory oversubscription is also possible in these systems. However, there is a significant lack of knowledge about how this mechanism will perform, and how programmers should use it. We have modified several benchmark codes in the Rodinia benchmark suite to study the behavior of OpenMP accelerator extensions and have used them to explore the impact of unified memory in an OpenMP context. We moreover modified the open source LLVM compiler to allow OpenMP programs to exploit unified memory. The results of our evaluation reveal that, while the performance of unified memory is comparable with that of normal GPU offloading for benchmarks with little data reuse, it suffers from significant overhead when GPU memory is oversubscribed for benchmarks with a large amount of data reuse. Based on these results, we provide several guidelines for programmers to achieve better performance with unified memory.

  9. A networked modular hardware and software system for MRI-guided robotic prostate interventions

    NASA Astrophysics Data System (ADS)

    Su, Hao; Shang, Weijian; Harrington, Kevin; Camilo, Alex; Cole, Gregory; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare; Fischer, Gregory S.

    2012-02-01

    Magnetic resonance imaging (MRI) provides high-resolution multi-parametric imaging, high soft-tissue contrast, and interactive image updates, making it an ideal modality for diagnosing prostate cancer and guiding surgical tools. Although a substantial armamentarium of apparatuses and systems has been developed to assist surgical diagnosis and therapy in MRI-guided procedures over the last decade, a unified method for developing high-fidelity robotic systems, in terms of accuracy, dynamic performance, size, robustness and modularity, that work inside a closed-bore MRI scanner still remains a challenge. In this work, we develop and evaluate an integrated modular hardware and software system to support the surgical workflow of intra-operative MRI, with percutaneous prostate intervention as an illustrative case. Specifically, the distinct apparatuses and methods include: 1) a robot controller system for precision closed-loop control of piezoelectric motors, 2) robot control interface software that connects the 3D Slicer navigation software and the robot controller to exchange robot commands and coordinates using the OpenIGTLink open network communication protocol, and 3) MRI scan-plane alignment to the planned path and imaging of the needle as it is inserted into the target location. A preliminary experiment with an ex-vivo phantom validates the system workflow and MRI-compatibility, and shows that the robotic system has better than 0.01 mm positioning accuracy.

  10. Fusing Symbolic and Numerical Diagnostic Computations

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    X-2000 Anomaly Detection Language denotes a developmental computing language, and the software that establishes and utilizes the language, for fusing two diagnostic computer programs, one implementing a numerical analysis method and the other implementing a symbolic analysis method, into a unified event-based decision analysis software system for real-time detection of events (e.g., failures) in a spacecraft, aircraft, or other complex engineering system. The numerical analysis method is performed by beacon-based exception analysis for multi-missions (BEAM), which has been discussed in several previous NASA Tech Briefs articles. The symbolic analysis method is, more specifically, an artificial-intelligence method of the knowledge-based, inference engine type, and its implementation is exemplified by the Spacecraft Health Inference Engine (SHINE) software. The goal in developing the capability to fuse numerical and symbolic diagnostic components is to increase the depth of analysis beyond that previously attainable, thereby increasing the degree of confidence in the computed results. In practical terms, the sought improvement is to enable detection of all or most events, with no or few false alarms.

  11. Cytoscape: a software environment for integrated models of biomolecular interaction networks.

    PubMed

    Shannon, Paul; Markiel, Andrew; Ozier, Owen; Baliga, Nitin S; Wang, Jonathan T; Ramage, Daniel; Amin, Nada; Schwikowski, Benno; Ideker, Trey

    2003-11-01

    Cytoscape is an open source software project for integrating biomolecular interaction networks with high-throughput expression data and other molecular states into a unified conceptual framework. Although applicable to any system of molecular components and interactions, Cytoscape is most powerful when used in conjunction with large databases of protein-protein, protein-DNA, and genetic interactions that are increasingly available for humans and model organisms. Cytoscape's software Core provides basic functionality to layout and query the network; to visually integrate the network with expression profiles, phenotypes, and other molecular states; and to link the network to databases of functional annotations. The Core is extensible through a straightforward plug-in architecture, allowing rapid development of additional computational analyses and features. Several case studies of Cytoscape plug-ins are surveyed, including a search for interaction pathways correlating with changes in gene expression, a study of protein complexes involved in cellular recovery to DNA damage, inference of a combined physical/functional interaction network for Halobacterium, and an interface to detailed stochastic/kinetic gene regulatory models.

  12. Considerations for Using Agile in DoD Acquisition

    DTIC Science & Technology

    2010-04-01

    successfully used in manufacturing throughout the world for decades, such as "just-in-time," Lean, Kanban, and work-flow-based planning. Another new... of this analysis is provided in Table 2. Kanban/lean style of Agile might be the most relevant for this phase. ...family of approaches, including Kanban [14], Rational Unified Process (RUP), Personal Software Process (PSP), Team Software Process (TSP), and Cleanroom

  13. The Care management Information system for the home Care Network (SI GESCAD): support for care coordination and continuity of care in the Brazilian Unified health system (SUS).

    PubMed

    Pires, Maria Raquel Gomes Maia; Gottems, Leila Bernarda Donato; Vasconcelos Filho, José Eurico; Silva, Kênia Lara; Gamarski, Ricardo

    2015-06-01

    The present article describes the development of the initial version of the Brazilian Care Management Information System for the Home Care Network (SI GESCAD). This system was created to enhance comprehensive care, care coordination and the continuity of care provided to the patients, family and caretakers of the Home Care (HC) program. We also present a reflection on the contributions, limitations and possibilities of the SI GESCAD within the scope of the Home Care Network of the Brazilian Unified Health System (RAS-AD). This was a study on technology production based on a multi-method protocol. It discussed software engineering and human-computer interaction (HCI) based on user-centered design, as well as evolutionary and iterative software processes (prototyping and spiral). A functional prototype of the GESCAD was finalized, which allowed for the management of HC to take into consideration the patient's social context, family and caretakers. The system also proved to help in the management of activities of daily living (ADLs), clinical care and the monitoring of variables associated with type 2 HC. The SI GESCAD allowed for a more horizontal work process for HC teams at the RAS-AD/SUS level of care, with positive repercussions on care coordination and continuity of care.

  14. MLM Builder: An Integrated Suite for Development and Maintenance of Arden Syntax Medical Logic Modules

    PubMed Central

    Sailors, R. Matthew

    1997-01-01

    The Arden Syntax specification for sharable computerized medical knowledge bases has not been widely utilized in the medical informatics community because of a lack of tools for developing Arden Syntax knowledge bases (Medical Logic Modules). The MLM Builder is a Microsoft Windows-hosted CASE (Computer Aided Software Engineering) tool designed to aid in the development and maintenance of Arden Syntax Medical Logic Modules (MLMs). The MLM Builder consists of the MLM Writer (an MLM generation tool), OSCAR (an anagram of Object-oriented ARden Syntax Compiler), a test database, and the MLManager (an MLM management information system). Working together, these components form a self-contained, unified development environment for the creation, testing, and maintenance of Arden Syntax Medical Logic Modules.

  15. Improving Earth Science Metadata: Modernizing ncISO

    NASA Astrophysics Data System (ADS)

    O'Brien, K.; Schweitzer, R.; Neufeld, D.; Burger, E. F.; Signell, R. P.; Arms, S. C.; Wilcox, K.

    2016-12-01

    ncISO is a package of tools developed at NOAA's National Center for Environmental Information (NCEI) that facilitates the generation of ISO 19115-2 metadata from NetCDF data sources. The tool currently exists in two iterations: a command line utility and a web-accessible service within the THREDDS Data Server (TDS). Several projects, including NOAA's Unified Access Framework (UAF), depend upon ncISO to generate the ISO-compliant metadata from their data holdings and use the resulting information to populate discovery tools such as NCEI's ESRI Geoportal and NOAA's data.noaa.gov CKAN system. In addition to generating ISO 19115-2 metadata, the tool calculates a rubric score based on how well the dataset follows the Attribute Conventions for Dataset Discovery (ACDD). The result of this rubric calculation, along with information about what has been included and what is missing, is displayed in an HTML document generated by the ncISO software package. Recently ncISO has fallen behind in terms of supporting updates to conventions such as updates to the ACDD. With the blessing of the original programmer, NOAA's UAF has been working to modernize the ncISO software base. In addition to upgrading ncISO to utilize version 1.3 of the ACDD, we have been working with partners at Unidata and IOOS to unify the tool's code base. In essence, we are merging the command line capabilities into the same software that will now be used by the TDS service, allowing easier updates when conventions such as ACDD are updated in the future. In this presentation, we will discuss the work the UAF project has done to support updated conventions within ncISO, as well as describe how the updated tool is helping to improve metadata throughout the earth and ocean sciences.
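
    The rubric calculation described above boils down to checking a dataset's global attributes against the convention's recommended lists. The sketch below is a simplified stand-in, not ncISO's actual rubric or weighting, and the attribute lists shown are only a partial illustration of ACDD.

```python
# Simplified ACDD-style rubric: score present attributes, report missing ones.
ACDD_HIGHLY_RECOMMENDED = ["title", "summary", "keywords"]
ACDD_RECOMMENDED = ["id", "naming_authority", "license", "creator_name",
                    "time_coverage_start", "time_coverage_end",
                    "geospatial_lat_min", "geospatial_lat_max"]

def rubric(global_attrs):
    have = lambda names: [a for a in names if global_attrs.get(a)]
    lack = lambda names: [a for a in names if not global_attrs.get(a)]
    score = 3 * len(have(ACDD_HIGHLY_RECOMMENDED)) + len(have(ACDD_RECOMMENDED))
    out_of = 3 * len(ACDD_HIGHLY_RECOMMENDED) + len(ACDD_RECOMMENDED)
    return {"score": score, "out_of": out_of,
            "missing": lack(ACDD_HIGHLY_RECOMMENDED) + lack(ACDD_RECOMMENDED)}

attrs = {"title": "Gulf of Mexico SST", "summary": "Daily SST fields", "license": "CC-0"}
print(rubric(attrs))
```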

  16. A Mass Spectrometry Proteomics Data Management Platform*

    PubMed Central

    Sharma, Vagisha; Eng, Jimmy K.; MacCoss, Michael J.; Riffle, Michael

    2012-01-01

    Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are “organically” distributed across laboratory file systems in an ad hoc manner, (3) files formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/. PMID:22611296

  17. [Research on tumor information grid framework].

    PubMed

    Zhang, Haowei; Qin, Zhu; Liu, Ying; Tan, Jianghao; Cao, Haitao; Chen, Youping; Zhang, Ke; Ding, Yuqing

    2013-10-01

    In order to realize tumor disease information sharing and unified management, we utilized grid technology to effectively integrate the data and software resources distributed across various medical institutions, so that the heterogeneous resources become consistent and interoperable in both the semantic and the syntactic aspects. This article describes the tumor grid framework, with the service types described in the Web Service Description Language (WSDL) and XML Schema Definition (XSD); the client uses the serialized documents to operate on the distributed resources. Service objects can be built with the Unified Modeling Language (UML) as middleware to create application programming interfaces. All of the grid resources are registered in the index and released in the form of Web Services based on the Web Services Resource Framework (WSRF). Using the system, a multi-center, large-sample, networked tumor disease resource-sharing framework can be built to improve the level of development of medical research institutions and patients' quality of life.

  18. Feature-based component model for design of embedded systems

    NASA Astrophysics Data System (ADS)

    Zha, Xuan Fang; Sriram, Ram D.

    2004-11-01

    An embedded system is a hybrid of hardware and software, which combines software's flexibility and hardware's real-time performance. Embedded systems can be considered as assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and system-level design, simulation, and testing information. This paper proposes an approach to representing an embedded system feature-based model in OESM, i.e., the Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype through assembling virtual components. The framework not only provides a formal precise model of the embedded system prototype but also offers the possibility of designing variations of prototypes whose members are derived by changing certain virtual components with different features. A case study example is discussed to illustrate the embedded system model.

  19. Real Time Metrology Using Heterodyne Interferometry

    NASA Astrophysics Data System (ADS)

    Evans, Joseph T., Jr.

    1983-11-01

    The Air Force Weapons Laboratory (AFWL) located at Albuquerque, NM has developed a digital heterodyne interferometer capable of real-time, closed loop analysis and control of adaptive optics. The device uses independent phase modulation of two orthogonal polarizations of an argon ion laser to produce a temporally phase modulated interferogram of the test object in a Twyman-Green interferometer. Differential phase detection under the control of a Data General minicomputer helps reconstruct the phase front without noise effects from amplitude modulation in the optical train. The system consists of the interferometer optics, phase detection circuitry, and the minicomputer, allowing for complete software control of the process. The software has been unified into a powerful package that performs automatic data acquisition, OPD reconstruction, and Zernike analysis of the resulting wavefront. The minicomputer has the capability to control external devices so that closed loop analysis and control is possible. New software under development will provide a framework of data acquisition, display, and storage packages which can be integrated with analysis and control packages customized to the user's needs. Preliminary measurements with the system show that it is noise limited by laser beam phase quality and vibration of the optics. Active measures are necessary to reduce the impact of these noise sources.
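
    The reason amplitude modulation drops out of the measurement is that the phase is recovered by synchronous detection of the temporal beat signal at each pixel. The sketch below is our own illustration of that principle, not the AFWL software.

```python
# Synchronous (I/Q) detection of a heterodyne beat signal recovers the phase
# independently of the local fringe amplitude.
import numpy as np

def recover_phase(intensity, t, f_beat):
    """intensity: samples of A + B*cos(2*pi*f_beat*t + phi); returns phi (rad)."""
    ref_c = np.cos(2 * np.pi * f_beat * t)
    ref_s = np.sin(2 * np.pi * f_beat * t)
    in_phase = np.mean(intensity * ref_c)        # ~ (B/2) * cos(phi)
    quadrature = -np.mean(intensity * ref_s)     # ~ (B/2) * sin(phi)
    return np.arctan2(quadrature, in_phase)

t = np.linspace(0.0, 1.0, 1000, endpoint=False)  # integer number of beat periods
signal = 3.0 + 0.8 * np.cos(2 * np.pi * 25.0 * t + 1.2)
print(recover_phase(signal, t, 25.0))            # ~1.2 rad, regardless of A and B
```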

  20. VLTI auxiliary telescopes: a full object-oriented approach

    NASA Astrophysics Data System (ADS)

    Chiozzi, Gianluca; Duhoux, Philippe; Karban, Robert

    2000-06-01

    The Very Large Telescope (VLT) Telescope Control Software (TCS) is a portable system. It is now in use, or will be used, in a whole family of ESO telescopes (VLT Unit Telescopes, VLTI Auxiliary Telescopes, NTT, La Silla 3.6, VLT Survey Telescope and Astronomical Site Monitors in Paranal and La Silla). Although it has been developed making extensive use of Object Oriented (OO) methodologies, the overall development process chosen at the beginning of the project used traditional methods. In order to guarantee a longer lifetime for the system (improving documentation and maintainability) and to prepare for future projects, we have introduced a full OO process. We have taken as a basis the Unified Software Development Process with the Unified Modeling Language (UML) and have adapted the process to our specific needs. This paper describes how the process has been applied to the VLTI Auxiliary Telescopes Control Software (ATCS). The ATCS is based on the portable VLT TCS, but some subsystems are new or have specific characteristics. The complete process has been applied to the new subsystems, while reused code has been integrated into the UML models. We have used the ATCS on one side to tune the process and train the team members, and on the other side to provide UML- and WWW-based documentation for the portable VLT TCS.

  1. Leveraging e-Science infrastructure for electrochemical research.

    PubMed

    Peachey, Tom; Mashkina, Elena; Lee, Chong-Yong; Enticott, Colin; Abramson, David; Bond, Alan M; Elton, Darrell; Gavaghan, David J; Stevenson, Gareth P; Kennedy, Gareth F

    2011-08-28

    As in many scientific disciplines, modern chemistry involves a mix of experimentation and computer-supported theory. Historically, these skills have been provided by different groups, and range from traditional 'wet' laboratory science to advanced numerical simulation. Increasingly, progress is made by global collaborations, in which new theory may be developed in one part of the world and applied and tested in the laboratory elsewhere. e-Science, or cyber-infrastructure, underpins such collaborations by providing a unified platform for accessing scientific instruments, computers and data archives, and collaboration tools. In this paper we discuss the application of advanced e-Science software tools to electrochemistry research performed in three different laboratories--two at Monash University in Australia and one at the University of Oxford in the UK. We show that software tools that were originally developed for a range of application domains can be applied to electrochemical problems, in particular Fourier voltammetry. Moreover, we show that, by replacing ad-hoc manual processes with e-Science tools, we obtain more accurate solutions automatically.

  2. Exploiting current-generation graphics hardware for synthetic-scene generation

    NASA Astrophysics Data System (ADS)

    Tanner, Michael A.; Keen, Wayne A.

    2010-04-01

    Increasing seeker frame rate and pixel count, as well as the demand for higher levels of scene fidelity, have driven scene generation software for hardware-in-the-loop (HWIL) and software-in-the-loop (SWIL) testing to higher levels of parallelization. Because modern PC graphics cards provide multiple computational cores (e.g., 240 shader cores on current NVIDIA GeForce and Quadro cards), implementation of phenomenology codes on graphics processing units (GPUs) offers significant potential for simultaneous enhancement of simulation frame rate and fidelity. Taking advantage of this potential requires algorithm implementations that are structured to minimize data transfers between the central processing unit (CPU) and the GPU. In this paper, preliminary methodologies developed at the Kinetic Hardware In-The-Loop Simulator (KHILS) will be presented. The paper also covers language tradeoffs among conventional shader programming, the Compute Unified Device Architecture (CUDA), and the Open Computing Language (OpenCL), including performance trades and possible pathways for future tool development.
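
    The structural point in the record, minimizing CPU-to-GPU transfers, can be sketched with a generic GPU array library. The example below uses CuPy purely as a stand-in (the KHILS codes themselves use shader, CUDA, and OpenCL implementations that are not shown in the abstract): static inputs are uploaded once, per-frame work stays on the device, and only small reduced products are copied back.

        # Hedged sketch of the "minimize CPU<->GPU transfers" structure, using CuPy
        # as a stand-in GPU array library; not the KHILS scene-generation codes.
        import cupy as cp
        import numpy as np

        # Upload static phenomenology inputs once, before the frame loop.
        background = cp.asarray(np.random.rand(1024, 1024).astype(np.float32))
        gain_map   = cp.asarray(np.random.rand(1024, 1024).astype(np.float32))

        frame_metrics = []
        for k in range(100):
            # All per-frame work stays on the GPU; no round trips inside the loop.
            noise = 0.01 * cp.random.standard_normal((1024, 1024), dtype=cp.float32)
            frame = gain_map * background + noise
            frame_metrics.append(float(frame.mean()))   # copy back only a scalar

        print("mean of frame means:", np.mean(frame_metrics))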

  3. Designer's unified cost model

    NASA Technical Reports Server (NTRS)

    Freeman, William T.; Ilcewicz, L. B.; Swanson, G. D.; Gutowski, T.

    1992-01-01

    A conceptual and preliminary designers' cost prediction model has been initiated. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. The approach, goals, plans, and progress are presented for the development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).
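
    The record describes theoretical cost functions that relate geometric design features to material cost and labor content. The sketch below shows the general shape such a parametric cost function can take; the functional form and every coefficient are illustrative assumptions, not the COSTADE equations, which are not given in the abstract.

        # Hedged sketch of a first-order parametric cost function: material mass cost
        # plus labor content driven by ply count and layup area. Values are placeholders.
        def panel_cost(area_m2, thickness_mm, n_plies, material_rate_usd_per_kg,
                       density_kg_m3=1600.0, layup_rate_m2_per_hr=2.0,
                       labor_rate_usd_per_hr=80.0, setup_hr=1.5):
            mass_kg = area_m2 * (thickness_mm / 1000.0) * density_kg_m3
            material_cost = mass_kg * material_rate_usd_per_kg
            labor_hours = setup_hr + n_plies * area_m2 / layup_rate_m2_per_hr
            return material_cost + labor_hours * labor_rate_usd_per_hr

        # Trade two candidate panel designs on cost (weight would be a second objective).
        print(panel_cost(area_m2=2.0, thickness_mm=3.0, n_plies=24, material_rate_usd_per_kg=60.0))
        print(panel_cost(area_m2=2.0, thickness_mm=2.0, n_plies=16, material_rate_usd_per_kg=90.0))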

  4. The jABC Approach to Rigorous Collaborative Development of SCM Applications

    NASA Astrophysics Data System (ADS)

    Hörmann, Martina; Margaria, Tiziana; Mender, Thomas; Nagel, Ralf; Steffen, Bernhard; Trinh, Hong

    Our approach to the model-driven collaborative design of IKEA's P3 Delivery Management Process uses the jABC [9] for model-driven mediation and choreography to complement a RUP-based (Rational Unified Process) development process. jABC is a framework for service development based on Lightweight Process Coordination. Users (product developers and system/software designers) easily develop services and applications by composing reusable building blocks into (flow-)graph structures that can be animated, analyzed, simulated, verified, executed, and compiled. This way of handling the collaborative design of complex embedded systems has proven to be effective and adequate for the cooperation of non-programmers and non-technical people, which is the focus of this contribution, and it is now being rolled out into operational practice.

  5. Concepts associated with a unified life cycle analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whelan, Gene; Peffers, Melissa S.; Tolle, Duane A.

    There is a risk associated with most things in the world, and all things have a life cycle unto themselves, even brownfields. Many components can be described by a 'cycle of life.' For example, five such components are life-form, chemical, process, activity, and idea, although many more may exist. Brownfields may touch upon several of these life cycles. Each life cycle can be represented as independent software; therefore, a software technology structure is being formulated to allow for the seamless linkage of software products, representing various life-cycle aspects. Because classes of these life cycles tend to be independent of each other, the current research programs and efforts do not have to be revamped; therefore, this unified life-cycle paradigm builds upon current technology and is backward compatible while embracing future technology. Only when two of these life cycles coincide and one impacts the other is there connectivity and a transfer of information at the interface. The current framework approaches (e.g., FRAMES, 3MRA, etc.) have a design that is amenable to capturing (1) many of these underlying philosophical concepts to assure backward compatibility of diverse independent assessment frameworks and (2) linkage communication to help transfer the needed information at the points of intersection. The key effort will be to identify (1) linkage points (i.e., portals) between life cycles, (2) the type and form of data passing between life cycles, and (3) conditions when life cycles interact and communicate. This paper discusses design aspects associated with a unified life-cycle analysis, which can support not only brownfields but also other types of assessments.

  6. A unified teleoperated-autonomous dual-arm robotic system

    NASA Technical Reports Server (NTRS)

    Hayati, Samad; Lee, Thomas S.; Tso, Kam Sing; Backes, Paul G.; Lloyd, John

    1991-01-01

    A description is given of a complete robot control facility built as part of a NASA telerobotics program to develop a state-of-the-art robot control environment for performing experiments in the repair and assembly of spacelike hardware, to gain practical knowledge of such work, and to improve the associated technology. The basic architecture of the manipulator control subsystem is presented. The multiarm Robot Control C Library (RCCL), a key software component of the system, is described, along with its implementation on a Sun-4 computer. The system's simulation capability is also described, and the teleoperation and shared control features are explained.

  7. Designers' unified cost model

    NASA Technical Reports Server (NTRS)

    Freeman, W.; Ilcewicz, L.; Swanson, G.; Gutowski, T.

    1992-01-01

    The Structures Technology Program Office (STPO) at NASA LaRC has initiated development of a conceptual and preliminary designers' cost prediction model. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. This paper presents the team members, approach, goals, plans, and progress to date for development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).

  8. Biological Dynamics Markup Language (BDML): an open format for representing quantitative biological dynamics data

    PubMed Central

    Kyoda, Koji; Tohsato, Yukako; Ho, Kenneth H. L.; Onami, Shuichi

    2015-01-01

    Motivation: Recent progress in live-cell imaging and modeling techniques has resulted in generation of a large amount of quantitative data (from experimental measurements and computer simulations) on spatiotemporal dynamics of biological objects such as molecules, cells and organisms. Although many research groups have independently dedicated their efforts to developing software tools for visualizing and analyzing these data, these tools are often not compatible with each other because of different data formats. Results: We developed an open unified format, Biological Dynamics Markup Language (BDML; current version: 0.2), which provides a basic framework for representing quantitative biological dynamics data for objects ranging from molecules to cells to organisms. BDML is based on Extensible Markup Language (XML). Its advantages are machine and human readability and extensibility. BDML will improve the efficiency of development and evaluation of software tools for data visualization and analysis. Availability and implementation: A specification and a schema file for BDML are freely available online at http://ssbd.qbic.riken.jp/bdml/. Contact: sonami@riken.jp Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:25414366

  9. Biological Dynamics Markup Language (BDML): an open format for representing quantitative biological dynamics data.

    PubMed

    Kyoda, Koji; Tohsato, Yukako; Ho, Kenneth H L; Onami, Shuichi

    2015-04-01

    Recent progress in live-cell imaging and modeling techniques has resulted in generation of a large amount of quantitative data (from experimental measurements and computer simulations) on spatiotemporal dynamics of biological objects such as molecules, cells and organisms. Although many research groups have independently dedicated their efforts to developing software tools for visualizing and analyzing these data, these tools are often not compatible with each other because of different data formats. We developed an open unified format, Biological Dynamics Markup Language (BDML; current version: 0.2), which provides a basic framework for representing quantitative biological dynamics data for objects ranging from molecules to cells to organisms. BDML is based on Extensible Markup Language (XML). Its advantages are machine and human readability and extensibility. BDML will improve the efficiency of development and evaluation of software tools for data visualization and analysis. A specification and a schema file for BDML are freely available online at http://ssbd.qbic.riken.jp/bdml/. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
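
    Both BDML records above describe an XML-based format for spatiotemporal dynamics of biological objects. As a hedged illustration of what working with such a format looks like in practice, the sketch below writes and re-reads a BDML-style document with Python's standard library. The element names used here (bdml, object, position) are hypothetical stand-ins; the real vocabulary is defined by the published BDML 0.2 schema at http://ssbd.qbic.riken.jp/bdml/.

        # Hedged sketch: a BDML-style XML document built and parsed with the Python
        # standard library. Element and attribute names are assumptions, not the
        # actual BDML 0.2 schema.
        import xml.etree.ElementTree as ET

        doc = ET.Element("bdml", version="0.2")
        obj = ET.SubElement(doc, "object", id="cell001", type="cell")
        for t, (x, y, z) in enumerate([(0.0, 1.0, 2.0), (0.1, 1.1, 2.2)]):
            ET.SubElement(obj, "position", time=str(t), x=str(x), y=str(y), z=str(z))

        xml_text = ET.tostring(doc, encoding="unicode")
        print(xml_text)

        # Any tool aware of the same schema can parse the document back:
        parsed = ET.fromstring(xml_text)
        for pos in parsed.iter("position"):
            print(pos.get("time"), pos.get("x"), pos.get("y"), pos.get("z"))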

  10. Integration of Heterogeneous Bibliographic Information through Data Abstractions.

    ERIC Educational Resources Information Center

    Breazeal, Juliette Ow

    This study examines the integration of heterogeneous bibliographic information resources from geographically distributed locations in an automated, unified, and controlled way using abstract data types called "classes" through the Message-Object Model defined in Smalltalk-80 software. The concept of achieving data consistency by…

  11. Clinical results of HIS, RIS, PACS integration using data integration CASE tools

    NASA Astrophysics Data System (ADS)

    Taira, Ricky K.; Chan, Hing-Ming; Breant, Claudine M.; Huang, Lu J.; Valentino, Daniel J.

    1995-05-01

    Current infrastructure research in PACS is dominated by the development of communication networks (local area networks, teleradiology, ATM networks, etc.), multimedia display workstations, and hierarchical image storage architectures. However, limited work has been performed on developing flexible, expansible, and intelligent information processing architectures for the vast decentralized image and text data repositories prevalent in healthcare environments. Patient information is often distributed among multiple data management systems. Current large-scale efforts to integrate medical information and knowledge sources have been costly with limited retrieval functionality. Software integration strategies to unify distributed data and knowledge sources are still lacking commercially. Systems heterogeneity (i.e., differences in hardware platforms, communication protocols, database management software, nomenclature, etc.) is at the heart of the problem and is unlikely to be standardized in the near future. In this paper, we demonstrate the use of newly available CASE (computer-aided software engineering) tools to rapidly integrate HIS, RIS, and PACS information systems. The advantages of these tools include fast development time (low-level code is generated from graphical specifications) and easy system maintenance (excellent documentation, easy-to-perform changes, and a centralized code repository in an object-oriented database). The CASE tools are used to develop and manage the `middle-ware' in our client-mediator-server architecture for systems integration. Our architecture is scalable and can accommodate heterogeneous databases and communication protocols.

  12. Mushu, a free- and open source BCI signal acquisition, written in Python.

    PubMed

    Venthur, Bastian; Blankertz, Benjamin

    2012-01-01

    The following paper describes Mushu, a signal acquisition software package for the retrieval and online streaming of Electroencephalography (EEG) data. It is written for, but not limited to, the needs of Brain Computer Interfacing (BCI). Its main goal is to provide a unified interface to EEG data regardless of the amplifiers used. It runs under all major operating systems, such as Windows, Mac OS and Linux, is written in Python, and is free and open-source software licensed under the terms of the GNU General Public License.
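
    The central idea in the record, a unified interface that downstream BCI code programs against regardless of the amplifier hardware, can be sketched as a small abstract base class. The class and method names below are illustrative, not Mushu's actual API.

        # Hedged sketch of a unified amplifier interface; names are not Mushu's API.
        from abc import ABC, abstractmethod
        import numpy as np

        class Amplifier(ABC):
            """Common interface a BCI application codes against."""
            @abstractmethod
            def start(self): ...
            @abstractmethod
            def get_data(self):
                """Return a (samples, channels) array of new EEG samples."""
            @abstractmethod
            def stop(self): ...

        class ReplayAmplifier(Amplifier):
            """Example driver that replays a recorded array in fixed-size chunks."""
            def __init__(self, recording, chunk=32):
                self._rec, self._chunk, self._pos = recording, chunk, 0
            def start(self):
                self._pos = 0
            def get_data(self):
                block = self._rec[self._pos:self._pos + self._chunk]
                self._pos += self._chunk
                return block
            def stop(self):
                pass

        # Downstream processing only ever sees the Amplifier interface.
        amp = ReplayAmplifier(np.random.randn(256, 8))
        amp.start()
        while (block := amp.get_data()).size:
            pass  # feed block to online BCI processing here
        amp.stop()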

  13. Transient deformational properties of high temperature alloys used in solid oxide fuel cell stacks

    NASA Astrophysics Data System (ADS)

    Molla, Tesfaye Tadesse; Kwok, Kawai; Frandsen, Henrik Lund

    2017-05-01

    Stresses and the probability of failure during operation of solid oxide fuel cells (SOFCs) are affected by the deformational properties of the different components of the SOFC stack. Though the overall stress relaxes with time during steady-state operation, large stresses would normally appear through transients in operation, including temporary shutdowns. These stresses are highly affected by the transient creep behavior of the metallic components in the SOFC stack. This study investigates whether a variation of the so-called Chaboche's unified power law together with isotropic hardening can represent the transient behavior of Crofer 22 APU, a typical iron-chromium alloy used in SOFC stacks. The material parameters for the model are determined by measurements involving relaxation and constant strain rate experiments. The constitutive law is implemented into commercial finite element software using a user-defined material model. The implementation is then validated against constant strain rate, cyclic and creep experiments. The predictions from the developed model are found to agree well with experimental data. It is therefore concluded that Chaboche's unified power law can be applied to describe the high-temperature inelastic deformational behavior of Crofer 22 APU used for metallic interconnects in SOFC stacks.
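
    To make the constitutive idea concrete, the sketch below integrates a one-dimensional Chaboche-type unified power law with isotropic hardening through a constant-strain-rate ramp followed by a hold (a relaxation test, as used in the record). The parameter values are placeholders, not the fitted Crofer 22 APU constants from the paper, and the explicit time integration is only for illustration.

        # Minimal 1D sketch of a Chaboche-type unified power law with isotropic
        # hardening, integrated explicitly. Parameters are placeholders.
        import numpy as np

        E, K, n = 150e3, 800.0, 5.0          # MPa, MPa*s^(1/n), -
        sigma0, Q, b = 20.0, 60.0, 10.0      # initial yield, saturation, hardening rate

        def simulate(strain_rate, t_end, dt=1e-3):
            """Constant-strain-rate tension up to t_end/2, then a hold (relaxation)."""
            t_hold = t_end / 2.0
            eps_in, R, out = 0.0, 0.0, []
            for t in np.arange(0.0, t_end, dt):
                eps = strain_rate * min(t, t_hold)       # total strain held after t_hold
                sigma = E * (eps - eps_in)               # elastic stress update
                overstress = abs(sigma) - R - sigma0
                dp = (overstress / K) ** n if overstress > 0.0 else 0.0
                eps_in += dp * np.sign(sigma) * dt       # viscoplastic flow
                R += b * (Q - R) * dp * dt               # isotropic hardening
                out.append((t, sigma))
            return np.array(out)

        history = simulate(strain_rate=1e-3, t_end=200.0)
        print("peak stress  [MPa]:", round(float(history[:, 1].max()), 1))
        print("final stress [MPa]:", round(float(history[-1, 1]), 1))   # after 100 s of relaxation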

  14. Advances in Software Tools for Pre-processing and Post-processing of Overset Grid Computations

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    2004-01-01

    Recent developments in three pieces of software for performing pre-processing and post-processing work on numerical computations using overset grids are presented. The first is the OVERGRID graphical interface which provides a unified environment for the visualization, manipulation, generation and diagnostics of geometry and grids. Modules are also available for automatic boundary conditions detection, flow solver input preparation, multiple component dynamics input preparation and dynamics animation, simple solution viewing for moving components, and debris trajectory analysis input preparation. The second is a grid generation script library that enables rapid creation of grid generation scripts. A sample of recent applications will be described. The third is the OVERPLOT graphical interface for displaying and analyzing history files generated by the flow solver. Data displayed include residuals, component forces and moments, number of supersonic and reverse flow points, and various dynamics parameters.

  15. The Cooperate Assistive Teamwork Environment for Software Description Languages.

    PubMed

    Groenda, Henning; Seifermann, Stephan; Müller, Karin; Jaworek, Gerhard

    2015-01-01

    Versatile description languages such as the Unified Modeling Language (UML) are commonly used in software engineering across different application domains in theory and practice. They often use graphical notations and leverage visual memory for expressing complex relations. Those notations are hard to access for people with visual impairment and impede their smooth inclusion in an engineering team. Existing approaches provide textual notations but require manual synchronization between the notations. This paper presents requirements for an accessible and language-aware team work environment as well as our plan for the assistive implementation of Cooperate. An industrial software engineering team consisting of people with and without visual impairment will evaluate the implementation.

  16. DataSpread: Unifying Databases and Spreadsheets.

    PubMed

    Bendre, Mangesh; Sun, Bofan; Zhang, Ding; Zhou, Xinyan; Chang, Kevin ChenChuan; Parameswaran, Aditya

    2015-08-01

    Spreadsheet software is often the tool of choice for ad-hoc tabular data management, processing, and visualization, especially on tiny data sets. On the other hand, relational database systems offer significant power, expressivity, and efficiency over spreadsheet software for data management, while lacking in the ease of use and ad-hoc analysis capabilities. We demonstrate DataSpread, a data exploration tool that holistically unifies databases and spreadsheets. It continues to offer a Microsoft Excel-based spreadsheet front-end, while in parallel managing all the data in a back-end database, specifically, PostgreSQL. DataSpread retains all the advantages of spreadsheets, including ease of use, ad-hoc analysis and visualization capabilities, and a schema-free nature, while also adding the advantages of traditional relational databases, such as scalability and the ability to use arbitrary SQL to import, filter, or join external or internal tables and have the results appear in the spreadsheet. DataSpread needs to reason about and reconcile differences in the notions of schema, addressing of cells and tuples, and the current "pane" (which exists in spreadsheets but not in traditional databases), and support data modifications at both the front-end and the back-end. Our demonstration will center on our first and early prototype of the DataSpread, and will give the attendees a sense for the enormous data exploration capabilities offered by unifying spreadsheets and databases.

  17. DataSpread: Unifying Databases and Spreadsheets

    PubMed Central

    Bendre, Mangesh; Sun, Bofan; Zhang, Ding; Zhou, Xinyan; Chang, Kevin ChenChuan; Parameswaran, Aditya

    2015-01-01

    Spreadsheet software is often the tool of choice for ad-hoc tabular data management, processing, and visualization, especially on tiny data sets. On the other hand, relational database systems offer significant power, expressivity, and efficiency over spreadsheet software for data management, while lacking in the ease of use and ad-hoc analysis capabilities. We demonstrate DataSpread, a data exploration tool that holistically unifies databases and spreadsheets. It continues to offer a Microsoft Excel-based spreadsheet front-end, while in parallel managing all the data in a back-end database, specifically, PostgreSQL. DataSpread retains all the advantages of spreadsheets, including ease of use, ad-hoc analysis and visualization capabilities, and a schema-free nature, while also adding the advantages of traditional relational databases, such as scalability and the ability to use arbitrary SQL to import, filter, or join external or internal tables and have the results appear in the spreadsheet. DataSpread needs to reason about and reconcile differences in the notions of schema, addressing of cells and tuples, and the current “pane” (which exists in spreadsheets but not in traditional databases), and support data modifications at both the front-end and the back-end. Our demonstration will center on our first and early prototype of the DataSpread, and will give the attendees a sense for the enormous data exploration capabilities offered by unifying spreadsheets and databases. PMID:26900487
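
    Both DataSpread records above describe keeping a spreadsheet front-end while storing all data in a back-end relational database and allowing arbitrary SQL over it. The sketch below illustrates only that general idea with a single cell table keyed by sheet, row and column; DataSpread itself uses PostgreSQL and its own schema, so sqlite3 and this layout are stand-ins, not the system's design.

        # Hedged sketch of backing spreadsheet cells with a relational store.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE cells (
            sheet TEXT, row INTEGER, col INTEGER, value TEXT,
            PRIMARY KEY (sheet, row, col))""")

        # Front-end edits become upserts keyed by (sheet, row, col).
        edits = [("Sheet1", r, 1, str(r * r)) for r in range(1, 6)]
        conn.executemany("INSERT OR REPLACE INTO cells VALUES (?, ?, ?, ?)", edits)

        # Arbitrary SQL over the same store; results would surface back in the sheet.
        query = ("SELECT row, col, value FROM cells "
                 "WHERE sheet = ? AND CAST(value AS INTEGER) > 4 ORDER BY row")
        for row in conn.execute(query, ("Sheet1",)):
            print(row)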

  18. The Development of Web-based Graphical User Interface for Unified Modeling Data with Multi (Correlated) Responses

    NASA Astrophysics Data System (ADS)

    Made Tirta, I.; Anggraeni, Dian

    2018-04-01

    Statistical models have been developed rapidly in various directions to accommodate various types of data. Data collected from longitudinal, repeated-measures, or clustered designs (whether continuous, binary, count, or ordinal) are more likely to be correlated. Therefore, statistical models for independent responses, such as the Generalized Linear Model (GLM) and the Generalized Additive Model (GAM), are not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models and various mixed-effects models such as GLMM (Generalized Linear Mixed Models) and HGLM (Hierarchical Generalized Linear Models) for subject-specific models. These models are available in the free open-source software R, but they can only be accessed through a command-line interface (using scripts). On the other hand, most practicing researchers rely heavily on menu-based Graphical User Interfaces (GUIs). We develop, using the Shiny framework, a standard pull-down-menu Web-GUI that unifies most models for correlated responses. The Web-GUI accommodates almost all needed features and enables users to carry out and compare various models for repeated-measures data (GEE, GLMM, HGLM, and GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web-GUI and illustrates their use. In general we find that GEE, GLMM, and HGLM give very close results.
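
    The Web-GUI in the record wraps R back-ends, but the kind of marginal model it exposes can be illustrated in Python as well. The sketch below fits a Poisson GEE with an exchangeable working correlation to simulated clustered counts using statsmodels; the simulated data set and the choice of statsmodels as a stand-in for the R tooling are assumptions for demonstration only.

        # Hedged illustration of a GEE fit for correlated count data (statsmodels,
        # not the R packages used by the paper's Web-GUI).
        import numpy as np, pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n_subj, n_obs = 50, 6
        subject = np.repeat(np.arange(n_subj), n_obs)
        x = rng.normal(size=n_subj * n_obs)
        subj_effect = np.repeat(rng.normal(scale=0.5, size=n_subj), n_obs)  # within-subject correlation
        y = rng.poisson(np.exp(0.2 + 0.5 * x + subj_effect))
        df = pd.DataFrame({"y": y, "x": x, "subject": subject})

        # Marginal model: Poisson GEE with an exchangeable working correlation.
        model = smf.gee("y ~ x", groups="subject", data=df,
                        family=sm.families.Poisson(),
                        cov_struct=sm.cov_struct.Exchangeable())
        print(model.fit().summary())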

  19. Architectural Design of a LMS with LTSA-Conformance

    ERIC Educational Resources Information Center

    Sengupta, Souvik; Dasgupta, Ranjan

    2017-01-01

    This paper illustrates an approach for architectural design of a Learning Management System (LMS), which is verifiable against the Learning Technology System Architecture (LTSA) conformance rules. We introduce a new method for software architectural design that extends the Unified Modeling Language (UML) component diagram with the formal…

  20. A versatile nondestructive evaluation imaging workstation

    NASA Technical Reports Server (NTRS)

    Chern, E. James; Butler, David W.

    1994-01-01

    Ultrasonic C-scan and eddy current imaging systems are pointwise-type evaluation systems that rely on a mechanical scanner to physically maneuver a probe relative to the specimen, point by point, in order to acquire data and generate images. Since the ultrasonic C-scan and eddy current imaging systems are based on the same mechanical scanning mechanisms, the two systems can be combined using the same PC platform with a common mechanical manipulation subsystem and integrated data acquisition software. Based on this concept, we have developed an IBM PC-based combined ultrasonic C-scan and eddy current imaging system. The system is modularized and provides capacity for future hardware and software expansions. Advantages associated with the combined system are: (1) eliminated duplication of the computer and mechanical hardware, (2) unified data acquisition, processing and storage software, (3) reduced setup time for repetitious ultrasonic and eddy current scans, and (4) improved system efficiency. The concept can be adapted to many engineering systems by integrating related PC-based instruments into one multipurpose workstation such as dispensing, machining, packaging, sorting, and other industrial applications.

  1. A versatile nondestructive evaluation imaging workstation

    NASA Astrophysics Data System (ADS)

    Chern, E. James; Butler, David W.

    1994-02-01

    Ultrasonic C-scan and eddy current imaging systems are pointwise-type evaluation systems that rely on a mechanical scanner to physically maneuver a probe relative to the specimen, point by point, in order to acquire data and generate images. Since the ultrasonic C-scan and eddy current imaging systems are based on the same mechanical scanning mechanisms, the two systems can be combined using the same PC platform with a common mechanical manipulation subsystem and integrated data acquisition software. Based on this concept, we have developed an IBM PC-based combined ultrasonic C-scan and eddy current imaging system. The system is modularized and provides capacity for future hardware and software expansions. Advantages associated with the combined system are: (1) eliminated duplication of the computer and mechanical hardware, (2) unified data acquisition, processing and storage software, (3) reduced setup time for repetitious ultrasonic and eddy current scans, and (4) improved system efficiency. The concept can be adapted to many engineering systems by integrating related PC-based instruments into one multipurpose workstation such as dispensing, machining, packaging, sorting, and other industrial applications.

  2. Software Hardware Asset Reuse Enterprise (SHARE) Repository Framework Final Report: Component Specification and Ontology

    DTIC Science & Technology

    2009-08-19

    SSDS Ship Self Defense System; TSTS Total Ship Training System; UDDI Universal Description, Discovery, and Integration; UML Unified Modeling Language. The accompanying XML schema excerpt defines a "ContractorOrganization" element of type "ContractorOrganizationType", whose documentation reads "Identifies a contractor organization responsible for the...

  3. Peridigm summary report : lessons learned in development with agile components.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salinger, Andrew Gerhard; Mitchell, John Anthony; Littlewood, David John

    2011-09-01

    This report details efforts to deploy Agile Components for rapid development of a peridynamics code, Peridigm. The goal of Agile Components is to enable the efficient development of production-quality software by providing a well-defined, unifying interface to a powerful set of component-based software. Specifically, Agile Components facilitate interoperability among packages within the Trilinos Project, including data management, time integration, uncertainty quantification, and optimization. Development of the Peridigm code served as a testbed for Agile Components and resulted in a number of recommendations for future development. Agile Components successfully enabled rapid integration of Trilinos packages into Peridigm. A cost of this approach, however, was a set of restrictions on Peridigm's architecture which impacted the ability to track history-dependent material data, dynamically modify the model discretization, and interject user-defined routines into the time integration algorithm. These restrictions resulted in modifications to the Agile Components approach, as implemented in Peridigm, and in a set of recommendations for future Agile Components development. Specific recommendations include improved handling of material states, a more flexible flow control model, and improved documentation. A demonstration mini-application, SimpleODE, was developed at the onset of this project and is offered as a potential supplement to Agile Components documentation.

  4. Novel features and enhancements in BioBin, a tool for the biologically inspired binning and association analysis of rare variants

    PubMed Central

    Byrska-Bishop, Marta; Wallace, John; Frase, Alexander T; Ritchie, Marylyn D

    2018-01-01

    Motivation: BioBin is an automated bioinformatics tool for the multi-level biological binning of sequence variants. Herein, we present a significant update to BioBin which expands the software to facilitate a comprehensive rare variant analysis and incorporates novel features and analysis enhancements. Results: In BioBin 2.3, we extend our software tool by implementing statistical association testing, updating the binning algorithm, as well as incorporating novel analysis features, providing for a robust, highly customizable, and unified rare variant analysis tool. Availability and implementation: The BioBin software package is open source and freely available to users at http://www.ritchielab.com/software/biobin-download Contact: mdritchie@geisinger.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28968757

  5. SeisCode: A seismological software repository for discovery and collaboration

    NASA Astrophysics Data System (ADS)

    Trabant, C.; Reyes, C. G.; Clark, A.; Karstens, R.

    2012-12-01

    SeisCode is a community repository for software used in seismological and related fields. The repository is intended to increase discoverability of such software and to provide a long-term home for software projects. Other places exist where seismological software may be found, but none meet the requirements necessary for an always current, easy to search, well documented, and citable resource for projects. Organizations such as IRIS, ORFEUS, and the USGS have websites with lists of available or contributed seismological software. Since the authors themselves often do not maintain these lists, the documentation often consists of a sentence or paragraph, and the available software may be outdated. Repositories such as GoogleCode and SourceForge, which are directly maintained by the authors, provide version control and issue tracking but do not provide a unified way of locating geophysical software scattered in and among countless unrelated projects. Additionally, projects are hosted at language-specific sites such as Mathworks and PyPI, in FTP directories, and in websites strewn across the Web. Search engines are only partially effective discovery tools, as the desired software is often hidden deep within the results. SeisCode provides software authors a place to present their software, codes, scripts, tutorials, and examples to the seismological community. Authors can choose their own level of involvement. At one end of the spectrum, the author might simply create a web page that points to an existing site. At the other extreme, an author may choose to leverage the many tools provided by SeisCode, such as a source code management tool with integrated issue tracking, forums, news feeds, downloads, wikis, and more. For software development projects with multiple authors, SeisCode can also be used as a central site for collaboration. SeisCode provides the community with an easy way to discover software, while providing authors a way to build a community around their software packages. IRIS invites the seismological community to browse and to submit projects to https://seiscode.iris.washington.edu/

  6. SCIFIO: an extensible framework to support scientific image formats.

    PubMed

    Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W

    2016-12-07

    No gold standard exists in the world of scientific image acquisition; a proliferation of instruments each with its own proprietary data format has made out-of-the-box sharing of that data nearly impossible. In the field of light microscopy, the Bio-Formats library was designed to translate such proprietary data formats to a common, open-source schema, enabling sharing and reproduction of scientific results. While Bio-Formats has proved successful for microscopy images, the greater scientific community was lacking a domain-independent framework for format translation. SCIFIO (SCientific Image Format Input and Output) is presented as a freely available, open-source library unifying the mechanisms of reading and writing image data. The core of SCIFIO is its modular definition of formats, the design of which clearly outlines the components of image I/O to encourage extensibility, facilitated by the dynamic discovery of the SciJava plugin framework. SCIFIO is structured to support coexistence of multiple domain-specific open exchange formats, such as Bio-Formats' OME-TIFF, within a unified environment. SCIFIO is a freely available software library developed to standardize the process of reading and writing scientific image formats.

  7. A Productivity Enhancement Study of the FMSO (Fleet Material Support Office) Software Effort.

    DTIC Science & Technology

    1983-11-01

    This report presents the results of a productivity enhancement ... it is the opinion of the authors that FMSO is well managed and that employee morale is generally good, but that the or... system that will support computer programming work, documentation and software management ... this should be a unified system (all parts of it can ...

  8. Evaluation of a deidentification (De-Id) software engine to share pathology reports and clinical documents for research.

    PubMed

    Gupta, Dilip; Saul, Melissa; Gilbertson, John

    2004-02-01

    We evaluated a comprehensive deidentification engine at the University of Pittsburgh Medical Center (UPMC), Pittsburgh, PA, that uses a complex set of rules, dictionaries, pattern-matching algorithms, and the Unified Medical Language System to identify and replace identifying text in clinical reports while preserving medical information for sharing in research. In our initial data set of 967 surgical pathology reports, the software did not suppress outside (103), UPMC (47), and non-UPMC (56) accession numbers; dates (7); names (9) or initials (25) of case pathologists; or hospital or laboratory names (46). In 150 reports, some clinical information was suppressed inadvertently (overmarking). The engine retained eponymic patient names, eg, Barrett and Gleason. In the second evaluation (1,000 reports), the software did not suppress outside (90) or UPMC (6) accession numbers or names (4) or initials (2) of case pathologists. In the third evaluation, the software removed names of patients, hospitals (297/300), pathologists (297/300), transcriptionists, residents and physicians, dates of procedures, and accession numbers (298/300). By the end of the evaluation, the system was reliably and specifically removing safe-harbor identifiers and producing highly readable deidentified text without removing important clinical information. Collaboration between pathology domain experts and system developers and continuous quality assurance are needed to optimize ongoing deidentification processes.
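
    The engine evaluated above combines rules, dictionaries, pattern matching, and the Unified Medical Language System. As a hedged illustration of the pattern-matching style only, the sketch below replaces two identifier patterns in free text; the two regexes are illustrative assumptions and represent a tiny fraction of what the De-Id engine actually does.

        # Hedged sketch of rule-based deidentification via pattern matching.
        # These regexes are illustrative only, not the De-Id engine's rule set.
        import re

        RULES = [
            (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),        # dates like 3/14/2003
            (re.compile(r"\b[A-Z]{1,3}\d{2}-\d{3,6}\b"), "[ACCESSION]"),   # assumed accession-number pattern
        ]

        def deidentify(report: str) -> str:
            for pattern, token in RULES:
                report = pattern.sub(token, report)
            return report

        text = "Specimen S03-12345 received 3/14/2003; Gleason score 7."
        print(deidentify(text))
        # -> "Specimen [ACCESSION] received [DATE]; Gleason score 7."  (eponym 'Gleason' retained)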

  9. Integrating Formal Methods and Testing 2002

    NASA Technical Reports Server (NTRS)

    Cukic, Bojan

    2002-01-01

    Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting for not only formal verification and program testing, but also the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given the assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10^-4 or higher). The coming years shall address methodologies to realistically estimate the impacts of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. The objectives are to: A) combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications; B) quantify the impact of these methods on software reliability; C) demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a certain confidence level; and D) quantify and justify the reliability estimate for systems developed using various methods.

  10. Technical Note: scuda: A software platform for cumulative dose assessment.

    PubMed

    Park, Seyoun; McNutt, Todd; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon

    2016-10-01

    Accurate tracking of anatomical changes and computation of actually delivered dose to the patient are critical for successful adaptive radiation therapy (ART). Additionally, efficient data management and fast processing are practically important for the adoption in clinic as ART involves a large amount of image and treatment data. The purpose of this study was to develop an accurate and efficient Software platform for CUmulative Dose Assessment (scuda) that can be seamlessly integrated into the clinical workflow. scuda consists of deformable image registration (DIR), segmentation, dose computation modules, and a graphical user interface. It is connected to our image PACS and radiotherapy informatics databases from which it automatically queries/retrieves patient images, radiotherapy plan, beam data, and daily treatment information, thus providing an efficient and unified workflow. For accurate registration of the planning CT and daily CBCTs, the authors iteratively correct CBCT intensities by matching local intensity histograms during the DIR process. Contours of the target tumor and critical structures are then propagated from the planning CT to daily CBCTs using the computed deformations. The actual delivered daily dose is computed using the registered CT and patient setup information by a superposition/convolution algorithm, and accumulated using the computed deformation fields. Both DIR and dose computation modules are accelerated by a graphics processing unit. The cumulative dose computation process has been validated on 30 head and neck (HN) cancer cases, showing 3.5 ± 5.0 Gy (mean±STD) absolute mean dose differences between the planned and the actually delivered doses in the parotid glands. On average, DIR, dose computation, and segmentation take 20 s/fraction and 17 min for a 35-fraction treatment including additional computation for dose accumulation. The authors developed a unified software platform that provides accurate and efficient monitoring of anatomical changes and computation of actually delivered dose to the patient, thus realizing an efficient cumulative dose computation workflow. Evaluation on HN cases demonstrated the utility of our platform for monitoring the treatment quality and detecting significant dosimetric variations that are keys to successful ART.

  11. Technical Note: SCUDA: A software platform for cumulative dose assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Seyoun; McNutt, Todd; Quon, Harry

    Purpose: Accurate tracking of anatomical changes and computation of actually delivered dose to the patient are critical for successful adaptive radiation therapy (ART). Additionally, efficient data management and fast processing are practically important for the adoption in clinic as ART involves a large amount of image and treatment data. The purpose of this study was to develop an accurate and efficient Software platform for CUmulative Dose Assessment (SCUDA) that can be seamlessly integrated into the clinical workflow. Methods: SCUDA consists of deformable image registration (DIR), segmentation, dose computation modules, and a graphical user interface. It is connected to our image PACS and radiotherapy informatics databases from which it automatically queries/retrieves patient images, radiotherapy plan, beam data, and daily treatment information, thus providing an efficient and unified workflow. For accurate registration of the planning CT and daily CBCTs, the authors iteratively correct CBCT intensities by matching local intensity histograms during the DIR process. Contours of the target tumor and critical structures are then propagated from the planning CT to daily CBCTs using the computed deformations. The actual delivered daily dose is computed using the registered CT and patient setup information by a superposition/convolution algorithm, and accumulated using the computed deformation fields. Both DIR and dose computation modules are accelerated by a graphics processing unit. Results: The cumulative dose computation process has been validated on 30 head and neck (HN) cancer cases, showing 3.5 ± 5.0 Gy (mean±STD) absolute mean dose differences between the planned and the actually delivered doses in the parotid glands. On average, DIR, dose computation, and segmentation take 20 s/fraction and 17 min for a 35-fraction treatment including additional computation for dose accumulation. Conclusions: The authors developed a unified software platform that provides accurate and efficient monitoring of anatomical changes and computation of actually delivered dose to the patient, thus realizing an efficient cumulative dose computation workflow. Evaluation on HN cases demonstrated the utility of our platform for monitoring the treatment quality and detecting significant dosimetric variations that are keys to successful ART.
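
    The dose-accumulation step described in the two scuda/SCUDA records above, warping each daily dose through a deformation field and summing it in the planning frame, can be sketched with NumPy and SciPy. The version below only illustrates the bookkeeping under the assumption of a pull-back interpolation through a known deformation field; scuda's DIR, superposition/convolution dose engine, and GPU acceleration are not reproduced here.

        # Hedged sketch of dose accumulation through a deformation field.
        import numpy as np
        from scipy.ndimage import map_coordinates

        shape = (64, 64, 64)
        cumulative = np.zeros(shape, dtype=np.float32)

        def accumulate(cumulative, daily_dose, deformation):
            """deformation[..., k] maps planning-grid voxels to daily-image coordinates."""
            coords = [deformation[..., k].ravel() for k in range(3)]
            warped = map_coordinates(daily_dose, coords, order=1).reshape(deformation.shape[:-1])
            return cumulative + warped

        # Identity deformation as a placeholder for a real DIR result.
        grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"),
                        axis=-1).astype(np.float32)
        for fraction in range(3):
            daily_dose = np.full(shape, 2.0, dtype=np.float32)   # placeholder 2 Gy/fraction
            cumulative = accumulate(cumulative, daily_dose, grid)

        print("mean accumulated dose [Gy]:", float(cumulative.mean()))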

  12. The integration of quantitative information with an intelligent decision support system for residential energy retrofits

    NASA Astrophysics Data System (ADS)

    Mo, Yunjeong

    The purpose of this research is to support the development of an intelligent Decision Support System (DSS) by integrating quantitative information with expert knowledge in order to facilitate effective retrofit decision-making. To achieve this goal, the Energy Retrofit Decision Process Framework is analyzed. Expert system shell software, a retrofit measure cost database, and energy simulation software are needed for developing the DSS; Exsys Corvid, the NREM database and BEopt were chosen for implementing an integration model. This integration model demonstrates the holistic function of a residential energy retrofit system for existing homes, by providing a prioritized list of retrofit measures with cost information, energy simulation and expert advice. The users, such as homeowners and energy auditors, can acquire all of the necessary retrofit information from this unified system without having to explore several separate systems. The integration model plays the role of a prototype for the finalized intelligent decision support system. It implements all of the necessary functions for the finalized DSS, including integration of the database, energy simulation and expert knowledge.

  13. Software for Training in Pre-College Mathematics

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O.; Moebes, Travis A.; VanAlstine, Scot

    2003-01-01

    The Intelligent Math Tutor (IMT) is a computer program for training students in pre-college and college-level mathematics courses, including fundamentals, intermediate algebra, college algebra, and trigonometry. The IMT can be executed on a server computer for access by students via the Internet; alternatively, it can be executed on students' computers equipped with compact-disk/read-only-memory (CD-ROM) drives. The IMT provides interactive exercises, assessment, tracking, and an on-line graphing calculator with algebraic-manipulation capabilities. The IMT provides an innovative combination of content, delivery mechanism, and artificial intelligence. Careful organization and presentation of the content make it possible to provide intelligent feedback to the student based on performance on exercises and tests. The tracking and feedback mechanisms are implemented within the capabilities of a commercial off-the-shelf development software tool and are written in the Unified Modeling Language to maximize reuse and minimize development cost. The graphing calculator is a standard feature of most college and pre-college algebra and trigonometry courses. Placing this functionality in a Java applet decreases the cost, provides greater capabilities, and provides an opportunity to integrate the calculator with the lessons.

  14. Software use cases to elicit the software requirements analysis within the ASTRI project

    NASA Astrophysics Data System (ADS)

    Conforti, Vito; Antolini, Elisa; Bonnoli, Giacomo; Bruno, Pietro; Bulgarelli, Andrea; Capalbi, Milvia; Fioretti, Valentina; Fugazza, Dino; Gardiol, Daniele; Grillo, Alessandro; Leto, Giuseppe; Lombardi, Saverio; Lucarelli, Fabrizio; Maccarone, Maria Concetta; Malaguti, Giuseppe; Pareschi, Giovanni; Russo, Federico; Sangiorgi, Pierluca; Schwarz, Joseph; Scuderi, Salvatore; Tanci, Claudio; Tosti, Gino; Trifoglio, Massimo; Vercellone, Stefano; Zanmar Sanchez, Ricardo

    2016-07-01

    The Italian National Institute for Astrophysics (INAF) is leading the Astrofisica con Specchi a Tecnologia Replicante Italiana (ASTRI) project whose main purpose is the realization of small size telescopes (SST) for the Cherenkov Telescope Array (CTA). The first goal of the ASTRI project has been the development and operation of an innovative end-to-end telescope prototype using a dual-mirror optical configuration (SST-2M) equipped with a camera based on silicon photo-multipliers and very fast read-out electronics. The ASTRI SST-2M prototype has been installed in Italy at the INAF "M.G. Fracastoro" Astronomical Station located at Serra La Nave, on Mount Etna, Sicily. This prototype will be used to test several mechanical, optical, control hardware and software solutions which will be used in the ASTRI mini-array, comprising nine telescopes proposed to be placed at the CTA southern site. The ASTRI mini-array is a collaborative and international effort led by INAF and carried out by Italy, Brazil and South-Africa. We present here the use cases, through UML (Unified Modeling Language) diagrams and text details, that describe the functional requirements of the software that will manage the ASTRI SST-2M prototype, and the lessons learned thanks to these activities. We intend to adopt the same approach for the Mini Array Software System that will manage the ASTRI miniarray operations. Use cases are of importance for the whole software life cycle; in particular they provide valuable support to the validation and verification activities. Following the iterative development approach, which breaks down the software development into smaller chunks, we have analysed the requirements, developed, and then tested the code in repeated cycles. The use case technique allowed us to formalize the problem through user stories that describe how the user procedurally interacts with the software system. Through the use cases we improved the communication among team members, fostered common agreement about system requirements, defined the normal and alternative course of events, understood better the business process, and defined the system test to ensure that the delivered software works properly. We present a summary of the ASTRI SST-2M prototype use cases, and how the lessons learned can be exploited for the ASTRI mini-array proposed for the CTA Observatory.

  15. JAMI: a Java library for molecular interactions and data interoperability.

    PubMed

    Sivade Dumousseau, M; Koch, M; Shrivastava, A; Alonso-López, D; De Las Rivas, J; Del-Toro, N; Combe, C W; Meldal, B H M; Heimbach, J; Rappsilber, J; Sullivan, J; Yehudi, Y; Orchard, S

    2018-04-11

    A number of different molecular interactions data download formats now exist, designed to allow access to these valuable data by diverse user groups. These formats include the PSI-XML and MITAB standard interchange formats developed by the Molecular Interaction workgroup of the HUPO-PSI, in addition to other, use-specific downloads produced by other resources. The onus is currently on the user to ensure that a piece of software is capable of reading and writing all necessary versions of each format. This problem may increase, as data providers strive to meet ever more sophisticated user demands and data types. A collaboration between EMBL-EBI and the University of Cambridge has produced JAMI, a single library to unify standard molecular interaction data formats such as PSI-MI XML and PSI-MITAB. The JAMI free, open-source library enables the development of molecular interaction computational tools and pipelines without the need to produce different versions of software to read different versions of the data formats. Software and tools developed on top of the JAMI framework are able to integrate and support both PSI-MI XML and PSI-MITAB. The use of JAMI avoids the requirement to chain conversions between formats in order to reach a desired output format and prevents code and unit test duplication as the code becomes more modular. JAMI's model interfaces are abstracted from the underlying format, hiding the complexity and requirements of each data format from developers using JAMI as a library.

  16. UAF: a generic OPC unified architecture framework

    NASA Astrophysics Data System (ADS)

    Pessemier, Wim; Deconinck, Geert; Raskin, Gert; Saey, Philippe; Van Winckel, Hans

    2012-09-01

    As an emerging Service Oriented Architecture (SOA) specifically designed for industrial automation and process control, the OPC Unified Architecture specification should be regarded as an attractive candidate for controlling scientific instrumentation. Even though an industry-backed standard such as OPC UA can offer substantial added value to these projects, its inherent complexity poses an important obstacle for adopting the technology. Building OPC UA applications requires considerable effort, even when taking advantage of a COTS Software Development Kit (SDK). The OPC Unified Architecture Framework (UAF) attempts to reduce this burden by introducing an abstraction layer between the SDK and the application code in order to achieve a better separation of the technical and the functional concerns. True to its industrial origin, the primary requirement of the framework is to maintain interoperability by staying close to the standard specifications, and by expecting the minimum compliance from other OPC UA servers and clients. UAF can therefore be regarded as a software framework to quickly and comfortably develop and deploy OPC UA-based applications, while remaining compatible with third-party OPC UA-compliant toolkits, servers (such as PLCs) and clients (such as SCADA software). In the first phase, as covered by this paper, only the client side of UAF has been tackled, in order to transparently handle discovery, session management, subscriptions, monitored items, etc. We describe the design principles and internal architecture of our open-source software project, the first results of the framework running at the Mercator Telescope, and we give a preview of the planned server-side implementation.
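
    The abstraction-layer idea in the record, application code talking to a small stable interface while session handling is hidden behind it, is essentially a facade. The sketch below illustrates that pattern only; the class names are illustrative stand-ins rather than the UAF API, and the "SDK client" is faked with an in-memory dictionary instead of a real OPC UA stack.

        # Hedged sketch of a facade hiding SDK session management from application code.
        class SdkClient:
            """Stand-in for a vendor OPC UA SDK session (no real network stack here)."""
            def __init__(self, endpoint):
                self.endpoint, self.connected = endpoint, False
                self._address_space = {"ns=2;s=Telescope.Azimuth": 123.4}
            def connect(self):
                self.connected = True
            def disconnect(self):
                self.connected = False
            def read(self, node_id):
                return self._address_space[node_id]

        class UnifiedClient:
            """What the application codes against: connect-on-demand reads."""
            def __init__(self, endpoint):
                self._sdk = SdkClient(endpoint)
            def read(self, node_id):
                if not self._sdk.connected:          # session management hidden here
                    self._sdk.connect()
                return self._sdk.read(node_id)

        client = UnifiedClient("opc.tcp://mercator.example:4840")
        print(client.read("ns=2;s=Telescope.Azimuth"))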

  17. DEMO: Action Recommendation for Cyber Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez, Luke R.; Curtis, Darren S.; Choudhury, Sutanay

    In this demonstration we show the usefulness of our unifying graph-based model for the representation of infrastructure, behavior, and missions of a cyber enterprise, both in a software simulation and on an Amazon Web Services (AWS) instance. We show the effectiveness of our recommendation algorithm for preserving various system health metrics in both cases.

  18. A comparative study of the Unified System for Orbit Computation and the Flight Design System. [computer programs for mission planning tasks associated with space shuttle

    NASA Technical Reports Server (NTRS)

    Maag, W.

    1977-01-01

    The Flight Design System (FDS) and the Unified System for Orbit Computation (USOC) are compared and described in relation to mission planning for the shuttle transportation system (STS). The FDS is designed to meet the requirements of a standardized production tool and the USOC is designed for rapid generation of particular application programs. The main emphasis in USOC is put on adaptability to new types of missions. It is concluded that a software system having a USOC-like structure, adapted to the specific needs of MPAD, would be appropriate to support planning tasks in the area unique to STS missions.

  19. Integrated System Health Management (ISHM) and Autonomy

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Walker, Mark G.

    2018-01-01

    Systems capabilities on ISHM (Integrated System Health Management) and autonomy have traditionally been addressed separately. This means that ISHM functions, such as anomaly detection, diagnostics, prognostics, and comprehensive system awareness have not been considered traditionally in the context of autonomy functions such as planning, scheduling, and mission execution. One key reason is that although they address systems capabilities, both ISHM and autonomy have traditionally individually been approached as independent strategies and models for analysis. Additionally, to some degree, a unified paradigm for ISHM and autonomy has been difficult to implement due to limitations of hardware and software. This paper explores a unified treatment of ISHM and autonomy in the context of distributed hierarchical autonomous operations.

  20. OVERGRID: A Unified Overset Grid Generation Graphical Interface

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Akien, Edwin W. (Technical Monitor)

    1999-01-01

    This paper presents a unified graphical interface and gridding strategy for performing overset grid generation. The interface called OVERGRID has been specifically designed to follow an efficient overset gridding strategy, and contains general grid manipulation capabilities as well as modules that are specifically suited for overset grids. General grid utilities include functions for grid redistribution, smoothing, concatenation, extraction, extrapolation, projection, and many others. Modules specially tailored for overset grids include a seam curve extractor, hyperbolic and algebraic surface grid generators, a hyperbolic volume grid generator, and a Cartesian box grid generator. Grid visualization is achieved using OpenGL while widgets are constructed with Tcl/Tk. The software is portable between various platforms from UNIX workstations to personal computers.
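
    One of the general grid utilities named in the record, grid redistribution, can be sketched as resampling a curve uniformly by arc length. The sketch below is a hedged illustration of that operation only; OVERGRID's own algorithms and stretching-function options are not reproduced.

        # Hedged sketch of grid-point redistribution along a 2-D curve by arc length.
        import numpy as np

        def redistribute(curve_xy, n_new):
            """Resample a 2-D polyline to n_new points equally spaced in arc length."""
            seg = np.diff(curve_xy, axis=0)
            s = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
            s_new = np.linspace(0.0, s[-1], n_new)
            x = np.interp(s_new, s, curve_xy[:, 0])
            y = np.interp(s_new, s, curve_xy[:, 1])
            return np.column_stack([x, y])

        # Respace an unevenly sampled quarter circle.
        theta = np.linspace(0, np.pi / 2, 20) ** 2 / (np.pi / 2)
        curve = np.column_stack([np.cos(theta), np.sin(theta)])
        print(redistribute(curve, 11))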

  1. Integrated Controlling System and Unified Database for High Throughput Protein Crystallography Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaponov, Yu.A.; Igarashi, N.; Hiraki, M.

    2004-05-12

    An integrated controlling system and a unified database for high throughput protein crystallography experiments have been developed. Main features of protein crystallography experiments (purification, crystallization, crystal harvesting, data collection, data processing) were integrated into the software under development. All information necessary to perform protein crystallography experiments is stored (except raw X-ray data that are stored in a central data server) in a MySQL relational database. The database contains four mutually linked hierarchical trees describing protein crystals, data collection of protein crystal and experimental data processing. A database editor was designed and developed. The editor supports basic database functions to view, create, modify and delete user records in the database. Two search engines were realized: direct search of necessary information in the database and object oriented search. The system is based on TCP/IP secure UNIX sockets with four predefined sending and receiving behaviors, which support communications between all connected servers and clients with remote control functions (creating and modifying data for experimental conditions, data acquisition, viewing experimental data, and performing data processing). Two secure login schemes were designed and developed: a direct method (using the developed Linux clients with secure connection) and an indirect method (using the secure SSL connection using secure X11 support from any operating system with X-terminal and SSH support). A part of the system has been implemented on a new MAD beam line, NW12, at the Photon Factory Advanced Ring for general user experiments.

  2. 24 CFR 578.11 - Unified Funding Agency.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 CFR 578.11, Housing and Urban Development, Continuum of Care. § 578.11 Unified Funding Agency. (a) Becoming a Unified Funding Agency. To become designated as the Unified Funding Agency (UFA) for a Continuum, a collaborative applicant must be selected by...

  3. The SCEC Unified Community Velocity Model (UCVM) Software Framework for Distributing and Querying Seismic Velocity Models

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Taborda, R.; Callaghan, S.; Shaw, J. H.; Plesch, A.; Olsen, K. B.; Jordan, T. H.; Goulet, C. A.

    2017-12-01

    Crustal seismic velocity models and datasets play a key role in regional three-dimensional numerical earthquake ground-motion simulation, full waveform tomography, modern physics-based probabilistic earthquake hazard analysis, as well as in other related fields including geophysics, seismology, and earthquake engineering. The standard material properties provided by a seismic velocity model are P- and S-wave velocities and density for any arbitrary point within the geographic volume for which the model is defined. Many seismic velocity models and datasets are constructed by synthesizing information from multiple sources and the resulting models are delivered to users in multiple file formats, such as text files, binary files, HDF-5 files, structured and unstructured grids, and through computer applications that allow for interactive querying of material properties. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) software framework to facilitate the registration and distribution of existing and future seismic velocity models to the SCEC community. The UCVM software framework is designed to provide a standard query interface to multiple, alternative velocity models, even if the underlying velocity models are defined in different formats or use different geographic projections. The UCVM framework provides a comprehensive set of open-source tools for querying seismic velocity model properties, combining regional 3D models and 1D background models, visualizing 3D models, and generating computational models in the form of regular grids or unstructured meshes that can be used as inputs for ground-motion simulations. The UCVM framework helps researchers compare seismic velocity models and build equivalent simulation meshes from alternative velocity models. These capabilities enable researchers to evaluate the impact of alternative velocity models in ground-motion simulations and seismic hazard analysis applications. In this poster, we summarize the key components of the UCVM framework and describe the impact it has had in various computational geoscientific applications.
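
    To make the idea of a standard point-query interface concrete, the sketch below shows a minimal, hypothetical velocity-model API in Python (it is not the actual UCVM C or command-line interface): any model, whatever its storage format, answers a query at a geographic point and depth with P-wave velocity, S-wave velocity, and density.

    ```python
    from dataclasses import dataclass

    @dataclass
    class MaterialProperties:
        vp: float        # P-wave velocity (m/s)
        vs: float        # S-wave velocity (m/s)
        density: float   # density (kg/m^3)

    class LayerCake1D:
        """Hypothetical 1D background model: properties depend only on depth."""
        def __init__(self, layers):
            # layers: list of (top_depth_m, MaterialProperties), shallow to deep
            self.layers = sorted(layers, key=lambda layer: layer[0])

        def query(self, lon, lat, depth_m):
            props = self.layers[0][1]
            for top_depth, layer_props in self.layers:
                if depth_m >= top_depth:
                    props = layer_props
            return props

    background = LayerCake1D([
        (0.0,    MaterialProperties(vp=1700.0, vs=500.0,  density=2000.0)),
        (1000.0, MaterialProperties(vp=4000.0, vs=2300.0, density=2500.0)),
        (5000.0, MaterialProperties(vp=6000.0, vs=3500.0, density=2700.0)),
    ])
    # A regional 3D model exposing the same query() could be swapped in transparently.
    print(background.query(lon=-118.2, lat=34.0, depth_m=2500.0))
    ```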

  4. The community-driven BiG CZ software system for integration and analysis of bio- and geoscience data in the critical zone

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Mayorga, E.; Horsburgh, J. S.; Lehnert, K. A.; Zaslavsky, I.; Valentine, D. W., Jr.; Richard, S. M.; Cheetham, R.; Meyer, F.; Henry, C.; Berg-Cross, G.; Packman, A. I.; Aronson, E. L.

    2014-12-01

    Here we present the prototypes of a new scientific software system designed around the new Observations Data Model version 2.0 (ODM2, https://github.com/UCHIC/ODM2) to substantially enhance integration of biological and Geological (BiG) data for Critical Zone (CZ) science. The CZ science community takes as its charge the effort to integrate theory, models and data from the multitude of disciplines collectively studying processes on the Earth's surface. The central scientific challenge of the CZ science community is to develop a "grand unifying theory" of the critical zone through a theory-model-data fusion approach, for which the key missing need is a cyberinfrastructure for seamless 4D visual exploration of the integrated knowledge (data, model outputs and interpolations) from all the bio and geoscience disciplines relevant to critical zone structure and function, similar to today's ability to easily explore historical satellite imagery and photographs of the earth's surface using Google Earth. This project takes the first "BiG" steps toward answering that need. The overall goal of this project is to co-develop with the CZ science and broader community, including natural resource managers and stakeholders, a web-based integration and visualization environment for joint analysis of cross-scale bio and geoscience processes in the critical zone (BiG CZ), spanning experimental and observational designs. We will: (1) Engage the CZ and broader community to co-develop and deploy the BiG CZ software stack; (2) Develop the BiG CZ Portal web application for intuitive, high-performance map-based discovery, visualization, access and publication of data by scientists, resource managers, educators and the general public; (3) Develop the BiG CZ Toolbox to enable cyber-savvy CZ scientists to access BiG CZ Application Programming Interfaces (APIs); and (4) Develop the BiG CZ Central software stack to bridge data systems developed for multiple critical zone domains into a single metadata catalog. The entire BiG CZ Software system is being developed on public repositories as a modular suite of open source software projects. It will be built around a new Observations Data Model Version 2.0 (ODM2) that has been developed by members of the BiG CZ project team, with community input, under separate funding.

  5. LFRic: Building a new Unified Model

    NASA Astrophysics Data System (ADS)

    Melvin, Thomas; Mullerworth, Steve; Ford, Rupert; Maynard, Chris; Hobson, Mike

    2017-04-01

    The LFRic project, named for Lewis Fry Richardson, aims to develop a replacement for the Met Office Unified Model in order to meet the challenges which will be presented by the next generation of exascale supercomputers. This project, a collaboration between the Met Office, STFC Daresbury and the University of Manchester, builds on the earlier GungHo project to redesign the dynamical core, in partnership with NERC. The new atmospheric model aims to retain the performance of the current ENDGame dynamical core and associated subgrid physics, while also enabling a far greater scalability and flexibility to accommodate future supercomputer architectures. Design of the model revolves around a principle of a 'separation of concerns', whereby the natural science aspects of the code can be developed without worrying about the underlying architecture, while machine dependent optimisations can be carried out at a high level. These principles are put into practice through the development of an autogenerated Parallel Systems software layer (known as the PSy layer) using a domain-specific compiler called PSyclone. The prototype model includes a re-write of the dynamical core using a mixed finite element method, in which different function spaces are used to represent the various fields. It is able to run in parallel with MPI and OpenMP and has been tested on over 200,000 cores. In this talk an overview of both the natural science and computational science implementations of the model will be presented.

  6. MOOSE IPL Extensions (Control Logic)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Permann, Cody

    In FY-2015, the development of MOOSE was driven by the needs of the NEAMS MOOSE-based applications, BISON, MARMOT, and RELAP-7. An emphasis was placed on the continued upkeep and improvement of MOOSE in support of the product line integration goals. New unified documentation tools have been developed, several improvements to regression testing have been enforced, and overall better software quality practices have been implemented. In addition, the Multiapps and Transfers systems have seen significant refactoring and robustness improvements, as has the “Restart and Recover” system in support of Multiapp simulations. Finally, a completely new “Control Logic” system has been engineered to replace the prototype system currently in use in the RELAP-7 code. The development of this system continues and is expected to handle existing needs as well as support future enhancements.

  7. 24 CFR 578.41 - Unified Funding Agency costs.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 CFR 578.41, Housing and Urban Development, Regulations Relating to Housing and Urban Development... § 578.41 Unified Funding Agency costs. (a) In general. UFAs may use up to 3 percent of their FPRN, or a...

  8. VIDANA: Data Management System for Nano Satellites

    NASA Astrophysics Data System (ADS)

    Montenegro, Sergio; Walter, Thomas; Dilger, Erik

    2013-08-01

    The VIDANA data management system is a network of software and hardware components. This implies a software network, a hardware network, and a smooth connection between the two. Our strategy is based on our innovative middleware: a reliable interconnection network (SW & HW) which can interconnect many unreliable redundant components such as sensors, actuators, communication devices, computers, storage elements, ... and software components. Component failures are detected, the affected device is disabled, and its function is taken over by a redundant component. Our middleware connects not only software, but also devices and software together. Software and hardware communicate with each other without having to distinguish which functions are in software and which are implemented in hardware. Components may be turned on and off at any time, and the whole system will autonomously adapt to its new configuration in order to continue fulfilling its task. In VIDANA we aim at dynamic adaptability (run time), static adaptability (tailoring), and unified HW/SW communication protocols. For many of these aspects we "learn from nature", where we can find astonishing reference implementations.

  9. Permutation-based inference for the AUC: A unified approach for continuous and discontinuous data.

    PubMed

    Pauly, Markus; Asendorf, Thomas; Konietschke, Frank

    2016-11-01

    We investigate rank-based studentized permutation methods for the nonparametric Behrens-Fisher problem, that is, inference methods for the area under the ROC curve. We prove that the studentized permutation distribution of the Brunner-Munzel rank statistic is asymptotically standard normal, even under the alternative, thus incidentally providing the hitherto missing theoretical foundation for the Neubert and Brunner studentized permutation test. In particular, we not only show its consistency, but also show that confidence intervals for the underlying treatment effects can be computed by inverting this permutation test. In addition, we derive permutation-based range-preserving confidence intervals. Extensive simulation studies show that the permutation-based confidence intervals appear to maintain the preassigned coverage probability quite accurately (even for rather small sample sizes). For a convenient application of the proposed methods, a freely available software package for the statistical software R has been developed. A real data example illustrates the application. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
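
    The sketch below illustrates the studentized permutation idea described above: permute the pooled observations, recompute the Brunner-Munzel statistic each time, and compare the observed value with the permutation distribution. It is a minimal Python illustration using scipy's brunnermunzel implementation, not the authors' R package, and the example data are invented.

    ```python
    import numpy as np
    from scipy.stats import brunnermunzel

    def bm_permutation_pvalue(x, y, n_perm=2000, seed=0):
        """Two-sided studentized permutation p-value for the Brunner-Munzel statistic."""
        rng = np.random.default_rng(seed)
        pooled = np.concatenate([x, y])
        n1 = len(x)
        t_obs = brunnermunzel(x, y).statistic
        exceed = 0
        for _ in range(n_perm):
            perm = rng.permutation(pooled)
            t_perm = brunnermunzel(perm[:n1], perm[n1:]).statistic
            # Non-finite statistics can arise for completely separated permutations;
            # treat them as maximally extreme.
            if not np.isfinite(t_perm) or abs(t_perm) >= abs(t_obs):
                exceed += 1
        return (exceed + 1) / (n_perm + 1)

    x = np.array([1.8, 2.1, 2.4, 2.9, 3.3, 3.7])      # invented example data
    y = np.array([2.8, 3.1, 3.6, 4.0, 4.4, 4.9, 5.2])
    print(bm_permutation_pvalue(x, y))
    ```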

  10. Towards a flexible array control and operation framework for CTA

    NASA Astrophysics Data System (ADS)

    Birsin, E.; Colomé, J.; Hoffmann, D.; Koeppel, H.; Lamanna, G.; Le Flour, T.; Lopatin, A.; Lyard, E.; Melkumyan, D.; Oya, I.; Panazol, J.-L.; Schlenstedt, S.; Schmidt, T.; Schwanke, U.; Stegmann, C.; Walter, R.; Wegner, P.; CTA Consortium

    2012-12-01

    The Cherenkov Telescope Array (CTA) [1] will be the successor to current Imaging Atmospheric Cherenkov Telescopes (IACT) like H.E.S.S., MAGIC and VERITAS. CTA will improve in sensitivity by about an order of magnitude compared to the current generation of IACTs. The energy range will extend from well below 100 GeV to above 100 TeV. To accomplish these goals, CTA will consist of two arrays, one in each hemisphere, consisting of 50-80 telescopes and composed of three different telescope types with different mirror sizes. It will be the first open observatory for very high energy γ-ray astronomy. The Array Control working group of CTA is currently evaluating existing technologies which are best suited for a project like CTA. The considered solutions comprise the ALMA Common Software (ACS), the OPC Unified Architecture (OPC UA) and the Data Distribution Service (DDS) for bulk data transfer. The first applications, like an automatic observation scheduler and the control software for some prototype instrumentation have been developed.

  11. Data Analysis with Graphical Models: Software Tools

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.

    1994-01-01

    Probabilistic graphical models (directed and undirected Markov fields, and combined in chain graphs) are used widely in expert systems, image processing and other areas as a framework for representing and reasoning with probabilities. They come with corresponding algorithms for performing probabilistic inference. This paper discusses plates, an extension to these models by Spiegelhalter and Gilks used to graphically model the notion of a sample. This offers a graphical specification language for representing data analysis problems. When combined with general methods for statistical inference, this also offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper outlines the framework and then presents some basic tools for the task: a graphical version of the Pitman-Koopman Theorem for the exponential family, problem decomposition, and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.

  12. OASIS: a data and software distribution service for Open Science Grid

    NASA Astrophysics Data System (ADS)

    Bockelman, B.; Caballero Bejar, J.; De Stefano, J.; Hover, J.; Quick, R.; Teige, S.

    2014-06-01

    The Open Science Grid encourages the concept of software portability: a user's scientific application should be able to run at as many sites as possible. It is necessary to provide a mechanism for OSG Virtual Organizations to install software at sites. Since its initial release, the OSG Compute Element has provided an application software installation directory to Virtual Organizations, where they can create their own sub-directory, install software into that sub-directory, and have the directory shared on the worker nodes at that site. The current model has shortcomings with regard to permissions, policies, versioning, and the lack of a unified, collective procedure or toolset for deploying software across all sites. Therefore, a new mechanism for distributing data and software is desirable. The architecture for the OSG Application Software Installation Service (OASIS) is a server-client model: the software and data are installed only once in a single place, and are automatically distributed to all client sites simultaneously. Central file distribution offers other advantages, including server-side authentication and authorization, activity records, quota management, data validation and inspection, and well-defined versioning and deletion policies. The architecture, as well as a complete analysis of the current implementation, will be described in this paper.

  13. Research on the application of wisdom technology in smart city

    NASA Astrophysics Data System (ADS)

    Li, Juntao; Ma, Shuai; Gu, Weihua; Chen, Weiyi

    2015-12-01

    This paper first analyzes the concept of smart technology and the relationship between wisdom technology and the smart city, and discusses the practical application of the IoT (Internet of Things) in the smart city in order to explore a better way to realize it. It then introduces the basic concepts of cloud computing and the smart city and explains the relationship between the two. Five advantages of cloud computing applied to smart city construction are discussed: unified and highly efficient large-scale management of infrastructure software and hardware, service scheduling and resource management, security control and management, energy conservation and management at the platform layer, and the promotion of modern services that accelerate regional social and economic development. Finally, a brief description of wisdom technology and smart city management is presented.

  14. Educational-research laboratory "electric circuits" on the base of digital technologies

    NASA Astrophysics Data System (ADS)

    Koroteyev, V. I.; Florentsev, V. V.; Florentseva, N. I.

    2017-01-01

    The problem of activating trainees' research activity in the educational-research laboratory "Electric Circuits" by means of innovative methodological solutions and digital technologies is considered. The main task is the creation of a unified experimental-research information-educational environment, "Electrical Engineering". The problems arising during the development and application of modern software and hardware, experimental and research stands, and digital control and measuring systems are presented. This paper presents the main stages of development and creation of the educational-research laboratory "Electric Circuits" at the Department of Electrical Engineering of NRNU MEPhI. The authors also consider the analogues of the described research complex offered by various educational institutions and companies. The analysis of their strengths and weaknesses, on which the advantages of the proposed solution are based, is presented.

  15. Global review of open access risk assessment software packages valid for global or continental scale analysis

    NASA Astrophysics Data System (ADS)

    Daniell, James; Simpson, Alanna; Gunasekara, Rashmin; Baca, Abigail; Schaefer, Andreas; Ishizawa, Oscar; Murnane, Rick; Tijssen, Annegien; Deparday, Vivien; Forni, Marc; Himmelfarb, Anne; Leder, Jan

    2015-04-01

    Over the past few decades, a plethora of open access software packages for the calculation of earthquake, volcanic, tsunami, storm surge, wind and flood risk have been produced globally. As part of the World Bank GFDRR Review released at the Understanding Risk 2014 Conference, over 80 such open access risk assessment software packages were examined. Commercial software was not considered in the evaluation. A preliminary analysis was used to determine whether the 80 models were currently supported and if they were open access. This process was used to select a subset of 31 models that include 8 earthquake models, 4 cyclone models, 11 flood models, and 8 storm surge/tsunami models for more detailed analysis. By using multi-criteria decision analysis (MCDA) and simple descriptions of the software uses, the review allows users to select a few relevant software packages for their own testing and development. The detailed analysis evaluated the models on the basis of over 100 criteria and provides a synopsis of available open access natural hazard risk modelling tools. In addition, volcano software packages have since been added, making the compendium of risk software tools in excess of 100. There has been a huge increase in the quality and availability of open access/source software over the past few years. For example, private entities such as Deltares now have an open source policy regarding some flood models (NGHS). In addition, leaders in developing risk models in the public sector, such as Geoscience Australia (EQRM, TCRM, TsuDAT, AnuGA) or CAPRA (ERN-Flood, Hurricane, CRISIS2007 etc.), are launching and/or helping many other initiatives. As we achieve greater interoperability between modelling tools, we will also achieve a future wherein different open source and open access modelling tools will be increasingly connected and adapted towards unified multi-risk model platforms and highly customised solutions. It was seen that many software tools could be improved by enabling user-defined exposure and vulnerability. Without this function, many tools can only be used regionally and not at global or continental scale. It is becoming increasingly easy to use multiple packages for a single region and/or hazard to characterize the uncertainty in the risk, or use as checks for the sensitivities in the analysis. There is a potential for valuable synergy between existing software. A number of open source software packages could be combined to generate a multi-risk model with multiple views of a hazard. This extensive review has simply attempted to provide a platform for dialogue between all open source and open access software packages and to hopefully inspire collaboration between developers, given the great work done by all open access and open source developers.
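
    A toy example of the multi-criteria scoring idea is sketched below; the packages, criteria, scores, and weights are invented for illustration and do not reproduce the review's actual 100+ criteria.

    ```python
    import numpy as np

    # Hypothetical criteria scores (0-5) for three risk-software packages.
    criteria = ["open source", "documentation", "global applicability",
                "user-defined exposure", "active support"]
    weights = np.array([0.25, 0.15, 0.25, 0.20, 0.15])   # assumed weights, sum to 1
    scores = {
        "package_A": np.array([5, 3, 2, 1, 4]),
        "package_B": np.array([4, 4, 4, 3, 3]),
        "package_C": np.array([2, 5, 5, 5, 2]),
    }

    # Simple weighted-sum multi-criteria ranking.
    ranking = sorted(((name, float(weights @ s)) for name, s in scores.items()),
                     key=lambda item: item[1], reverse=True)
    for name, total in ranking:
        print(f"{name}: {total:.2f}")
    ```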

  16. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  17. A user-centered, object-oriented methodology for developing Health Information Systems: a Clinical Information System (CIS) example.

    PubMed

    Konstantinidis, Georgios; Anastassopoulos, George C; Karakos, Alexandros S; Anagnostou, Emmanouil; Danielides, Vasileios

    2012-04-01

    The aim of this study is to present our perspectives on healthcare analysis and design and the lessons learned from our experience with the development of a distributed, object-oriented Clinical Information System (CIS). In order to overcome known issues regarding development, implementation and finally acceptance of a CIS by the physicians we decided to develop a novel object-oriented methodology by integrating usability principles and techniques in a simplified version of a well established software engineering process (SEP), the Unified Process (UP). A multilayer architecture has been defined and implemented with the use of a vendor application framework. Our first experiences from a pilot implementation of our CIS are positive. This approach allowed us to gain a socio-technical understanding of the domain and enabled us to identify all the important factors that define both the structure and the behavior of a Health Information System.

  18. Dataflow models for fault-tolerant control systems

    NASA Technical Reports Server (NTRS)

    Papadopoulos, G. M.

    1984-01-01

    Dataflow concepts are used to generate a unified hardware/software model of redundant physical systems which are prone to faults. Basic results in input congruence and synchronization are shown to reduce to a simple model of data exchanges between processing sites. Procedures are given for the construction of congruence schemata, the distinguishing features of any correctly designed redundant system.

  19. PyMidas: Interface from Python to Midas

    NASA Astrophysics Data System (ADS)

    Maisala, Sami; Oittinen, Tero

    2014-01-01

    PyMidas is an interface between Python and MIDAS, the major ESO legacy general purpose data processing system. PyMidas allows a user to exploit both the rich legacy of MIDAS software and the power of Python scripting in a unified interactive environment. PyMidas also allows the usage of other Python-based astronomical analysis systems such as PyRAF.

  20. Inexpensive Audio Activities: Earbud-based Sound Experiments

    NASA Astrophysics Data System (ADS)

    Allen, Joshua; Boucher, Alex; Meggison, Dean; Hruby, Kate; Vesenka, James

    2016-11-01

    Inexpensive alternatives to a number of classic introductory physics sound laboratories are presented, including interference phenomena, resonance conditions, and frequency shifts. These can be created using earbuds, economical supplies such as Giant Pixie Stix® wrappers, and free software available for PCs and mobile devices. We describe two interference laboratories (beat frequency and two-speaker interference) and two resonance laboratories (quarter- and half-wavelength). Lastly, a Doppler laboratory using rotating earbuds is explained. The audio signal captured by all experiments is analyzed on free spectral analysis software and many of the experiments incorporate the unifying theme of measuring the speed of sound in air.
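
    The underlying relations used in these laboratories are compact enough to show directly. The short script below evaluates the beat frequency, the quarter-wavelength resonance condition, and the Doppler shift for a swung earbud; all numerical values are chosen as illustrative assumptions, not measurements from the article.

    ```python
    import numpy as np

    V_SOUND = 343.0          # nominal speed of sound in air at ~20 C, m/s

    # Beat frequency between two nearby tones (beat / two-speaker experiments).
    f1, f2 = 440.0, 444.0    # example tone frequencies, Hz
    print("beat frequency:", abs(f1 - f2), "Hz")                   # 4 Hz

    # Quarter-wavelength resonance: fundamental of a tube closed at one end.
    L = 0.20                 # example tube length, m
    print("quarter-wave fundamental:", V_SOUND / (4 * L), "Hz")    # ~429 Hz

    # Conversely, the speed of sound can be estimated from a measured resonance.
    f_measured = 430.0       # Hz, hypothetical measured fundamental
    print("estimated speed of sound:", 4 * L * f_measured, "m/s")

    # Doppler shift for an earbud swung in a circle (moving source).
    f_src, radius, period = 1000.0, 0.5, 0.4          # Hz, m, s (assumed values)
    v_src = 2 * np.pi * radius / period               # tangential speed, m/s
    f_max = f_src * V_SOUND / (V_SOUND - v_src)       # source approaching
    f_min = f_src * V_SOUND / (V_SOUND + v_src)       # source receding
    print(f"Doppler range: {f_min:.1f} Hz to {f_max:.1f} Hz")
    ```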

  1. Quantum Computing Architectural Design

    NASA Astrophysics Data System (ADS)

    West, Jacob; Simms, Geoffrey; Gyure, Mark

    2006-03-01

    Large scale quantum computers will invariably require scalable architectures in addition to high fidelity gate operations. Quantum computing architectural design (QCAD) addresses the problems of actually implementing fault-tolerant algorithms given physical and architectural constraints beyond those of basic gate-level fidelity. Here we introduce a unified framework for QCAD that enables the scientist to study the impact of varying error correction schemes, architectural parameters including layout and scheduling, and physical operations native to a given architecture. Our software package, aptly named QCAD, provides compilation, manipulation/transformation, multi-paradigm simulation, and visualization tools. We demonstrate various features of the QCAD software package through several examples.

  2. Exploring the complementarity of THz pulse imaging and DCE-MRIs: Toward a unified multi-channel classification and a deep learning framework.

    PubMed

    Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S

    2016-12-01

    We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlying commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly taking into consideration advances in multi-resolution analysis and model based fractional order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided and the importance of preserving textural information is highlighted. Feature extraction and classification methods taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions are presented. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the direction of developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
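
    A minimal sketch of the kind of multi-channel fusion pipeline discussed above is given below, using synthetic stand-in features and a PCA-plus-SVM classifier from scikit-learn; it is illustrative only and is not the authors' processing chain.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for per-sample feature vectors from two modalities
    # (e.g. THz spectral features and DCE-MRI kinetic features); real data not shown.
    n_samples = 200
    thz_features = rng.normal(size=(n_samples, 30))
    mri_features = rng.normal(size=(n_samples, 12))
    labels = rng.integers(0, 2, size=n_samples)        # binary class stand-in

    # Early fusion: concatenate channels, reduce dimension, then classify.
    fused = np.hstack([thz_features, mri_features])
    clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
    print("CV accuracy:", cross_val_score(clf, fused, labels, cv=5).mean())
    ```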

  3. SeaDataNet Pan-European infrastructure for Ocean & Marine Data Management

    NASA Astrophysics Data System (ADS)

    Manzella, G. M.; Maillard, C.; Maudire, G.; Schaap, D.; Rickards, L.; Nast, F.; Balopoulos, E.; Mikhailov, N.; Vladymyrov, V.; Pissierssens, P.; Schlitzer, R.; Beckers, J. M.; Barale, V.

    2007-12-01

    SEADATANET is developing a Pan-European data management infrastructure to insure access to a large number of marine environmental data (i.e. temperature, salinity current, sea level, chemical, physical and biological properties), safeguard and long term archiving. Data are derived from many different sensors installed on board of research vessels, satellite and the various platforms of the marine observing system. SeaDataNet allows to have information on real time and archived marine environmental data collected at a pan-european level, through directories on marine environmental data and projects. SeaDataNet allows the access to the most comprehensive multidisciplinary sets of marine in-situ and remote sensing data, from about 40 laboratories, through user friendly tools. The data selection and access is operated through the Common Data Index (CDI), XML files compliant with ISO standards and unified dictionaries. Technical Developments carried out by SeaDataNet includes: A library of Standards - Meta-data standards, compliant with ISO 19115, for communication and interoperability between the data platforms. Software of interoperable on line system - Interconnection of distributed data centres by interfacing adapted communication technology tools. Off-Line Data Management software - software representing the minimum equipment of all the data centres is developed by AWI "Ocean Data View (ODV)". Training, Education and Capacity Building - Training 'on the job' is carried out by IOC-Unesco in Ostende. SeaDataNet Virtual Educational Centre internet portal provides basic tools for informal education

  4. Near-Earth Object Survey Simulation Software

    NASA Astrophysics Data System (ADS)

    Naidu, Shantanu P.; Chesley, Steven R.; Farnocchia, Davide

    2017-10-01

    There is a significant interest in Near-Earth objects (NEOs) because they pose an impact threat to Earth, offer valuable scientific information, and are potential targets for robotic and human exploration. The number of NEO discoveries has been rising rapidly over the last two decades with over 1800 being discovered last year, making the total number of known NEOs >16000. Pan-STARRS and the Catalina Sky Survey are currently the most prolific NEO surveys, having discovered >1600 NEOs between them in 2016. As next generation surveys such as Large Synoptic Survey Telescope (LSST) and the proposed Near-Earth Object Camera (NEOCam) become operational in the next decade, the discovery rate is expected to increase tremendously. Coordination between various survey telescopes will be necessary in order to optimize NEO discoveries and create a unified global NEO discovery network. We are collaborating on a community-based, open-source software project to simulate asteroid surveys to facilitate such coordination and develop strategies for improving discovery efficiency. Our effort so far has focused on development of a fast and efficient tool capable of accepting user-defined asteroid population models and telescope parameters such as a list of pointing angles and camera field-of-view, and generating an output list of detectable asteroids. The software takes advantage of the widely used and tested SPICE library and architecture developed by NASA’s Navigation and Ancillary Information Facility (Acton, 1996) for saving and retrieving asteroid trajectories and camera pointing. Orbit propagation is done using OpenOrb (Granvik et al. 2009) but future versions will allow the user to plug in a propagator of their choice. The software allows the simulation of both ground-based and space-based surveys. Performance is being tested using the Grav et al. (2011) asteroid population model and the LSST simulated survey “enigma_1189”.
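
    The core detectability test in such a simulator can be reduced to a simple geometric check. The sketch below (pure numpy, not the project's actual code) flags targets whose direction lies within half the camera field of view of the boresight; the vectors and field of view are example values.

    ```python
    import numpy as np

    def in_field_of_view(boresight_unit_vec, target_vecs, fov_deg):
        """Return a boolean mask of targets inside a circular field of view.

        A simplified check: a target counts as observable if the angle between
        the camera boresight and the camera-to-target direction is less than
        half the field of view. Brightness and trailing losses are ignored.
        """
        targets = target_vecs / np.linalg.norm(target_vecs, axis=1, keepdims=True)
        cos_angle = targets @ boresight_unit_vec
        return cos_angle > np.cos(np.radians(fov_deg / 2.0))

    # Example: boresight along +x, 3-degree field of view, three targets.
    boresight = np.array([1.0, 0.0, 0.0])
    targets = np.array([[1.0, 0.01, 0.0],     # ~0.6 deg off axis -> inside
                        [1.0, 0.05, 0.0],     # ~2.9 deg off axis -> outside
                        [0.0, 1.0, 0.0]])     # 90 deg off axis   -> outside
    print(in_field_of_view(boresight, targets, fov_deg=3.0))
    ```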

  5. ObsPy: A Python toolbox for seismology - Current state, applications, and ecosystem around it

    NASA Astrophysics Data System (ADS)

    Lecocq, Thomas; Megies, Tobias; Krischer, Lion; Sales de Andrade, Elliott; Barsch, Robert; Beyreuther, Moritz

    2016-04-01

    ObsPy (http://www.obspy.org) is a community-driven, open-source project offering a bridge for seismology into the scientific Python ecosystem. It provides * read and write support for essentially all commonly used waveform, station, and event metadata formats with a unified interface, * a comprehensive signal processing toolbox tuned to the needs of seismologists, * integrated access to all large data centers, web services and databases, and * convenient wrappers to third party codes like libmseed and evalresp. Python, in contrast to many other languages and tools, is simple enough to enable an exploratory and interactive coding style desired by many scientists. At the same time it is a full-fledged programming language usable by software engineers to build complex and large programs. This combination makes it very suitable for use in seismology where research code often has to be translated to stable and production ready environments. It furthermore offers many freely available high quality scientific modules covering most needs in developing scientific software. ObsPy has been in constant development for more than 5 years and nowadays enjoys a large rate of adoption in the community with thousands of users. Successful applications include time-dependent and rotational seismology, big data processing, event relocations, and synthetic studies about attenuation kernels and full-waveform inversions to name a few examples. Additionally it sparked the development of several more specialized packages slowly building a modern seismological ecosystem around it. This contribution will give a short introduction and overview of ObsPy and highlight a number of use cases and software built around it. We will furthermore discuss the issue of sustainability of scientific software.
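
    A short usage example of the unified interface is given below: it fetches an hour of broadband data from the IRIS FDSN web service and applies standard processing. The station, channel, and time window are arbitrary choices, and the call requires network access to the data center.

    ```python
    from obspy import UTCDateTime
    from obspy.clients.fdsn import Client

    client = Client("IRIS")                       # FDSN web service client
    t0 = UTCDateTime("2011-03-11T05:46:23")       # example start time
    st = client.get_waveforms(network="IU", station="ANMO", location="00",
                              channel="BHZ", starttime=t0, endtime=t0 + 3600)

    st.detrend("demean")                              # remove the mean
    st.filter("bandpass", freqmin=0.01, freqmax=0.1)  # keep the long-period band
    print(st)                                         # trace summary
    st.plot()                                         # quick-look waveform plot
    ```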

  6. ObsPy: A Python toolbox for seismology - Current state, applications, and ecosystem around it

    NASA Astrophysics Data System (ADS)

    Krischer, L.; Megies, T.; Sales de Andrade, E.; Barsch, R.; Beyreuther, M.

    2015-12-01

    ObsPy (http://www.obspy.org) is a community-driven, open-source project offering a bridge for seismology into the scientific Python ecosystem. It provides read and write support for essentially all commonly used waveform, station, and event metadata formats with a unified interface, a comprehensive signal processing toolbox tuned to the needs of seismologists, integrated access to all large data centers, web services and databases, and convenient wrappers to third party codes like libmseed and evalresp. Python, in contrast to many other languages and tools, is simple enough to enable an exploratory and interactive coding style desired by many scientists. At the same time it is a full-fledged programming language usable by software engineers to build complex and large programs. This combination makes it very suitable for use in seismology where research code often has to be translated to stable and production ready environments. It furthermore offers many freely available high quality scientific modules covering most needs in developing scientific software. ObsPy has been in constant development for more than 5 years and nowadays enjoys a large rate of adoption in the community with thousands of users. Successful applications include time-dependent and rotational seismology, big data processing, event relocations, and synthetic studies about attenuation kernels and full-waveform inversions to name a few examples. Additionally it sparked the development of several more specialized packages slowly building a modern seismological ecosystem around it. This contribution will give a short introduction and overview of ObsPy and highlight a number of use cases and software built around it. We will furthermore discuss the issue of sustainability of scientific software.

  7. Development of a unified guidance system for geocentric transfer. [for solar electric propulsion spacecraft

    NASA Technical Reports Server (NTRS)

    Cake, J. E.; Regetz, J. D., Jr.

    1975-01-01

    A method is presented for open loop guidance of a solar electric propulsion spacecraft to geosynchronous orbit. The method consists of determining the thrust vector profiles on the ground with an optimization computer program, and performing updates based on the difference between the actual trajectory and that predicted with a precision simulation computer program. The motivation for performing the guidance analysis during the mission planning phase is discussed, and a spacecraft design option that employs attitude orientation constraints is presented. The improvements required in both the optimization program and simulation program are set forth, together with the efforts to integrate the programs into the ground support software for the guidance system.

  8. Development of a unified guidance system for geocentric transfer. [solar electric propulsion spacecraft

    NASA Technical Reports Server (NTRS)

    Cake, J. E.; Regetz, J. D., Jr.

    1975-01-01

    A method is presented for open loop guidance of a solar electric propulsion spacecraft to geosynchronous orbit. The method consists of determining the thrust vector profiles on the ground with an optimization computer program, and performing updates based on the difference between the actual trajectory and that predicted with a precision simulation computer program. The motivation for performing the guidance analysis during the mission planning phase is discussed, and a spacecraft design option that employs attitude orientation constraints is presented. The improvements required in both the optimization program and simulation program are set forth, together with the efforts to integrate the programs into the ground support software for the guidance system.

  9. Using CLIPS in a distributed system: The Network Control Center (NCC) expert system

    NASA Technical Reports Server (NTRS)

    Wannemacher, Tom

    1990-01-01

    This paper describes an intelligent troubleshooting system for the Help Desk domain. It was developed on an IBM-compatible 80286 PC using Microsoft C and CLIPS and an AT&T 3B2 minicomputer using the UNIFY database and a combination of shell scripts, C programs and SQL queries. The two computers are linked by a LAN. The functions of this system are to help non-technical NCC personnel handle trouble calls, to keep a log of problem calls with complete, concise information, and to keep a historical database of problems. The database helps identify hardware and software problem areas and provides a source of new rules for the troubleshooting knowledge base.

  10. Consistent multiphysics simulation of a central tower CSP plant as applied to ISTORE

    NASA Astrophysics Data System (ADS)

    Votyakov, Evgeny V.; Papanicolas, Costas N.

    2017-06-01

    We present a unified consistent multiphysics approach to model a central tower CSP plant. The framework for the model includes Monte Carlo ray tracing (RT) and computational fluid dynamics (CFD) components utilizing the OpenFOAM C++ software library. The RT part works effectively with complex surfaces of engineering design given in CAD formats. The CFD simulation, which is based on 3D Navier-Stokes equations, takes into account all possible heat transfer mechanisms: radiation, conduction, and convection. Utilizing this package, the solar field of the experimental Platform for Research, Observation, and TEchnological Applications in Solar Energy (PROTEAS) and the Integrated STOrage and Receiver (ISTORE), developed at the Cyprus Institute, are being examined.

  11. A Generic Communication Protocol for Remote Laboratories: an Implementation on e-lab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henriques, Rafael B.; Fernandes, H.; Duarte, Andre S.

    2015-07-01

    The remote laboratories at IST (Instituto Superior Tecnico), e-lab, serve as a valuable tool for education and training based on remote control technologies. Due to the high and increasing number of remotely operated experiments, a generic protocol was developed to perform the communication between the software driver and the respective experimental setup in an easier and more unified way. The training of students and personnel in these fields can take advantage of such an infrastructure for deploying new experiments faster. More than 10 experiments using the generic protocol are available on-line on a 24 x 7 basis. (authors)
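
    The flavor of such a generic driver-to-experiment link can be sketched in a few lines of Python; the command set, port, and reply format below are invented for illustration and are not the actual e-lab protocol specification.

    ```python
    import socket
    import threading
    import time

    # A minimal, hypothetical text-based command protocol ("CMD arg1 arg2\n") over
    # TCP, illustrating how one generic driver/hardware link can serve many setups.
    def experiment_server(host="127.0.0.1", port=5050):
        handlers = {
            "IDN":   lambda args: "pendulum-rig v1",
            "SET":   lambda args: f"OK {args[0]}={args[1]}",
            "START": lambda args: "RUNNING",
        }
        with socket.create_server((host, port)) as srv:
            conn, _ = srv.accept()
            with conn, conn.makefile("rw") as stream:
                for line in stream:
                    cmd, *args = line.split()
                    reply = handlers.get(cmd, lambda a: "ERR unknown command")(args)
                    stream.write(reply + "\n")
                    stream.flush()

    threading.Thread(target=experiment_server, daemon=True).start()
    time.sleep(0.2)  # give the toy server time to start listening

    # The software driver side issues the same generic commands to any experiment.
    with socket.create_connection(("127.0.0.1", 5050)) as sock, sock.makefile("rw") as stream:
        for msg in ("IDN", "SET angle 15", "START"):
            stream.write(msg + "\n")
            stream.flush()
            print(stream.readline().strip())
    ```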

  12. Mission Simulation Toolkit

    NASA Technical Reports Server (NTRS)

    Pisaich, Gregory; Flueckiger, Lorenzo; Neukom, Christian; Wagner, Mike; Buchanan, Eric; Plice, Laura

    2007-01-01

    The Mission Simulation Toolkit (MST) is a flexible software system for autonomy research. It was developed as part of the Mission Simulation Facility (MSF) project that was started in 2001 to facilitate the development of autonomous planetary robotic missions. Autonomy is a key enabling factor for robotic exploration. There has been a large gap between autonomy software (at the research level), and software that is ready for insertion into near-term space missions. The MST bridges this gap by providing a simulation framework and a suite of tools for supporting research and maturation of autonomy. MST uses a distributed framework based on the High Level Architecture (HLA) standard. A key feature of the MST framework is the ability to plug in new models to replace existing ones with the same services. This enables significant simulation flexibility, particularly the mixing and control of fidelity level. In addition, the MST provides automatic code generation from robot interfaces defined with the Unified Modeling Language (UML), methods for maintaining synchronization across distributed simulation systems, XML-based robot description, and an environment server. Finally, the MSF supports a number of third-party products including dynamic models and terrain databases. Although the communication objects and some of the simulation components that are provided with this toolkit are specifically designed for terrestrial surface rovers, the MST can be applied to any other domain, such as aerial, aquatic, or space.

  13. A UML Profile for State Analysis

    NASA Technical Reports Server (NTRS)

    Murray, Alex; Rasmussen, Robert

    2010-01-01

    State Analysis is a systems engineering methodology for the specification and design of control systems, developed at the Jet Propulsion Laboratory. The methodology emphasizes an analysis of the system under control in terms of States and their properties and behaviors and their effects on each other, a clear separation of the control system from the controlled system, cognizance in the control system of the controlled system's State, goal-based control built on constraining the controlled system's States, and disciplined techniques for State discovery and characterization. State Analysis (SA) introduces two key diagram types: State Effects and Goal Network diagrams. The team at JPL developed a tool for performing State Analysis. The tool includes a drawing capability, backed by a database that supports the diagram types and the organization of the elements of the SA models. But the tool does not support the usual activities of software engineering and design - a disadvantage, since systems to which State Analysis can be applied tend to be very software-intensive. This motivated the work described in this paper: the development of a preliminary Unified Modeling Language (UML) profile for State Analysis. Having this profile would enable systems engineers to specify a system using the methods and graphical language of State Analysis, which is easily linked with a larger system model in SysML (Systems Modeling Language), while also giving software engineers engaged in implementing the specified control system immediate access to and use of the SA model, in the same language, UML, used for other software design. That is, a State Analysis profile would serve as a shared modeling bridge between system and software models for the behavior aspects of the system. This paper begins with an overview of State Analysis and its underpinnings, followed by an overview of the mapping of SA constructs to the UML metamodel. It then delves into the details of these mappings and the constraints associated with them. Finally, we give an example of the use of the profile for expressing an example SA model.

  14. Students Perception towards the Implementation of Computer Graphics Technology in Class via Unified Theory of Acceptance and Use of Technology (UTAUT) Model

    NASA Astrophysics Data System (ADS)

    Binti Shamsuddin, Norsila

    Technology advancement and development in a higher learning institution give students a chance to be motivated to learn the information technology areas in depth. Students should take hold of the opportunity to blend their skills with these technologies in preparation for graduation. The curriculum itself can raise students' interest and persuade them to be directly involved in the evolution of the technology. The aim of this study is to see how deep the students' involvement is, as well as their acceptance of the technology used in Computer Graphics and Image Processing subjects. The study targets Bachelor students in the Faculty of Industrial Information Technology (FIIT), Universiti Industri Selangor (UNISEL): Bac. in Multimedia Industry, BSc. Computer Science and BSc. Computer Science (Software Engineering). This study utilizes the new Unified Theory of Acceptance and Use of Technology (UTAUT) to further validate the model and enhance our understanding of the adoption of Computer Graphics and Image Processing technologies. Four (4) out of eight (8) independent factors in UTAUT will be studied against the dependent factor.

  15. Achieving control and interoperability through unified model-based systems and software engineering

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert; Ingham, Michel; Dvorak, Daniel

    2005-01-01

    Control and interoperation of complex systems is one of the most difficult challenges facing NASA's Exploration Systems Mission Directorate. An integrated but diverse array of vehicles, habitats, and supporting facilities, evolving over the long course of the enterprise, must perform ever more complex tasks while moving steadily away from the sphere of ground support and intervention.

  16. Finding common ground by unifying autonomy indices to understand needed capabilities

    NASA Astrophysics Data System (ADS)

    Bihl, Trevor; Cox, Chadwick; Jenkins, Todd

    2018-05-01

    Autonomous machines promise to reduce the workload of human operators by replacing some or all cognitive functions with intelligent software. However, development is retarded by disagreement among researchers at very basic levels, including what is meant by autonomy and how to achieve it. Clear definitions are few and no one has successfully bridged the gap between philosophical notions and engineering methods. A variety of autonomy measures are reviewed, highlighting their strengths and weaknesses. Various researchers have developed these autonomy measures to facilitate discussions of capabilities. These measures are also a means of comparing and contrasting autonomy approaches. We contend that a properly structured set of measures is not only useful for these functions, but also provides a philosophical and practical justification, outlines developmental steps, suggests schematic constraints, and implies requirements for tests. As such, we make recommendations for the further development of autonomy measures.

  17. The UMLS Knowledge Source Server: an experience in Web 2.0 technologies.

    PubMed

    Thorn, Karen E; Bangalore, Anantha K; Browne, Allen C

    2007-10-11

    The UMLS Knowledge Source Server (UMLSKS), developed at the National Library of Medicine (NLM), makes the knowledge sources of the Unified Medical Language System (UMLS) available to the research community over the Internet. In 2003, the UMLSKS was redesigned utilizing state-of-the-art technologies available at that time. That design offered a significant improvement over the prior version but presented a set of technology-dependent issues that limited its functionality and usability. Four areas of desired improvement were identified: software interfaces, web interface content, system maintenance/deployment, and user authentication. By employing next generation web technologies, newer authentication paradigms and further refinements in modular design methods, these areas could be addressed and corrected to meet the ever increasing needs of UMLSKS developers. In this paper we detail the issues present with the existing system and describe the new system's design using new technologies considered entrants in the Web 2.0 development era.

  18. The Use of UML for Software Requirements Expression and Management

    NASA Technical Reports Server (NTRS)

    Murray, Alex; Clark, Ken

    2015-01-01

    It is common practice to write English-language "shall" statements to embody detailed software requirements in aerospace software applications. This paper explores the use of the UML language as a replacement for the English language for this purpose. Among the advantages offered by the Unified Modeling Language (UML) is a high degree of clarity and precision in the expression of domain concepts as well as architecture and design. Can this quality of UML be exploited for the definition of software requirements? While expressing logical behavior, interface characteristics, timeliness constraints, and other constraints on software using UML is commonly done and relatively straight-forward, achieving the additional aspects of the expression and management of software requirements that stakeholders expect, especially traceability, is far less so. These other characteristics, concerned with auditing and quality control, include the ability to trace a requirement to a parent requirement (which may well be an English "shall" statement), to trace a requirement to verification activities or scenarios which verify that requirement, and to trace a requirement to elements of the software design which implement that requirement. UML Use Cases, designed for capturing requirements, have not always been satisfactory. Some applications of them simply use the Use Case model element as a repository for English requirement statements. Other applications of Use Cases, in which Use Cases are incorporated into behavioral diagrams that successfully communicate the behaviors and constraints required of the software, do indeed take advantage of UML's clarity, but not in ways that support the traceability features mentioned above. Our approach uses the Stereotype construct of UML to precisely identify elements of UML constructs, especially behaviors such as State Machines and Activities, as requirements, and also to achieve the necessary mapping capabilities. We describe this approach in the context of a space-based software application currently under development at the Jet Propulsion Laboratory.

  19. The caCORE Software Development Kit: streamlining construction of interoperable biomedical information services.

    PubMed

    Phillips, Joshua; Chilukuri, Ram; Fragoso, Gilberto; Warzel, Denise; Covitz, Peter A

    2006-01-06

    Robust, programmatically accessible biomedical information services that syntactically and semantically interoperate with other resources are challenging to construct. Such systems require the adoption of common information models, data representations and terminology standards as well as documented application programming interfaces (APIs). The National Cancer Institute (NCI) developed the cancer common ontologic representation environment (caCORE) to provide the infrastructure necessary to achieve interoperability across the systems it develops or sponsors. The caCORE Software Development Kit (SDK) was designed to provide developers both within and outside the NCI with the tools needed to construct such interoperable software systems. The caCORE SDK requires a Unified Modeling Language (UML) tool to begin the development workflow with the construction of a domain information model in the form of a UML Class Diagram. Models are annotated with concepts and definitions from a description logic terminology source using the Semantic Connector component. The annotated model is registered in the Cancer Data Standards Repository (caDSR) using the UML Loader component. System software is automatically generated using the Codegen component, which produces middleware that runs on an application server. The caCORE SDK was initially tested and validated using a seven-class UML model, and has been used to generate the caCORE production system, which includes models with dozens of classes. The deployed system supports access through object-oriented APIs with consistent syntax for retrieval of any type of data object across all classes in the original UML model. The caCORE SDK is currently being used by several development teams, including by participants in the cancer biomedical informatics grid (caBIG) program, to create compatible data services. caBIG compatibility standards are based upon caCORE resources, and thus the caCORE SDK has emerged as a key enabling technology for caBIG. The caCORE SDK substantially lowers the barrier to implementing systems that are syntactically and semantically interoperable by providing workflow and automation tools that standardize and expedite modeling, development, and deployment. It has gained acceptance among developers in the caBIG program, and is expected to provide a common mechanism for creating data service nodes on the data grid that is under development.

  20. Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation

    NASA Astrophysics Data System (ADS)

    Anisenkov, A. V.

    2018-03-01

    In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the worldwide LHC computing grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and provide a high degree of their accessibility (hundreds of petabytes). The paper considers the ATLAS grid information system (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in unifying the description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate the supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).

  1. Model-based software engineering for an optical navigation system for spacecraft

    NASA Astrophysics Data System (ADS)

    Franz, T.; Lüdtke, D.; Maibaum, O.; Gerndt, A.

    2017-09-01

    The project Autonomous Terrain-based Optical Navigation (ATON) at the German Aerospace Center (DLR) is developing an optical navigation system for future landing missions on celestial bodies such as the moon or asteroids. Image data obtained by optical sensors can be used for autonomous determination of the spacecraft's position and attitude. Camera-in-the-loop experiments in the Testbed for Robotic Optical Navigation (TRON) laboratory and flight campaigns with an unmanned aerial vehicle (UAV) are performed to gather flight data for further development and to test the system in a closed-loop scenario. The software modules are executed in the C++ Tasking Framework that provides the means to concurrently run the modules in separated tasks, send messages between tasks, and schedule task execution based on events. Since the project is developed in collaboration with several institutes in different domains at DLR, clearly defined and well-documented interfaces are necessary. Preventing misconceptions caused by differences between various development philosophies and standards turned out to be challenging. After the first development cycles with manual Interface Control Documents (ICD) and manual implementation of the complex interactions between modules, we switched to a model-based approach. The ATON model covers a graphical description of the modules, their parameters, and communication patterns. Type and consistency checks on this formal level help to reduce errors in the system. The model enables the generation of interfaces and unified data types as well as their documentation. Furthermore, the C++ code for the exchange of data between the modules and the scheduling of the software tasks is created automatically. With this approach, changing the data flow in the system or adding additional components (e.g., a second camera) has become trivial.
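
    The abstract's central point, that a formal module/communication model allows type and consistency checks before glue code is generated, can be pictured with the hedged Python sketch below; the module names, port names, and message types are invented, and the real tooling generates C++.

        # Hypothetical sketch: a declarative description of modules, their ports and
        # connections, with a consistency check before code generation -- mirroring
        # the kind of checks a formal model enables (not the DLR tooling itself).
        modules = {
            "CameraDriver":     {"out": {"image": "ImageFrame"}},
            "FeatureTracker":   {"in": {"image": "ImageFrame"}, "out": {"pose": "PoseEstimate"}},
            "NavigationFilter": {"in": {"pose": "PoseEstimate"}},
        }

        connections = [
            ("CameraDriver", "image", "FeatureTracker", "image"),
            ("FeatureTracker", "pose", "NavigationFilter", "pose"),
        ]

        def check_connections(modules, connections):
            """Verify each connection links an existing output to an input of the same type."""
            for src, src_port, dst, dst_port in connections:
                out_type = modules[src].get("out", {}).get(src_port)
                in_type = modules[dst].get("in", {}).get(dst_port)
                if out_type is None or in_type is None or out_type != in_type:
                    raise TypeError(f"Inconsistent connection {src}.{src_port} -> {dst}.{dst_port}")
            return True

        print(check_connections(modules, connections))  # True -> safe to generate glue code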

  2. Model-based software engineering for an optical navigation system for spacecraft

    NASA Astrophysics Data System (ADS)

    Franz, T.; Lüdtke, D.; Maibaum, O.; Gerndt, A.

    2018-06-01

    The project Autonomous Terrain-based Optical Navigation (ATON) at the German Aerospace Center (DLR) is developing an optical navigation system for future landing missions on celestial bodies such as the moon or asteroids. Image data obtained by optical sensors can be used for autonomous determination of the spacecraft's position and attitude. Camera-in-the-loop experiments in the Testbed for Robotic Optical Navigation (TRON) laboratory and flight campaigns with an unmanned aerial vehicle (UAV) are performed to gather flight data for further development and to test the system in a closed-loop scenario. The software modules are executed in the C++ Tasking Framework that provides the means to concurrently run the modules in separated tasks, send messages between tasks, and schedule task execution based on events. Since the project is developed in collaboration with several institutes in different domains at DLR, clearly defined and well-documented interfaces are necessary. Preventing misconceptions caused by differences between various development philosophies and standards turned out to be challenging. After the first development cycles with manual Interface Control Documents (ICD) and manual implementation of the complex interactions between modules, we switched to a model-based approach. The ATON model covers a graphical description of the modules, their parameters, and communication patterns. Type and consistency checks on this formal level help to reduce errors in the system. The model enables the generation of interfaces and unified data types as well as their documentation. Furthermore, the C++ code for the exchange of data between the modules and the scheduling of the software tasks is created automatically. With this approach, changing the data flow in the system or adding additional components (e.g., a second camera) has become trivial.

  3. Building a Science Software Institute: Synthesizing the Lessons Learned from the ISEES and WSSI Software Institute Conceptualization Efforts

    NASA Astrophysics Data System (ADS)

    Idaszak, R.; Lenhardt, W. C.; Jones, M. B.; Ahalt, S.; Schildhauer, M.; Hampton, S. E.

    2014-12-01

    The NSF, in an effort to support the creation of sustainable science software, funded 16 science software institute conceptualization efforts. The goal of these conceptualization efforts is to explore approaches to creating the institutional, sociological, and physical infrastructures to support sustainable science software. This paper will present the lessons learned from two of these conceptualization efforts, the Institute for Sustainable Earth and Environmental Software (ISEES - http://isees.nceas.ucsb.edu) and the Water Science Software Institute (WSSI - http://waters2i2.org). ISEES is a multi-partner effort led by the National Center for Ecological Analysis and Synthesis (NCEAS). WSSI, also a multi-partner effort, is led by the Renaissance Computing Institute (RENCI). The two conceptualization efforts have been collaborating due to the complementarity of their approaches and the potential synergies of their science focus. ISEES and WSSI have engaged in a number of activities to address the challenges of science software, such as workshops, hackathons, and coding efforts. More recently, the two institutes have also collaborated on joint activities including training, proposals, and papers. In addition to presenting lessons learned, this paper will synthesize across the two efforts to project a unified vision for a science software institute.

  4. [Search for potential gastric cancer biomarkers using low molecular weight blood plasma proteome profiling by mass spectrometry].

    PubMed

    Shevchenko, V E; Arnotskaia, N E; Ogorodnikova, E V; Davydov, M M; Ibraev, M A; Turkin, I N; Davydov, M I

    2014-01-01

    Gastric cancer, one of the most widespread malignant tumors, still lacks reliable serum/plasma biomarkers for its early detection. In this study we have developed, unified, and tested a new methodology for the search of gastric cancer biomarkers based on profiling of the low molecular weight proteome (LMWP) (1-17 kDa). This approach included three main components: sample pre-fractionation, matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS), and data analysis by a bioinformatics software package. The applicability and prospects of the developed approach for detection of potential gastric cancer markers during LMWP analysis have been demonstrated using 69 plasma samples from patients with gastric cancer (stages I-IV) and 238 control samples. The study revealed peptides/polypeptides that may potentially be used for detection of this pathology.

  5. ACS (Alma Common Software) operating a set of robotic telescopes

    NASA Astrophysics Data System (ADS)

    Westhues, C.; Ramolla, M.; Lemke, R.; Haas, M.; Drass, H.; Chini, R.

    2014-07-01

    We use the ALMA Common Software (ACS) to establish a unified middleware for robotic observations with the 40cm Optical, 80cm Infrared and 1.5m Hexapod telescopes located at OCA (Observatorio Cerro Armazones) and the ESO 1-m located at La Silla. ACS makes it possible to hide the technical specifics, such as mount type or camera model, from the observer. Furthermore, ACS provides a uniform interface to the different telescopes, allowing us to run the same planning program for each telescope. Observations are carried out for long-term monitoring campaigns to study the variability of stars and AGN. We present here the specific implementations for the different telescopes.
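
    As a hedged, simplified illustration of the "uniform interface over different telescopes" idea (not the actual ACS component model; class and method names are invented), the Python sketch below runs the same observing routine against two different telescope backends.

        # Hypothetical sketch: one planning routine driving different telescopes
        # through a common interface, so mount- and camera-specific details stay hidden.
        from abc import ABC, abstractmethod

        class Telescope(ABC):
            @abstractmethod
            def point(self, ra_deg: float, dec_deg: float) -> None: ...
            @abstractmethod
            def expose(self, seconds: float) -> str: ...

        class OpticalTelescope(Telescope):
            def point(self, ra_deg, dec_deg):
                print(f"[optical mount] slewing to {ra_deg:.3f}, {dec_deg:.3f}")
            def expose(self, seconds):
                return f"optical_frame_{seconds:.0f}s.fits"

        class InfraredTelescope(Telescope):
            def point(self, ra_deg, dec_deg):
                print(f"[IR mount] slewing to {ra_deg:.3f}, {dec_deg:.3f}")
            def expose(self, seconds):
                return f"ir_frame_{seconds:.0f}s.fits"

        def observe(telescope: Telescope, ra_deg, dec_deg, seconds):
            """Same planning code runs unchanged on every telescope backend."""
            telescope.point(ra_deg, dec_deg)
            return telescope.expose(seconds)

        for scope in (OpticalTelescope(), InfraredTelescope()):
            print(observe(scope, 83.633, 22.014, 60))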

  6. Adaptive unified continuum FEM modeling of a 3D FSI benchmark problem.

    PubMed

    Jansson, Johan; Degirmenci, Niyazi Cem; Hoffman, Johan

    2017-09-01

    In this paper, we address a 3D fluid-structure interaction benchmark problem that represents important characteristics of biomedical modeling. We present a goal-oriented adaptive finite element methodology for incompressible fluid-structure interaction based on a streamline diffusion-type stabilization of the balance equations for mass and momentum for the entire continuum in the domain, which is implemented in the Unicorn/FEniCS software framework. A phase marker function and its corresponding transport equation are introduced to select the constitutive law, where the mesh tracks the discontinuous fluid-structure interface. This results in a unified simulation method for fluids and structures. We present detailed results for the benchmark problem compared with experiments, together with a mesh convergence study. Copyright © 2016 John Wiley & Sons, Ltd.

  7. The application of the unified modeling language in object-oriented analysis of healthcare information systems.

    PubMed

    Aggarwal, Vinod

    2002-10-01

    This paper concerns itself with the beneficial effects of the Unified Modeling Language (UML), a nonproprietary object modeling standard, in specifying, visualizing, constructing, documenting, and communicating the model of a healthcare information system from the user's perspective. The author outlines the process of object-oriented analysis (OOA) using the UML and illustrates this with healthcare examples to demonstrate the practicality of application of the UML by healthcare personnel to real-world information system problems. The UML will accelerate advanced uses of object-orientation such as reuse technology, resulting in significantly higher software productivity. The UML is also applicable in the context of a component paradigm that promises to enhance the capabilities of healthcare information systems and simplify their management and maintenance.

  8. Centrally managed unified shared virtual address space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkes, John

    Systems, apparatuses, and methods for managing a unified shared virtual address space. A host may execute system software and manage a plurality of nodes coupled to the host. The host may send work tasks to the nodes, and for each node, the host may externally manage the node's view of the system's virtual address space. Each node may have a central processing unit (CPU) style memory management unit (MMU) with an internal translation lookaside buffer (TLB). In one embodiment, the host may be coupled to a given node via an input/output memory management unit (IOMMU) interface, where the IOMMU frontend interface shares the TLB with the given node's MMU. In another embodiment, the host may control the given node's view of virtual address space via memory-mapped control registers.
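
    As a loose, purely illustrative Python sketch of a host externally managing each node's view of a shared virtual address space (not the mechanism claimed in the record; the page size, node IDs, and mappings are invented), a host-side table can resolve per-node translations as follows.

        # Hypothetical sketch: a host-managed translation table per node, standing in
        # for the externally managed address-space view described in the record.
        PAGE_SIZE = 4096

        class Host:
            def __init__(self):
                self.page_tables = {}          # node_id -> {virtual_page: physical_page}

            def map_page(self, node_id, vpage, ppage):
                self.page_tables.setdefault(node_id, {})[vpage] = ppage

            def translate(self, node_id, vaddr):
                """Resolve a node's virtual address using the host-managed table."""
                vpage, offset = divmod(vaddr, PAGE_SIZE)
                ppage = self.page_tables.get(node_id, {}).get(vpage)
                if ppage is None:
                    raise KeyError(f"page fault: node {node_id}, vaddr {vaddr:#x}")
                return ppage * PAGE_SIZE + offset

        host = Host()
        host.map_page(node_id=0, vpage=0x10, ppage=0x2a)
        print(hex(host.translate(0, 0x10008)))   # 0x2a008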

  9. OBO to UML: Support for the development of conceptual models in the biomedical domain.

    PubMed

    Waldemarin, Ricardo C; de Farias, Cléver R G

    2018-04-01

    A conceptual model abstractly defines a number of concepts and their relationships for the purposes of understanding and communication. Once a conceptual model is available, it can also be used as a starting point for the development of a software system. The development of conceptual models using the Unified Modeling Language (UML) facilitates the representation of modeled concepts and allows software developers to directly reuse these concepts in the design of a software system. The OBO Foundry represents the most relevant collaborative effort towards the development of ontologies in the biomedical domain. The development of UML conceptual models in the biomedical domain may benefit from the use of domain-specific semantics and notation. Further, the development of these models may also benefit from the reuse of knowledge contained in OBO ontologies. This paper investigates the support for the development of conceptual models in the biomedical domain using UML as a conceptual modeling language and using the support provided by the OBO Foundry for the development of biomedical ontologies, namely the entity kind and relationship type definitions provided by the Basic Formal Ontology (BFO) and the OBO Core Relations Ontology (OBO Core), respectively. Further, the paper investigates the support for the reuse of biomedical knowledge currently available in OBOFFF ontologies in the development of these conceptual models. The paper describes a UML profile for the OBO Core Relations Ontology, which basically defines a number of stereotypes to represent BFO entity kind and OBO Core relationship type definitions. The paper also presents a support toolset consisting of a graphical editor named OBO-RO Editor, which directly supports the development of UML models using the extensions defined by our profile, and a command-line tool named OBO2UML, which directly converts an OBOFFF ontology into a UML model. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Development Roadmap of an Evolvable and Extensible Multi-Mission Telecom Planning and Analysis Framework

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Tung, Ramona H.; Lee, Charles H.

    2003-01-01

    In this paper, we describe the development roadmap and discuss the various challenges of an evolvable and extensible multi-mission telecom planning and analysis framework. Our long-term goal is to develop a set of powerful, flexible telecommunications analysis tools that can be easily adapted to different missions while maintaining the common Deep Space Communication requirements. The ability to reuse the DSN ground models and the common software utilities in our adaptations has contributed significantly to our development efforts, measured in terms of consistency, accuracy, and minimal effort redundancy, which can translate into shorter development time and major cost savings for the individual missions. In our roadmap, we address the design principles, technical achievements, and associated challenges for the following telecom analysis tools: (i) Telecom Forecaster Predictor - TFP, (ii) Unified Telecom Predictor - UTP, (iii) Generalized Telecom Predictor - GTP, (iv) Generic TFP, (v) Web-based TFP, (vi) Application Program Interface - API, and (vii) Mars Relay Network Planning Tool - MRNPT.

  11. A Dependable Massive Storage Service for Medical Imaging.

    PubMed

    Núñez-Gaona, Marco Antonio; Marcelín-Jiménez, Ricardo; Gutiérrez-Martínez, Josefina; Aguirre-Meneses, Heriberto; Gonzalez-Compean, José Luis

    2018-05-18

    We present the construction of Babel, a distributed storage system that meets stringent requirements on dependability, availability, and scalability. Together with Babel, we developed an application that uses our system to store medical images. Accordingly, we show the feasibility of our proposal to provide an alternative solution for massive scientific storage and describe the software architecture style that manages the DICOM image life cycle, utilizing Babel as a virtual local storage component for a picture archiving and communication system (PACS-Babel Interface). Furthermore, we describe the communication interface in the Unified Modeling Language (UML) and show how it can be extended to manage the hard work associated with data migration processes on PACS in case of updates or disaster recovery.

  12. A Vision for the Future of Counseling: The 20/20 "Principles for Unifying and Strengthening the Profession"

    ERIC Educational Resources Information Center

    Kaplan, David M.; Gladding, Samuel T.

    2011-01-01

    This article describes the development of the historic "Principles for Unifying and Strengthening the Profession." An outcome of the "20/20: A Vision for the Future of Counseling" initiative, this document delineates a core set of principles that unifies and advances the counseling profession. "Principles for Unifying and Strengthening the…

  13. Formal verification of software-based medical devices considering medical guidelines.

    PubMed

    Daw, Zamira; Cleaveland, Rance; Vetter, Marcus

    2014-01-01

    Software-based devices have increasingly become an important part of several clinical scenarios. Due to their critical impact on human life, medical devices have very strict safety requirements. It is therefore necessary to apply verification methods to ensure that the safety requirements are met. Verification of software-based devices is commonly limited to the verification of their internal elements without considering the interaction that these elements have with other devices as well as the application environment in which they are used. Medical guidelines define clinical procedures, which contain the necessary information to completely verify medical devices. The objective of this work was to incorporate medical guidelines into the verification process in order to increase the reliability of the software-based medical devices. Medical devices are developed using the model-driven method deterministic models for signal processing of embedded systems (DMOSES). This method uses unified modeling language (UML) models as a basis for the development of medical devices. The UML activity diagram is used to describe medical guidelines as workflows. The functionality of the medical devices is abstracted as a set of actions that is modeled within these workflows. In this paper, the UML models are verified using the UPPAAL model-checker. For this purpose, a formalization approach for the UML models using timed automaton (TA) is presented. A set of requirements is verified by the proposed approach for the navigation-guided biopsy. This shows the capability for identifying errors or optimization points both in the workflow and in the system design of the navigation device. In addition to the above, an open source eclipse plug-in was developed for the automated transformation of UML models into TA models that are automatically verified using UPPAAL. The proposed method enables developers to model medical devices and their clinical environment using clinical workflows as one UML diagram. Additionally, the system design can be formally verified automatically.

  14. PubMedPortable: A Framework for Supporting the Development of Text Mining Applications.

    PubMed

    Döring, Kersten; Grüning, Björn A; Telukunta, Kiran K; Thomas, Philippe; Günther, Stefan

    2016-01-01

    Information extraction from biomedical literature is continuously growing in scope and importance. Many tools exist that perform named entity recognition, e.g. of proteins, chemical compounds, and diseases. Furthermore, several approaches deal with the extraction of relations between identified entities. The BioCreative community supports these developments with yearly open challenges, which led to a standardised XML text annotation format called BioC. PubMed provides access to the largest open biomedical literature repository, but there is no unified way of connecting its data to natural language processing tools. Therefore, an appropriate data environment is needed as a basis to combine different software solutions and to develop customised text mining applications. PubMedPortable builds a relational database and a full text index on PubMed citations. It can be applied either to the complete PubMed data set or an arbitrary subset of downloaded PubMed XML files. The software provides the infrastructure to combine stand-alone applications by exporting different data formats, e.g. BioC. The presented workflows show how to use PubMedPortable to retrieve, store, and analyse a disease-specific data set. The provided use cases are well documented in the PubMedPortable wiki. The open-source software library is small, easy to use, and scalable to the user's system requirements. It is freely available for Linux on the web at https://github.com/KerstenDoering/PubMedPortable and for other operating systems as a virtual container. The approach was tested extensively and applied successfully in several projects.
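
    PubMedPortable's own API is not reproduced here; as a hedged sketch of the two building blocks it describes, a relational store plus a full-text index over citations, the Python code below uses the standard-library sqlite3 module with its FTS5 extension on two invented records (FTS5 availability depends on the SQLite build).

        # Hypothetical sketch (not the PubMedPortable API): store citations in SQLite
        # and query them through a full-text index, the same two building blocks the
        # framework provides at scale. Requires an SQLite build with FTS5 enabled.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE VIRTUAL TABLE citations USING fts5(pmid, title, abstract)")
        conn.executemany(
            "INSERT INTO citations VALUES (?, ?, ?)",
            [
                ("11111111", "A kinase inhibitor study", "We test a novel kinase inhibitor ..."),
                ("22222222", "Unrelated plant genomics", "Genome assembly of a model plant ..."),
            ],
        )

        # Full-text query: which citations mention 'kinase'?
        hits = conn.execute(
            "SELECT pmid, title FROM citations WHERE citations MATCH ?", ("kinase",)
        ).fetchall()
        print(hits)   # [('11111111', 'A kinase inhibitor study')]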

  15. PubMedPortable: A Framework for Supporting the Development of Text Mining Applications

    PubMed Central

    Döring, Kersten; Grüning, Björn A.; Telukunta, Kiran K.; Thomas, Philippe; Günther, Stefan

    2016-01-01

    Information extraction from biomedical literature is continuously growing in scope and importance. Many tools exist that perform named entity recognition, e.g. of proteins, chemical compounds, and diseases. Furthermore, several approaches deal with the extraction of relations between identified entities. The BioCreative community supports these developments with yearly open challenges, which led to a standardised XML text annotation format called BioC. PubMed provides access to the largest open biomedical literature repository, but there is no unified way of connecting its data to natural language processing tools. Therefore, an appropriate data environment is needed as a basis to combine different software solutions and to develop customised text mining applications. PubMedPortable builds a relational database and a full text index on PubMed citations. It can be applied either to the complete PubMed data set or an arbitrary subset of downloaded PubMed XML files. The software provides the infrastructure to combine stand-alone applications by exporting different data formats, e.g. BioC. The presented workflows show how to use PubMedPortable to retrieve, store, and analyse a disease-specific data set. The provided use cases are well documented in the PubMedPortable wiki. The open-source software library is small, easy to use, and scalable to the user’s system requirements. It is freely available for Linux on the web at https://github.com/KerstenDoering/PubMedPortable and for other operating systems as a virtual container. The approach was tested extensively and applied successfully in several projects. PMID:27706202

  16. EPR-based material modelling of soils

    NASA Astrophysics Data System (ADS)

    Faramarzi, Asaad; Alani, Amir M.

    2013-04-01

    In the past few decades, as a result of the rapid developments in computational software and hardware, alternative computer-aided pattern recognition approaches have been introduced to modelling many engineering problems, including constitutive modelling of materials. The main idea behind pattern recognition systems is that they learn adaptively from experience and extract various discriminants, each appropriate for its purpose. In this work an approach is presented for developing material models for soils based on evolutionary polynomial regression (EPR). EPR is a recently developed hybrid data mining technique that searches for structured mathematical equations (representing the behaviour of a system) using a genetic algorithm and the least squares method. Stress-strain data from triaxial tests are used to train and develop EPR-based material models for soil. The developed models are compared with some of the well-known conventional material models and it is shown that EPR-based models can provide a better prediction for the behaviour of soils. The main benefits of EPR-based material models are that they provide a unified approach to constitutive modelling of all materials (i.e., all aspects of material behaviour can be implemented within the unified environment of an EPR model) and that they do not require any arbitrary choice of constitutive (mathematical) models. In EPR-based material models there are no material parameters to be identified, and as the model is trained directly from experimental data, EPR-based material models are the shortest route from experimental research (data) to numerical modelling. Another advantage of the EPR-based constitutive model is that as more experimental data become available, the quality of the EPR prediction can be improved by learning from the additional data, and therefore the EPR model can become more effective and robust. The developed EPR-based material models can be incorporated in finite element (FE) analysis.
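
    EPR couples a genetic search over candidate polynomial structures with least-squares fitting of their coefficients; the hedged Python sketch below shows only the least-squares half on synthetic stress-strain data with a fixed library of monomial terms (the evolutionary structure search and the actual EPR term forms are omitted).

        # Hypothetical sketch: fit polynomial terms of strain to stress data by least
        # squares, the regression half of EPR (the genetic search over term structures
        # is omitted). Data below are synthetic, for illustration only.
        import numpy as np

        strain = np.linspace(0.0, 0.05, 20)
        stress = 2.0e3 * strain - 1.5e4 * strain**2 + np.random.normal(0, 0.5, strain.size)

        # Candidate term library: [strain, strain^2, strain^3]
        X = np.column_stack([strain, strain**2, strain**3])
        coeffs, *_ = np.linalg.lstsq(X, stress, rcond=None)

        print("fitted coefficients:", np.round(coeffs, 1))
        predicted = X @ coeffs   # model prediction for the training strains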

  17. Development of a State Machine Sequencer for the Keck Interferometer: Evolution, Development and Lessons Learned using a CASE Tool Approach

    NASA Technical Reports Server (NTRS)

    Rede, Leonard J.; Booth, Andrew; Hsieh, Jonathon; Summer, Kellee

    2004-01-01

    This paper presents a discussion of the evolution of a sequencer from a simple EPICS (Experimental Physics and Industrial Control System) based sequencer into a complex implementation designed utilizing UML (Unified Modeling Language) methodologies and a CASE (Computer Aided Software Engineering) tool approach. The main purpose of the sequencer (called the IF Sequencer) is to provide overall control of the Keck Interferometer to enable science operations to be carried out by a single operator (and/or observer). The interferometer links the two 10m telescopes of the W. M. Keck Observatory at Mauna Kea, Hawaii. The IF Sequencer is a high-level, multi-threaded, Harel finite state machine software program designed to orchestrate several lower-level hardware and software hard real-time subsystems that must perform their work in a specific and sequential order. The sequencing need not be done in hard real-time. Each state machine thread commands either a high-speed real-time multiple mode embedded controller via CORBA, or slower controllers via EPICS Channel Access interfaces. The overall operation of the system is simplified by the automation. The UML is discussed and our use of it to implement the sequencer is presented. The decision to use the Rhapsody product as our CASE tool is explained and reflected upon. Most importantly, a section on lessons learned is presented and the difficulty of integrating CASE tool automatically generated C++ code into a large control system consisting of multiple infrastructures is presented.

  18. Development of a state machine sequencer for the Keck Interferometer: evolution, development, and lessons learned using a CASE tool approach

    NASA Astrophysics Data System (ADS)

    Reder, Leonard J.; Booth, Andrew; Hsieh, Jonathan; Summers, Kellee R.

    2004-09-01

    This paper presents a discussion of the evolution of a sequencer from a simple Experimental Physics and Industrial Control System (EPICS) based sequencer into a complex implementation designed utilizing UML (Unified Modeling Language) methodologies and a Computer Aided Software Engineering (CASE) tool approach. The main purpose of the Interferometer Sequencer (called the IF Sequencer) is to provide overall control of the Keck Interferometer to enable science operations to be carried out by a single operator (and/or observer). The interferometer links the two 10m telescopes of the W. M. Keck Observatory at Mauna Kea, Hawaii. The IF Sequencer is a high-level, multi-threaded, Harel finite state machine software program designed to orchestrate several lower-level hardware and software hard real-time subsystems that must perform their work in a specific and sequential order. The sequencing need not be done in hard real-time. Each state machine thread commands either a high-speed real-time multiple mode embedded controller via CORBA, or slower controllers via EPICS Channel Access interfaces. The overall operation of the system is simplified by the automation. The UML is discussed and our use of it to implement the sequencer is presented. The decision to use the Rhapsody product as our CASE tool is explained and reflected upon. Most importantly, a section on lessons learned is presented and the difficulty of integrating CASE tool automatically generated C++ code into a large control system consisting of multiple infrastructures is presented.

  19. Knowledge synthesis with maps of neural connectivity.

    PubMed

    Tallis, Marcelo; Thompson, Richard; Russ, Thomas A; Burns, Gully A P C

    2011-01-01

    This paper describes software for neuroanatomical knowledge synthesis based on neural connectivity data. This software supports a mature methodology developed since the early 1990s. Over this time, the Swanson laboratory at USC has generated an account of the neural connectivity of the sub-structures of the hypothalamus, amygdala, septum, hippocampus, and bed nucleus of the stria terminalis. This is based on neuroanatomical data maps drawn into a standard brain atlas by experts. In earlier work, we presented an application for visualizing and comparing anatomical macro connections using the Swanson third edition atlas as a framework for accurate registration. Here we describe major improvements to the NeuARt application based on the incorporation of a knowledge representation of experimental design. We also present improvements in the interface and features of the data mapping components within a unified web-application. As a step toward developing an accurate sub-regional account of neural connectivity, we provide navigational access between the data maps and a semantic representation of area-to-area connections that they support. We do so based on an approach called the "Knowledge Engineering from Experimental Design" (KEfED) model, which is based on experimental variables. We have extended the underlying KEfED representation of tract-tracing experiments by incorporating the definition of a neuroanatomical data map as a measurement variable in the study design. This paper describes the software design of a web-application that allows anatomical data sets to be described within a standard experimental context and thus indexed by non-spatial experimental design features.

  20. LEGOS: Object-based software components for mission-critical systems. Final report, June 1, 1995--December 31, 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-08-01

    An estimated 85% of the installed base of software is a custom application with a production quantity of one. In practice, almost 100% of military software systems are custom software. Paradoxically, the marginal costs of producing additional units are near zero. So why hasn't the software market, a market with high design costs and low production costs, evolved like other similar custom widget industries, such as automobiles and hardware chips? The military software industry seems immune to market pressures that have motivated a multilevel supply chain structure in other widget industries: design cost recovery, improved quality through specialization, and rapid assembly from purchased components. The primary goal of the ComponentWare Consortium (CWC) technology plan was to overcome barriers to building and deploying mission-critical information systems by using verified, reusable software components (ComponentWare). The adoption of the ComponentWare infrastructure is predicated upon a critical mass of the leading platform vendors' inevitable adoption of emerging, object-based, distributed computing frameworks--initially CORBA and COM/OLE. The long-range goal of this work is to build and deploy military systems from verified reusable architectures. The promise of component-based applications is to enable developers to snap together new applications by mixing and matching prefabricated software components. A key result of this effort is the concept of reusable software architectures. A second important contribution is the notion that a software architecture is something that can be captured in a formal language and reused across multiple applications. The formalization and reuse of software architectures provide major cost and schedule improvements. The Unified Modeling Language (UML) is fast becoming the industry standard for object-oriented analysis and design notation for object-based systems. However, the lack of a standard real-time distributed object operating system, the lack of a standard Computer-Aided Software Environment (CASE) tool notation, and the lack of a standard CASE tool repository have limited the realization of component software. The approach to fulfilling this need is the software component factory innovation. The factory approach takes advantage of emerging standards such as UML, CORBA, Java and the Internet. The key technical innovation of the software component factory is the ability to assemble and test new system configurations as well as assemble new tools on demand from existing tools and architecture design repositories.

  1. Fast simulation of Proton Induced X-Ray Emission Tomography using CUDA

    NASA Astrophysics Data System (ADS)

    Beasley, D. G.; Marques, A. C.; Alves, L. C.; da Silva, R. C.

    2013-07-01

    A new 3D Proton Induced X-Ray Emission Tomography (PIXE-T) and Scanning Transmission Ion Microscopy Tomography (STIM-T) simulation software has been developed in Java and uses NVIDIA™ Common Unified Device Architecture (CUDA) to calculate the X-ray attenuation for large detector areas. A challenge with PIXE-T is to get sufficient counts while retaining a small beam spot size. Therefore a high geometric efficiency is required. However, as the detector solid angle increases the calculations required for accurate reconstruction of the data increase substantially. To overcome this limitation, the CUDA parallel computing platform was used which enables general purpose programming of NVIDIA graphics processing units (GPUs) to perform computations traditionally handled by the central processing unit (CPU). For simulation performance evaluation, the results of a CPU- and a CUDA-based simulation of a phantom are presented. Furthermore, a comparison with the simulation code in the PIXE-Tomography reconstruction software DISRA (A. Sakellariou, D.N. Jamieson, G.J.F. Legge, 2001) is also shown. Compared to a CPU implementation, the CUDA based simulation is approximately 30× faster.
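
    The computational pattern being accelerated, one independent attenuation calculation per detector element, can be pictured in plain Python/NumPy as below; the geometry, path lengths, and attenuation coefficient are invented, and this is an illustration of the data parallelism rather than the CUDA implementation or the DISRA algorithm.

        # Hypothetical sketch: attenuation of X-rays along straight paths to many
        # detector elements, vectorised over detectors -- the same "one independent
        # calculation per detector element" structure that maps well onto a GPU.
        import numpy as np

        n_detectors = 10_000
        mu = 5.0                                                    # attenuation coefficient (1/cm), assumed
        path_lengths = np.random.uniform(0.0, 0.2, n_detectors)    # cm of material traversed, invented
        emitted = np.ones(n_detectors)                              # relative X-ray yield toward each detector

        detected = emitted * np.exp(-mu * path_lengths)             # Beer-Lambert attenuation per detector
        print(detected[:5])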

  2. BioNet Digital Communications Framework

    NASA Technical Reports Server (NTRS)

    Gifford, Kevin; Kuzminsky, Sebastian; Williams, Shea

    2010-01-01

    BioNet v2 is a peer-to-peer middleware that enables digital communication devices to talk to each other. It provides a software development framework, standardized application, network-transparent device integration services, a flexible messaging model, and network communications for distributed applications. BioNet is an implementation of the Constellation Program Command, Control, Communications and Information (C3I) Interoperability specification, given in CxP 70022-01. The system architecture provides the necessary infrastructure for the integration of heterogeneous wired and wireless sensing and control devices into a unified data system with a standardized application interface, providing plug-and-play operation for hardware and software systems. BioNet v2 features a naming schema for mobility and coarse-grained localization information, data normalization within a network-transparent device driver framework, enabling of network communications to non-IP devices, and fine-grained application control of data subscription bandwidth usage. BioNet directly integrates Disruption Tolerant Networking (DTN) as a communications technology, enabling networked communications with assets that are only intermittently connected, including orbiting relay satellites and planetary rover vehicles.

  3. Working with the HL7 metamodel in a Model Driven Engineering context.

    PubMed

    Martínez-García, A; García-García, J A; Escalona, M J; Parra-Calderón, C L

    2015-10-01

    HL7 (Health Level 7) International is an organization that defines health information standards. Most HL7 domain information models have been designed according to a proprietary graphic language whose domain models are based on the HL7 metamodel. Many researchers have considered using HL7 in the MDE (Model-Driven Engineering) context. A limitation has been identified: all MDE tools support UML (Unified Modeling Language), which is a standard model language, but most do not support the HL7 proprietary model language. We want to support software engineers without HL7 experience, so that they can model real-world problems by defining system requirements in UML that are transparently compliant with HL7 domain models. The objective of the present research is to connect HL7 with software analysis using a generic model-based approach. This paper introduces a first approach to an HL7 MDE solution that considers the MIF (Model Interchange Format) metamodel proposed by HL7 by making use of a plug-in developed in the EA (Enterprise Architect) tool. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. SARA: a software environment for the analysis of relaxation data acquired with accordion spectroscopy

    PubMed Central

    Harden, Bradley J.

    2014-01-01

    We present SARA (Software for Accordion Relaxation Analysis), an interactive and user-friendly MATLAB software environment designed for analyzing relaxation data obtained with accordion spectroscopy. Accordion spectroscopy can be used to measure nuclear magnetic resonance (NMR) relaxation rates in a fraction of the time required by traditional methods, yet data analysis can be intimidating and no unified software packages are available to assist investigators. Hence, the technique has not achieved widespread use within the NMR community. SARA offers users a selection of analysis protocols spanning those presented in the literature thus far, with modifications permitting a more general application to crowded spectra such as those of proteins. We discuss the advantages and limitations of each fitting method and suggest a protocol combining the strengths of each procedure to achieve optimal results. In the end, SARA provides an environment for facile extraction of relaxation rates and should promote routine application of accordion relaxation spectroscopy. PMID:24408364

  5. Sequence System Building Blocks: Using a Component Architecture for Sequencing Software

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara A.; O'Reilly, Taifun

    2005-01-01

    Over the last few years software engineering has made significant strides in making more flexible architectures and designs possible. However, at the same time, spacecraft have become more complex and flight software has become more sophisticated. Typically spacecraft are often one-of-a-kind entities that have different hardware designs, different capabilities, different instruments, etc. Ground software has become more complex and operations teams have had to learn a myriad of tools that all have different user interfaces and represent data in different ways. At Jet Propulsion Laboratory (JPL) these themes have collided to require a new approach to producing ground system software. Two different groups have been looking at tackling this particular problem. One group is working for the JPL Mars Technology Program in the Mars Science Laboratory (MSL) Focused Technology area. The other group is the JPL Multi-Mission Planning and Sequencing Group. The major concept driving these two approaches on a similar path is to provide software that can be a more cohesive flexible system that provides a set of planning and sequencing services. This paper describes the efforts that have been made to date to create a unified approach from these disparate groups.

  6. Sequencing System Building Blocks: Using a Component Architecture for Sequencing Software

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara A.; O'Reilly, Taifun

    2006-01-01

    Over the last few years software engineering has made significant strides in making more flexible architectures and designs possible. However, at the same time, spacecraft have become more complex and flight software has become more sophisticated. Typically spacecraft are often one-of-a-kind entities that have different hardware designs, different capabilities, different instruments, etc. Ground software has become more complex and operations teams have had to learn a myriad of tools that all have different user interfaces and represent data in different ways. At Jet Propulsion Laboratory (JPL) these themes have collided to require a new approach to producing ground system software. Two different groups have been looking at tackling this particular problem. One group is working for the JPL Mars Technology Program in the Mars Science Laboratory (MSL) Focused Technology area. The other group is the JPL Multi-Mission Planning and Sequencing Group. The major concept driving these two approaches on a similar path is to provide software that can be a more cohesive flexible system that provides a set of planning and sequencing system of services. This paper describes the efforts that have been made to date to create a unified approach from these disparate groups.

  7. Extending Cross-Generational Knowledge Flow Research in Edge Organizations

    DTIC Science & Technology

    2008-06-01

    letting Protégé generate the basic user interface, and then gradually write widgets and plug-ins to customize its look-and-feel and behavior. ... (2007a) focused on cross-generational knowledge flows in edge organizations. We found that cross-generational biases affect tacit knowledge transfer... the software engineering field, many matured methodologies already exist, such as Rational Unified Process (Hunt, 2003) or Extreme Programming (Beck

  8. An Assessment of Security Vulnerabilities Comprehension of Cloud Computing Environments: A Quantitative Study Using the Unified Theory of Acceptance and Use

    ERIC Educational Resources Information Center

    Venkatesh, Vijay P.

    2013-01-01

    The current computing landscape owes its roots to the birth of hardware and software technologies from the 1940s and 1950s. Since then, the advent of mainframes, miniaturized computing, and internetworking has given rise to the now prevalent cloud computing era. In the past few months just after 2010, cloud computing adoption has picked up pace…

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jonathan Helmus, Scott Collis

    The Python-ARM Radar Toolkit (Py-ART) is a collection of radar quality control and retrieval codes which all work on two unifying Python objects: the PyRadar and PyGrid objects. By building ingests for several popular radar formats and then abstracting the interface, Py-ART greatly simplifies data processing compared with several other available utilities. In addition, Py-ART makes use of NumPy arrays as its primary storage mechanism, enabling use of existing and extensive community software tools.
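
    A minimal usage sketch follows, assuming Py-ART is installed, that "radar_volume.nc" is a file one of its ingests understands, and that the volume carries a field named "reflectivity" (field names vary by format); pyart.io.read is the toolkit's generic ingest entry point.

        # Minimal usage sketch for Py-ART (file name and field name are assumptions).
        import pyart

        radar = pyart.io.read("radar_volume.nc")      # unified ingest -> radar object
        print(list(radar.fields))                     # available moments in this volume

        refl = radar.fields["reflectivity"]["data"]   # masked NumPy array of gate values
        print(refl.shape, float(refl.mean()))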

  10. Towards a Unified Approach to Information Integration - A review paper on data/information fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitney, Paul D.; Posse, Christian; Lei, Xingye C.

    2005-10-14

    Information or data fusion, the integration of data from different sources, is ubiquitous in many applications, from epidemiology, medical, biological, political, and intelligence to military applications. Data fusion involves integration of spectral, imaging, text, and many other sensor data. For example, in epidemiology, information is often obtained from many studies conducted by different researchers in different regions with different protocols. In the medical field, the diagnosis of a disease is often based on imaging (MRI, X-Ray, CT), clinical examination, and lab results. In the biological field, information is obtained based on studies conducted on many different species. In the military field, information is obtained from radar sensors, text messages, chemical and biological sensors, acoustic sensors, optical warning, and many other sources. Many methodologies are used in the data integration process, from classical and Bayesian approaches to evidence-based expert systems. The implementation of data integration ranges from pure software design to a mixture of software and hardware. In this review we summarize the methodologies and implementations of the data fusion process, and illustrate in more detail the methodologies involved in three examples. We propose a unified multi-stage and multi-path mapping approach to the data fusion process, and point out future prospects and challenges.
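
    As one concrete example from the classical end of the fusion spectrum the review surveys, the hedged Python sketch below combines two independent Gaussian estimates of the same quantity by inverse-variance weighting (equivalent to a single Bayesian update with Gaussian likelihoods); the sensor numbers are invented.

        # Hypothetical sketch: inverse-variance (precision-weighted) fusion of two
        # independent estimates of the same quantity -- a classical building block of
        # data fusion. Numbers are invented.
        def fuse(mean_a, var_a, mean_b, var_b):
            """Fuse two Gaussian estimates; returns (fused_mean, fused_variance)."""
            w_a, w_b = 1.0 / var_a, 1.0 / var_b
            fused_var = 1.0 / (w_a + w_b)
            fused_mean = fused_var * (w_a * mean_a + w_b * mean_b)
            return fused_mean, fused_var

        # Noisier sensor (variance 9) and sharper sensor (variance 1) measuring range in metres:
        print(fuse(102.0, 9.0, 98.0, 1.0))   # fused estimate sits close to the sharper sensor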

  11. Rosen's (M,R) system in Unified Modelling Language.

    PubMed

    Zhang, Ling; Williams, Richard A; Gatherer, Derek

    2016-01-01

    Robert Rosen's (M,R) system is an abstract biological network architecture that is allegedly non-computable on a Turing machine. If (M,R) is truly non-computable, there are serious implications for the modelling of large biological networks in computer software. A body of work has now accumulated addressing Rosen's claim concerning (M,R) by attempting to instantiate it in various software systems. However, a conclusive refutation has remained elusive, principally since none of the attempts to date have unambiguously avoided the critique that they have altered the properties of (M,R) in the coding process, producing merely approximate simulations of (M,R) rather than true computational models. In this paper, we use the Unified Modelling Language (UML), a diagrammatic notation standard, to express (M,R) as a system of objects having attributes, functions and relations. We believe that this instantiates (M,R) in such a way that none of the original properties of the system are corrupted in the process. Crucially, we demonstrate that (M,R) as classically represented in the relational biology literature is implicitly a UML communication diagram. Furthermore, since UML is formally compatible with object-oriented computing languages, instantiation of (M,R) in UML strongly implies its computability in object-oriented coding languages. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  12. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments.

    PubMed

    Gorgolewski, Krzysztof J; Auer, Tibor; Calhoun, Vince D; Craddock, R Cameron; Das, Samir; Duff, Eugene P; Flandin, Guillaume; Ghosh, Satrajit S; Glatard, Tristan; Halchenko, Yaroslav O; Handwerker, Daniel A; Hanke, Michael; Keator, David; Li, Xiangrui; Michael, Zachary; Maumet, Camille; Nichols, B Nolan; Nichols, Thomas E; Pellman, John; Poline, Jean-Baptiste; Rokem, Ariel; Schaefer, Gunnar; Sochat, Vanessa; Triplett, William; Turner, Jessica A; Varoquaux, Gaël; Poldrack, Russell A

    2016-06-21

    The development of magnetic resonance imaging (MRI) techniques has defined modern neuroimaging. Since its inception, tens of thousands of studies using techniques such as functional MRI and diffusion weighted imaging have allowed for the non-invasive study of the brain. Despite the fact that MRI is routinely used to obtain data for neuroscience research, there has been no widely adopted standard for organizing and describing the data collected in an imaging experiment. This renders sharing and reusing data (within or between labs) difficult if not impossible and unnecessarily complicates the application of automatic pipelines and quality assurance protocols. To solve this problem, we have developed the Brain Imaging Data Structure (BIDS), a standard for organizing and describing MRI datasets. The BIDS standard uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations.
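
    As a hedged illustration of what the standard looks like on disk (a deliberately minimal layout; the BIDS specification defines many more entities and required metadata, and the version string below is illustrative), the Python sketch writes a skeleton dataset for one subject.

        # Simplified sketch of a minimal BIDS-style layout for one subject; consult
        # the BIDS specification for the authoritative naming and metadata rules.
        import json
        from pathlib import Path

        root = Path("my_bids_dataset")
        (root / "sub-01" / "anat").mkdir(parents=True, exist_ok=True)
        (root / "sub-01" / "func").mkdir(parents=True, exist_ok=True)

        # Required top-level description of the dataset.
        (root / "dataset_description.json").write_text(
            json.dumps({"Name": "Example dataset", "BIDSVersion": "1.8.0"}, indent=2)
        )

        # Imaging files follow a sub-<label>[_task-<label>]_<suffix> naming pattern.
        (root / "sub-01" / "anat" / "sub-01_T1w.nii.gz").touch()
        (root / "sub-01" / "func" / "sub-01_task-rest_bold.nii.gz").touch()

        print(sorted(str(p.relative_to(root)) for p in root.rglob("*") if p.is_file()))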

  13. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments

    PubMed Central

    Gorgolewski, Krzysztof J.; Auer, Tibor; Calhoun, Vince D.; Craddock, R. Cameron; Das, Samir; Duff, Eugene P.; Flandin, Guillaume; Ghosh, Satrajit S.; Glatard, Tristan; Halchenko, Yaroslav O.; Handwerker, Daniel A.; Hanke, Michael; Keator, David; Li, Xiangrui; Michael, Zachary; Maumet, Camille; Nichols, B. Nolan; Nichols, Thomas E.; Pellman, John; Poline, Jean-Baptiste; Rokem, Ariel; Schaefer, Gunnar; Sochat, Vanessa; Triplett, William; Turner, Jessica A.; Varoquaux, Gaël; Poldrack, Russell A.

    2016-01-01

    The development of magnetic resonance imaging (MRI) techniques has defined modern neuroimaging. Since its inception, tens of thousands of studies using techniques such as functional MRI and diffusion weighted imaging have allowed for the non-invasive study of the brain. Despite the fact that MRI is routinely used to obtain data for neuroscience research, there has been no widely adopted standard for organizing and describing the data collected in an imaging experiment. This renders sharing and reusing data (within or between labs) difficult if not impossible and unnecessarily complicates the application of automatic pipelines and quality assurance protocols. To solve this problem, we have developed the Brain Imaging Data Structure (BIDS), a standard for organizing and describing MRI datasets. The BIDS standard uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations. PMID:27326542

  14. A comprehensive strategy for designing a Web-based medical curriculum.

    PubMed Central

    Zucker, J.; Chase, H.; Molholt, P.; Bean, C.; Kahn, R. M.

    1996-01-01

    In preparing for a full featured online curriculum, it is necessary to develop scalable strategies for software design that will support the pedagogical goals of the curriculum and which will address the issues of acquisition and updating of materials, of robust content-based linking, and of integration of the online materials into other methods of learning. A complete online curriculum, as distinct from an individual computerized module, must provide dynamic updating of both content and structure and an easy pathway from the professor's notes to the finished online product. At the College of Physicians and Surgeons, we are developing such strategies, including a scripted text conversion process that uses the Hypertext Markup Language (HTML) as structural markup rather than as display markup, automated linking by the use of relational databases and the Unified Medical Language System (UMLS), and integration of text, images, and multimedia along with interface designs which promote multiple contexts and collaborative study. PMID:8947624
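
    As a hedged, toy illustration of automated linking driven by a concept lookup table (not the actual Columbia pipeline or the UMLS API; the terms and concept identifiers are invented), the Python sketch below wraps recognized terms in HTML links.

        # Hypothetical sketch: automatically turn recognised terms in curriculum text
        # into hyperlinks using a concept lookup table, in the spirit of linking
        # structural HTML markup against a terminology resource. IDs are invented.
        import re

        concept_table = {                       # term -> invented concept identifier
            "myocardial infarction": "C0000001",
            "aspirin": "C0000002",
        }

        def add_links(text, table):
            """Replace each known term with an HTML link to its concept page."""
            for term, cid in table.items():
                pattern = re.compile(re.escape(term), flags=re.IGNORECASE)
                text = pattern.sub(
                    lambda m, cid=cid: f'<a href="/concepts/{cid}">{m.group(0)}</a>', text
                )
            return text

        page = "Aspirin is commonly given after a myocardial infarction."
        print(add_links(page, concept_table))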

  15. Standardizing Exoplanet Analysis with the Exoplanet Characterization Tool Kit (ExoCTK)

    NASA Astrophysics Data System (ADS)

    Fowler, Julia; Stevenson, Kevin B.; Lewis, Nikole K.; Fraine, Jonathan D.; Pueyo, Laurent; Bruno, Giovanni; Filippazzo, Joe; Hill, Matthew; Batalha, Natasha; Wakeford, Hannah; Bushra, Rafia

    2018-06-01

    Exoplanet characterization depends critically on analysis tools, models, and spectral libraries that are constantly under development and have no single source nor sense of unified style or methods. The complexity of spectroscopic analysis and the initial time commitment required to become competitive is prohibitive to new researchers entering the field, as well as a remaining obstacle for established groups hoping to contribute in a comparable manner to their peers. As a solution, we are developing an open-source, modular data analysis package in Python and a publicly facing web interface including tools that address atmospheric characterization, transit observation planning with JWST, JWST coronagraphy simulations, limb darkening, forward modeling, and data reduction, as well as libraries of stellar, planet, and opacity models. The foundation of these software tools and libraries exists within pockets of the exoplanet community, but our project will gather these seedling tools and grow a robust, uniform, and well-maintained exoplanet characterization toolkit.

  16. The caBIG® Life Science Business Architecture Model.

    PubMed

    Boyd, Lauren Becnel; Hunicke-Smith, Scott P; Stafford, Grace A; Freund, Elaine T; Ehlman, Michele; Chandran, Uma; Dennis, Robert; Fernandez, Anna T; Goldstein, Stephen; Steffen, David; Tycko, Benjamin; Klemm, Juli D

    2011-05-15

    Business Architecture Models (BAMs) describe what a business does, who performs the activities, where and when activities are performed, how activities are accomplished and which data are present. The purpose of a BAM is to provide a common resource for understanding business functions and requirements and to guide software development. The cancer Biomedical Informatics Grid (caBIG®) Life Science BAM (LS BAM) provides a shared understanding of the vocabulary, goals and processes that are common in the business of LS research. LS BAM 1.1 includes 90 goals and 61 people and groups within Use Case and Activity Unified Modeling Language (UML) Diagrams. Here we report on the model's current release, LS BAM 1.1, its utility and usage, and plans for future use and continuing development for future releases. The LS BAM is freely available as UML, PDF and HTML (https://wiki.nci.nih.gov/x/OFNyAQ).

  17. Conducting requirements analyses for research using routinely collected health data: a model driven approach.

    PubMed

    de Lusignan, Simon; Cashman, Josephine; Poh, Norman; Michalakidis, Georgios; Mason, Aaron; Desombre, Terry; Krause, Paul

    2012-01-01

    Medical research increasingly requires the linkage of data from different sources. Conducting a requirements analysis for a new application is an established part of software engineering, but it is rarely reported in the biomedical literature, and no generic approaches have been published as to how to link heterogeneous health data. Literature review, followed by a consensus process to define how requirements for research using multiple data sources might be modeled. We have developed a requirements analysis, i-ScheDULEs. The first components of the modeling process are indexing and the creation of a rich picture of the research study. Secondly, we developed a series of reference models of progressive complexity: data flow diagrams (DFD) to define data requirements; unified modeling language (UML) use case diagrams to capture study-specific and governance requirements; and finally, business process models, using business process modeling notation (BPMN). These requirements and their associated models should become part of research study protocols.

  18. Multiaxial Creep-Fatigue and Creep-Ratcheting Failures of Grade 91 and Haynes 230 Alloys Toward Addressing Design Issues of Gen IV Nuclear Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hassan, Tasnim; Lissenden, Cliff; Carroll, Laura

    The proposed research will develop systematic sets of uniaxial and multiaxial experimental data at very high temperatures (850-950°C) for Alloy 617. The loading histories to be prescribed in the experiments will induce creep-fatigue and creep-ratcheting failure mechanisms. These experimental responses will be scrutinized in order to quantify the influences of temperature and creep on fatigue and ratcheting failures. A unified constitutive model (UCM) will be developed and validated against these experimental responses. The improved UCM will be incorporated into the widely used commercial finite element software package ANSYS. The modified ANSYS will be validated so that it can be used for evaluating the very high temperature ASME-NH design-by-analysis methodology for Alloy 617 and thereby addressing the ASME-NH design code issues.

  19. The Clinical Next-Generation Sequencing Database: A Tool for the Unified Management of Clinical Information and Genetic Variants to Accelerate Variant Pathogenicity Classification.

    PubMed

    Nishio, Shin-Ya; Usami, Shin-Ichi

    2017-03-01

    Recent advances in next-generation sequencing (NGS) have given rise to new challenges due to the difficulties in variant pathogenicity interpretation and large dataset management, including many kinds of public population databases as well as public or commercial disease-specific databases. Here, we report a new database development tool, named the "Clinical NGS Database," for improving clinical NGS workflow through the unified management of variant information and clinical information. This database software offers a two-feature approach to variant pathogenicity classification. The first of these approaches is a phenotype similarity-based approach. This database allows the easy comparison of the detailed phenotype of each patient with the average phenotype of the same gene mutation at the variant or gene level. It is also possible to browse patients with the same gene mutation quickly. The other approach is a statistical approach to variant pathogenicity classification based on the use of the odds ratio for comparisons between the case and the control for each inheritance mode (families with apparently autosomal dominant inheritance vs. control, and families with apparently autosomal recessive inheritance vs. control). A number of case studies are also presented to illustrate the utility of this database. © 2016 The Authors. Human Mutation published by Wiley Periodicals, Inc.
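
    As a hedged sketch of the statistical approach described, an odds ratio comparing variant carriers between case families and controls, the Python code below uses invented counts, a 0.5 continuity correction to avoid division by zero, and a Wald-type 95% confidence interval.

        # Hypothetical sketch: odds ratio for carrying a variant, cases vs. controls,
        # with a 0.5 continuity correction and a Wald 95% confidence interval.
        # Counts are invented for illustration.
        import math

        def odds_ratio(case_carriers, case_total, ctrl_carriers, ctrl_total):
            a = case_carriers + 0.5
            b = (case_total - case_carriers) + 0.5
            c = ctrl_carriers + 0.5
            d = (ctrl_total - ctrl_carriers) + 0.5
            or_ = (a * d) / (b * c)
            se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
            lo = math.exp(math.log(or_) - 1.96 * se)
            hi = math.exp(math.log(or_) + 1.96 * se)
            return or_, (lo, hi)

        # e.g. 12 of 200 autosomal-dominant families carry the variant vs. 3 of 1000 controls
        print(odds_ratio(12, 200, 3, 1000))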

  20. TopoCad - A unified system for geospatial data and services

    NASA Astrophysics Data System (ADS)

    Felus, Y. A.; Sagi, Y.; Regev, R.; Keinan, E.

    2013-10-01

    "E-government" has been a leading trend in public sector activities in recent years. The Survey of Israel has set a vision of providing all of its services and datasets online. The TopoCad system is the latest software tool developed in order to unify a number of services and databases into one on-line and user-friendly system. The TopoCad system is based on Web 1.0 technology; hence the customer is only a consumer of data. All data and services are accessible to surveyors and geo-information professionals in an easy and comfortable way. The future lies in Web 2.0 and Web 3.0 technologies, through which professionals can upload their own data for quality control and future assimilation with the national database. A key issue in the development of this complex system was to implement a simple and comfortable user experience (UX). The user interface employs a natural language dialog box in order to understand the user requirements. The system then links spatial data with alpha-numeric data in a seamless manner. The operation of TopoCad requires no user guide or training; it is intuitive and self-explanatory. The system utilizes semantic engines and machine-understanding technologies to link records from diverse databases in a meaningful way. Thus, the next generation of TopoCad will include five main modules: users and projects information, coordinate transformations and calculation services, geospatial data quality control, links to governmental systems and databases, and smart forms and applications. The article describes the first stage of the TopoCad system and gives an overview of its future development.

  1. Development and application of unified algorithms for problems in computational science

    NASA Technical Reports Server (NTRS)

    Shankar, Vijaya; Chakravarthy, Sukumar

    1987-01-01

    A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm is one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.

  2. Establishing a Novel Modeling Tool: A Python-Based Interface for a Neuromorphic Hardware System

    PubMed Central

    Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz

    2008-01-01

    Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated. PMID:19562085

  3. Establishing a novel modeling tool: a python-based interface for a neuromorphic hardware system.

    PubMed

    Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz

    2009-01-01

    Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated.

  4. The Development of Cadastral Domain Model Oriented at Unified Real Estate Registration of China Based on Ontology

    NASA Astrophysics Data System (ADS)

    Li, M.; Zhu, X.; Shen, C.; Chen, D.; Guo, W.

    2012-07-01

    With unified real estate registration stipulated by the Property Law and the step-by-step advance of coordinated urban and rural development in China, clearly specifying property rights and their relations is the premise and foundation for promoting the integrated management of urban and rural land. This paper aims at developing a cadastral domain model oriented at unified real estate registration of China from legal and spatial perspectives, which sets up the foundation for unified real estate registration and facilitates the effective interchange of cadastral information and the administration of land use. The legal cadastral model is provided based on an analysis of the gap between the current model and the demands of unified real estate registration, which implies the restrictions between different rights. Then the new cadastral domain model is constructed based on the legal cadastral domain model and CCDM (van Oosterom et al., 2006), which integrates real estate rights of urban and rural land. Finally, the model is validated by a prototype system. The results show that the model is applicable for unified real estate registration in China.

  5. Efficient particle-in-cell simulation of auroral plasma phenomena using a CUDA enabled graphics processing unit

    NASA Astrophysics Data System (ADS)

    Sewell, Stephen

    This thesis introduces a software framework that effectively utilizes low-cost commercially available Graphic Processing Units (GPUs) to simulate complex scientific plasma phenomena that are modeled using the Particle-In-Cell (PIC) paradigm. The software framework that was developed conforms to the Compute Unified Device Architecture (CUDA), a standard for general purpose graphic processing that was introduced by NVIDIA Corporation. This framework has been verified for correctness and applied to advance the state of understanding of the electromagnetic aspects of the development of the Aurora Borealis and Aurora Australis. For each phase of the PIC methodology, this research has identified one or more methods to exploit the problem's natural parallelism and effectively map it for execution on the graphic processing unit and its host processor. The sources of overhead that can reduce the effectiveness of parallelization for each of these methods have also been identified. One of the novel aspects of this research was the utilization of particle sorting during the grid interpolation phase. The final representation resulted in simulations that executed about 38 times faster than simulations that were run on a single-core general-purpose processing system. The scalability of this framework to larger problem sizes and future generation systems has also been investigated.
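
    As a rough illustration of the particle-sorting idea described above, the NumPy sketch below (not the thesis's CUDA framework) orders particles by grid-cell index before a nearest-grid-point deposition, which is the memory-coherence property the GPU kernels exploit; all array sizes and values are arbitrary.

```python
# Illustrative sketch of particle sorting before the deposition/interpolation
# phase of a PIC step; plain NumPy, not the GPU implementation.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_particles = 64, 100_000
x = rng.uniform(0.0, 1.0, n_particles)          # particle positions in [0, 1)
q = np.full(n_particles, 1.0 / n_particles)     # equal particle charges

cell = np.minimum((x * n_cells).astype(np.int64), n_cells - 1)
order = np.argsort(cell)                        # sort particles by cell index
x_sorted, q_sorted, cell_sorted = x[order], q[order], cell[order]

# Nearest-grid-point deposition; with sorted particles each cell's
# contributions are contiguous in memory.
rho = np.zeros(n_cells)
np.add.at(rho, cell_sorted, q_sorted)
print(rho.sum())  # total deposited charge, ~1.0
```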

  6. The Research and Implementation of MUSER CLEAN Algorithm Based on OpenCL

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Chen, K.; Deng, H.; Wang, F.; Mei, Y.; Wei, S. L.; Dai, W.; Yang, Q. P.; Liu, Y. B.; Wu, J. P.

    2017-03-01

    There is an urgent need to carry out high-performance data processing on a single machine in the development of astronomical software. However, due to differences in machine configuration, traditional programming techniques such as multi-threading and CUDA (Compute Unified Device Architecture)+GPU (Graphic Processing Unit) have obvious limitations in portability and seamlessness between different operating systems. The OpenCL (Open Computing Language) used in the development of the MUSER (MingantU SpEctral Radioheliograph) data processing system is introduced. The Högbom CLEAN algorithm is re-implemented as a parallel CLEAN algorithm using the Python language and the PyOpenCL extension package. The experimental results show that the CLEAN algorithm based on OpenCL has approximately equal operating efficiency compared with the former CLEAN algorithm based on CUDA. More importantly, data processing in a CPU-only (Central Processing Unit) environment can also achieve high performance, which solves the problem of the environmental dependence of CUDA+GPU. Overall, the research improves the adaptability of the system with emphasis on the performance of MUSER image clean computing. In the meanwhile, the realization of OpenCL in MUSER proves its availability in scientific data processing. In view of the high-performance computing features of OpenCL in heterogeneous environments, it will probably become a preferred technology in future high-performance astronomical software development.
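
    For reference, a serial NumPy sketch of the classical Högbom CLEAN loop is given below; it is not the MUSER PyOpenCL implementation, but it shows the peak search and scaled point-spread-function subtraction that the OpenCL kernels parallelize. The gain, threshold and iteration limit are illustrative defaults.

```python
# Serial Högbom CLEAN sketch in NumPy (illustration only).
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, threshold=1e-3, max_iter=500):
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    cy, cx = np.array(psf.shape) // 2            # PSF centre
    for _ in range(max_iter):
        py, px = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[py, px]
        if abs(peak) < threshold:
            break
        model[py, px] += gain * peak
        # Subtract the shifted, scaled PSF from the residual image.
        y0, x0 = py - cy, px - cx
        for j in range(psf.shape[0]):
            for i in range(psf.shape[1]):
                y, x = y0 + j, x0 + i
                if 0 <= y < residual.shape[0] and 0 <= x < residual.shape[1]:
                    residual[y, x] -= gain * peak * psf[j, i]
    return model, residual
```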

  7. Development of a Heterogenic Distributed Environment for Spatial Data Processing Using Cloud Technologies

    NASA Astrophysics Data System (ADS)

    Garov, A. S.; Karachevtseva, I. P.; Matveev, E. V.; Zubarev, A. E.; Florinsky, I. V.

    2016-06-01

    We are developing a unified distributed communication environment for the processing of spatial data which integrates web, desktop and mobile platforms and combines a volunteer computing model with public cloud capabilities. The main idea is to create a flexible working environment for research groups, which may be scaled according to the required data volume and computing power, while keeping infrastructure costs at a minimum. It is based upon the "single window" principle, which combines data access via geoportal functionality, processing possibilities and communication between researchers. Using this innovative software environment, the recently developed planetary information system (http://cartsrv.mexlab.ru/geoportal) will be updated. The new system will provide spatial data processing, analysis and 3D-visualization and will be tested on freely available Earth remote sensing data as well as Solar system planetary images from various missions. Based on this approach it will be possible to organize the research and the representation of results on a new technological level, which provides more possibilities for the immediate and direct reuse of research materials, including data, algorithms, methodology, and components. The new software environment is targeted at remote scientific teams, and will provide access to existing spatially distributed information, for which we suggest implementation of a user interface as an advanced front-end, e.g., for a virtual globe system.

  8. The number of reduced alignments between two DNA sequences

    PubMed Central

    2014-01-01

    Background In this study we consider DNA sequences as mathematical strings. Total and reduced alignments between two DNA sequences have been considered in the literature to measure their similarity. Results for explicit representations of some alignments have been already obtained. Results We present exact, explicit and computable formulas for the number of different possible alignments between two DNA sequences and a new formula for a class of reduced alignments. Conclusions A unified approach for a wide class of alignments between two DNA sequences has been provided. The formula is computable and, if complemented by software development, will provide a deeper insight into the theory of sequence alignment and give rise to new comparison methods. AMS Subject Classification Primary 92B05, 33C20, secondary 39A14, 65Q30 PMID:24684679
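
    As a small worked complement to the counting results above, the dynamic-programming sketch below evaluates the standard recurrence for the number of total alignments between sequences of lengths m and n (each alignment column being a substitution, insertion or deletion); the paper's explicit closed-form formulas and the reduced-alignment counts are not reproduced here.

```python
# Count of total alignments between sequences of lengths m and n via the
# classical three-way recurrence (illustrative only).
from functools import lru_cache

@lru_cache(maxsize=None)
def total_alignments(m: int, n: int) -> int:
    if m == 0 or n == 0:
        return 1
    # the last column is either a substitution, a deletion, or an insertion
    return (total_alignments(m - 1, n - 1)
            + total_alignments(m - 1, n)
            + total_alignments(m, n - 1))

print(total_alignments(3, 3))    # 63
print(total_alignments(10, 10))  # grows very quickly with sequence length
```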

  9. The caCORE Software Development Kit: Streamlining construction of interoperable biomedical information services

    PubMed Central

    Phillips, Joshua; Chilukuri, Ram; Fragoso, Gilberto; Warzel, Denise; Covitz, Peter A

    2006-01-01

    Background Robust, programmatically accessible biomedical information services that syntactically and semantically interoperate with other resources are challenging to construct. Such systems require the adoption of common information models, data representations and terminology standards as well as documented application programming interfaces (APIs). The National Cancer Institute (NCI) developed the cancer common ontologic representation environment (caCORE) to provide the infrastructure necessary to achieve interoperability across the systems it develops or sponsors. The caCORE Software Development Kit (SDK) was designed to provide developers both within and outside the NCI with the tools needed to construct such interoperable software systems. Results The caCORE SDK requires a Unified Modeling Language (UML) tool to begin the development workflow with the construction of a domain information model in the form of a UML Class Diagram. Models are annotated with concepts and definitions from a description logic terminology source using the Semantic Connector component. The annotated model is registered in the Cancer Data Standards Repository (caDSR) using the UML Loader component. System software is automatically generated using the Codegen component, which produces middleware that runs on an application server. The caCORE SDK was initially tested and validated using a seven-class UML model, and has been used to generate the caCORE production system, which includes models with dozens of classes. The deployed system supports access through object-oriented APIs with consistent syntax for retrieval of any type of data object across all classes in the original UML model. The caCORE SDK is currently being used by several development teams, including by participants in the cancer biomedical informatics grid (caBIG) program, to create compatible data services. caBIG compatibility standards are based upon caCORE resources, and thus the caCORE SDK has emerged as a key enabling technology for caBIG. Conclusion The caCORE SDK substantially lowers the barrier to implementing systems that are syntactically and semantically interoperable by providing workflow and automation tools that standardize and expedite modeling, development, and deployment. It has gained acceptance among developers in the caBIG program, and is expected to provide a common mechanism for creating data service nodes on the data grid that is under development. PMID:16398930

  10. An Analysis of SE and MBSE Concepts to Support Defence Capability Acquisition

    DTIC Science & Technology

    2014-09-01

    Government Department of Finance and Deregulation, Canberra, ACT, August 2011. [online] URL: http://agimo.gov.au/files/2012/04/AGA_RM_v3_0.pdf ANSI...First Time, White Paper, Aberdeen Group, August 2011. [online] URL: http://www.aberdeen.com/Aberdeen- Library/7121/RA-system-design...Edge e-zine, IBM Software Group, August 2003. Cantor 2003b Cantor, Murray, Rational Unified Process for Systems Engineering Part II: System

  11. Report of the Defense Science Board Task Force on Military Software

    DTIC Science & Technology

    1987-09-01

    training commitment from others. (The same thing is true of processor architectures.) 3. DoD should be aggressively looking for opportunities to buy...are unifying principles to be found, whether in quarks or in unified field theories. Einstein repeatedly argued that there must eventually be

  12. USSR and Eastern Europe Scientific Abstracts, Cybernetics, Computers, and Automation Technology, Number 26

    DTIC Science & Technology

    1977-01-26

    Sisteme Matematicheskogo Obespecheniya YeS EVM [Applied Programs in the Software System for the Unified System of Computers], by A. Ye. Fateyev, A. I...computerized systems are most effective in large production complexes, in which the level of utilization of computers can be as high as 500,000...performance of these tasks could be furthered by the complex introduction of electronic computers in automated control systems. The creation of ASU

  13. Integrated Design and Implementation of Embedded Control Systems with Scilab

    PubMed Central

    Ma, Longhua; Xia, Feng; Peng, Zhe

    2008-01-01

    Embedded systems are playing an increasingly important role in control engineering. Despite their popularity, embedded systems are generally subject to resource constraints and it is therefore difficult to build complex control systems on embedded platforms. Traditionally, the design and implementation of control systems are often separated, which causes the development of embedded control systems to be highly time-consuming and costly. To address these problems, this paper presents a low-cost, reusable, reconfigurable platform that enables integrated design and implementation of embedded control systems. To minimize the cost, free and open source software packages such as Linux and Scilab are used. Scilab is ported to the embedded ARM-Linux system. The drivers for interfacing Scilab with several communication protocols including serial, Ethernet, and Modbus are developed. Experiments are conducted to test the developed embedded platform. The use of Scilab enables implementation of complex control algorithms on embedded platforms. With the developed platform, it is possible to perform all phases of the development cycle of embedded control systems in a unified environment, thus facilitating the reduction of development time and cost. PMID:27873827

  14. Integrated Design and Implementation of Embedded Control Systems with Scilab.

    PubMed

    Ma, Longhua; Xia, Feng; Peng, Zhe

    2008-09-05

    Embedded systems are playing an increasingly important role in control engineering. Despite their popularity, embedded systems are generally subject to resource constraints and it is therefore difficult to build complex control systems on embedded platforms. Traditionally, the design and implementation of control systems are often separated, which causes the development of embedded control systems to be highly time-consuming and costly. To address these problems, this paper presents a low-cost, reusable, reconfigurable platform that enables integrated design and implementation of embedded control systems. To minimize the cost, free and open source software packages such as Linux and Scilab are used. Scilab is ported to the embedded ARM-Linux system. The drivers for interfacing Scilab with several communication protocols including serial, Ethernet, and Modbus are developed. Experiments are conducted to test the developed embedded platform. The use of Scilab enables implementation of complex control algorithms on embedded platforms. With the developed platform, it is possible to perform all phases of the development cycle of embedded control systems in a unified environment, thus facilitating the reduction of development time and cost.
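
    As a language-neutral illustration of the kind of control law such an embedded platform executes periodically, the sketch below implements a generic discrete PID update in plain Python (it is not the Scilab/ARM-Linux code from the paper); the gains, the sampling period and the toy first-order plant are assumed values.

```python
# Generic discrete PID loop of the kind an embedded control platform runs
# periodically; illustrative sketch only.
def pid_step(setpoint, measurement, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
    """One controller update; `state` carries the integral and previous error."""
    error = setpoint - measurement
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

# Toy first-order plant simulated in place of real sensor/actuator I/O.
state = {"integral": 0.0, "prev_error": 0.0}
y = 0.0
for _ in range(500):
    u = pid_step(setpoint=1.0, measurement=y, state=state)
    y += (u - y) * 0.01          # hypothetical plant dynamics
print(round(y, 3))               # approaches the setpoint of 1.0
```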

  15. Introducing Python tools for magnetotellurics: MTpy

    NASA Astrophysics Data System (ADS)

    Krieger, L.; Peacock, J.; Inverarity, K.; Thiel, S.; Robertson, K.

    2013-12-01

    Within the framework of geophysical exploration techniques, the magnetotelluric method (MT) is relatively immature: it is still not as widely spread as other geophysical methods like seismology, and its processing schemes and data formats are not thoroughly standardized. As a result, the file handling and processing software within the academic community is mainly based on a loose collection of codes, which are sometimes highly adapted to the respective local specifications. Although tools for the estimation of the frequency-dependent MT transfer function, as well as inversion and modelling codes, are available, the standards and software for handling MT data are generally not unified throughout the community. To overcome problems that arise from missing standards, and to simplify the general handling of MT data, we have developed the software package "MTpy", which allows the handling, processing, and imaging of magnetotelluric data sets. It is written in Python and the code is open-source. The setup of this package follows the modular approach of successful software packages like GMT or ObsPy. It contains sub-packages and modules for various tasks within the standard MT data processing and handling scheme. Besides pure Python classes and functions, MTpy provides wrappers and convenience scripts to call external software, e.g. modelling and inversion codes. Even though still under development, MTpy already contains ca. 250 functions that work on raw and preprocessed data. However, as our aim is not to produce a static collection of software, we rather introduce MTpy as a flexible framework, which will be dynamically extended in the future. It then has the potential to help standardise processing procedures and at the same time be a versatile supplement for existing algorithms. We introduce the concept and structure of MTpy, and we illustrate the workflow of MT data processing utilising MTpy on an example data set collected over a geothermal exploration site in South Australia. Figure caption: workflow of MT data processing. Within the structural diagram, the MTpy sub-packages are shown in red (time series data processing), green (handling of EDI files and impedance tensor data), yellow (connection to modelling/inversion algorithms), black (impedance tensor interpretation, e.g. by phase tensor calculations), and blue (generation of visual representations, e.g. pseudo sections or resistivity models).
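
    As a minimal, library-independent complement to the MTpy description above, the sketch below computes apparent resistivity and phase from a single impedance-tensor element using the standard relation rho_a = |Z|^2 / (omega * mu_0); it deliberately does not use the MTpy API, and the synthetic impedance values are placeholders.

```python
# Apparent resistivity and phase from an impedance element (SI units assumed);
# generic illustration, not the MTpy API.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def apparent_resistivity(z, frequency):
    """z: complex impedance element (e.g. Z_xy) in SI units; frequency in Hz."""
    omega = 2.0 * np.pi * frequency
    rho_a = np.abs(z) ** 2 / (omega * MU0)   # [Ohm m]
    phase = np.degrees(np.angle(z))          # [deg]
    return rho_a, phase

freqs = np.logspace(-3, 3, 7)                # 1 mHz .. 1 kHz
omega = 2.0 * np.pi * freqs
z_xy = (1 + 1j) * 1e-3 * np.sqrt(omega)      # synthetic half-space-like impedance
rho, phi = apparent_resistivity(z_xy, freqs)
print(np.round(rho, 2), np.round(phi, 1))    # ~constant rho_a, 45 deg phase
```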

  16. Next Generation Community Based Unified Global Modeling System Development and Operational Implementation Strategies at NCEP

    NASA Astrophysics Data System (ADS)

    Tallapragada, V.

    2017-12-01

    NOAA's Next Generation Global Prediction System (NGGPS) has provided the unique opportunity to develop and implement a non-hydrostatic global model based on Geophysical Fluid Dynamics Laboratory (GFDL) Finite Volume Cubed Sphere (FV3) Dynamic Core at National Centers for Environmental Prediction (NCEP), making a leap-step advancement in seamless prediction capabilities across all spatial and temporal scales. Model development efforts are centralized with unified model development in the NOAA Environmental Modeling System (NEMS) infrastructure based on Earth System Modeling Framework (ESMF). A more sophisticated coupling among various earth system components is being enabled within NEMS following National Unified Operational Prediction Capability (NUOPC) standards. The eventual goal of unifying global and regional models will enable operational global models operating at convective resolving scales. Apart from the advanced non-hydrostatic dynamic core and coupling to various earth system components, advanced physics and data assimilation techniques are essential for improved forecast skill. NGGPS is spearheading ambitious physics and data assimilation strategies, concentrating on creation of a Common Community Physics Package (CCPP) and Joint Effort for Data Assimilation Integration (JEDI). Both initiatives are expected to be community developed, with emphasis on research transitioning to operations (R2O). The unified modeling system is being built to support the needs of both operations and research. Different layers of community partners are also established with specific roles/responsibilities for researchers, core development partners, trusted super-users, and operations. Stakeholders are engaged at all stages to help drive the direction of development, resources allocations and prioritization. This talk presents the current and future plans of unified model development at NCEP for weather, sub-seasonal, and seasonal climate prediction applications with special emphasis on implementation of NCEP FV3 Global Forecast System (GFS) and Global Ensemble Forecast System (GEFS) into operations by 2019.

  17. Omics Metadata Management Software (OMMS).

    PubMed

    Perez-Arriaga, Martha O; Wilson, Susan; Williams, Kelly P; Schoeniger, Joseph; Waymire, Russel L; Powell, Amy Jo

    2015-01-01

    Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation and sequence production methods required for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands, of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and to suggest possible methodological road maps for prospective users. The provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically-dispersed projects. The OMMS was developed using an open-source software base, is flexible and extensible, and is easily installed and executed. The OMMS can be obtained at http://omms.sandia.gov.

  18. Next Generation Image-Based Phenotyping of Root System Architecture

    NASA Astrophysics Data System (ADS)

    Davis, T. W.; Shaw, N. M.; Cheng, H.; Larson, B. G.; Craft, E. J.; Shaff, J. E.; Schneider, D. J.; Piñeros, M. A.; Kochian, L. V.

    2016-12-01

    The development of the Plant Root Imaging and Data Acquisition (PRIDA) hardware/software system enables researchers to collect digital images, along with all the relevant experimental details, of a range of hydroponically grown agricultural crop roots for 2D and 3D trait analysis. Previous efforts of image-based root phenotyping focused on young cereals, such as rice; however, there is a growing need to measure both older and larger root systems, such as those of maize and sorghum, to improve our understanding of the underlying genetics that control favorable rooting traits for plant breeding programs to combat the agricultural risks presented by climate change. Therefore, a larger imaging apparatus has been prototyped for capturing 3D root architecture with an adaptive control system and innovative plant root growth media that retains three-dimensional root architectural features. New publicly available multi-platform software has been released with considerations for both high throughput (e.g., 3D imaging of a single root system in under ten minutes) and high portability (e.g., support for the Raspberry Pi computer). The software features unified data collection, management, exploration and preservation for continued trait and genetics analysis of root system architecture. The new system makes data acquisition efficient and includes features that address the needs of researchers and technicians, such as reduced imaging time, semi-automated camera calibration with uncertainty characterization, and safe storage of the critical experimental data.

  19. Omics Metadata Management Software (OMMS)

    PubMed Central

    Perez-Arriaga, Martha O; Wilson, Susan; Williams, Kelly P; Schoeniger, Joseph; Waymire, Russel L; Powell, Amy Jo

    2015-01-01

    Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation and sequence production methods required for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands, of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and to suggest possible methodological road maps for prospective users. The provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically-dispersed projects. The OMMS was developed using an open-source software base, is flexible and extensible, and is easily installed and executed. Availability: The OMMS can be obtained at http://omms.sandia.gov. PMID:26124554

  20. Knowledge Synthesis with Maps of Neural Connectivity

    PubMed Central

    Tallis, Marcelo; Thompson, Richard; Russ, Thomas A.; Burns, Gully A. P. C.

    2011-01-01

    This paper describes software for neuroanatomical knowledge synthesis based on neural connectivity data. This software supports a mature methodology developed since the early 1990s. Over this time, the Swanson laboratory at USC has generated an account of the neural connectivity of the sub-structures of the hypothalamus, amygdala, septum, hippocampus, and bed nucleus of the stria terminalis. This is based on neuroanatomical data maps drawn into a standard brain atlas by experts. In earlier work, we presented an application for visualizing and comparing anatomical macro connections using the Swanson third edition atlas as a framework for accurate registration. Here we describe major improvements to the NeuARt application based on the incorporation of a knowledge representation of experimental design. We also present improvements in the interface and features of the data mapping components within a unified web-application. As a step toward developing an accurate sub-regional account of neural connectivity, we provide navigational access between the data maps and a semantic representation of the area-to-area connections that they support. We do so based on the "Knowledge Engineering from Experimental Design" (KEfED) approach, a model that is based on experimental variables. We have extended the underlying KEfED representation of tract-tracing experiments by incorporating the definition of a neuroanatomical data map as a measurement variable in the study design. This paper describes the software design of a web-application that allows anatomical data sets to be described within a standard experimental context and thus indexed by non-spatial experimental design features. PMID:22053155

  1. Unified Database Development Program. Final Report.

    ERIC Educational Resources Information Center

    Thomas, Everett L., Jr.; Deem, Robert N.

    The objective of the unified database (UDB) program was to develop an automated information system that would be useful in the design, development, testing, and support of new Air Force aircraft weapon systems. Primary emphasis was on the development of: (1) a historical logistics data repository system to provide convenient and timely access to…

  2. A unifying framework for systems modeling, control systems design, and system operation

    NASA Technical Reports Server (NTRS)

    Dvorak, Daniel L.; Indictor, Mark B.; Ingham, Michel D.; Rasmussen, Robert D.; Stringfellow, Margaret V.

    2005-01-01

    Current engineering practice in the analysis and design of large-scale multi-disciplinary control systems is typified by some form of decomposition- whether functional or physical or discipline-based-that enables multiple teams to work in parallel and in relative isolation. Too often, the resulting system after integration is an awkward marriage of different control and data mechanisms with poor end-to-end accountability. System of systems engineering, which faces this problem on a large scale, cries out for a unifying framework to guide analysis, design, and operation. This paper describes such a framework based on a state-, model-, and goal-based architecture for semi-autonomous control systems that guides analysis and modeling, shapes control system software design, and directly specifies operational intent. This paper illustrates the key concepts in the context of a large-scale, concurrent, globally distributed system of systems: NASA's proposed Array-based Deep Space Network.

  3. Semantically enabled image similarity search

    NASA Astrophysics Data System (ADS)

    Casterline, May V.; Emerick, Timothy; Sadeghi, Kolia; Gosse, C. A.; Bartlett, Brent; Casey, Jason

    2015-05-01

    Georeferenced data of various modalities are increasingly available for intelligence and commercial use; however, effectively exploiting these sources demands a unified data space capable of capturing the unique contribution of each input. This work presents a suite of software tools for representing geospatial vector data and overhead imagery in a shared high-dimensional vector or "embedding" space that supports fused learning and similarity search across dissimilar modalities. While the approach is suitable for fusing arbitrary input types, including free text, the present work exploits the obvious but computationally difficult relationship between GIS and overhead imagery. GIS provides temporally-smoothed but information-limited content, while overhead imagery provides an information-rich but temporally-limited perspective. This processing framework includes some important extensions of concepts in the literature but, more critically, presents a means to accomplish them as a unified framework at scale on commodity cloud architectures.
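
    A toy sketch of the similarity-search step in such an embedding space is given below; it uses random vectors in place of real GIS/imagery embeddings and plain NumPy cosine similarity, so it illustrates the retrieval mechanics only, not the fusion models described in the paper.

```python
# Cosine-similarity search over a shared embedding space (toy data).
import numpy as np

rng = np.random.default_rng(1)
gallery = rng.normal(size=(10_000, 256))               # stand-in embedded tiles
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def most_similar(query, k=5):
    """Return indices of the k gallery items most similar to the query."""
    q = query / np.linalg.norm(query)
    scores = gallery @ q                               # cosine similarity
    top = np.argpartition(-scores, k)[:k]
    return top[np.argsort(-scores[top])], scores

idx, scores = most_similar(rng.normal(size=256))
print(idx, np.round(scores[idx], 3))
```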

  4. Toward a unified approach to dose-response modeling in ecotoxicology.

    PubMed

    Ritz, Christian

    2010-01-01

    This study reviews dose-response models that are used in ecotoxicology. The focus lies on clarification of differences and similarities between models, and as a side effect, their different guises in ecotoxicology are unravelled. A look at frequently used dose-response models reveals major discrepancies, among other things in naming conventions. Therefore, there is a need for a unified view on dose-response modeling in order to improve the understanding of it and to facilitate communication and comparison of findings across studies, thus realizing its full potential. This study attempts to establish a general framework that encompasses most dose-response models that are of interest to ecotoxicologists in practice. The framework includes commonly used models such as the log-logistic and Weibull models, but also features entire suites of models as found in various guidance documents. An outline on how the proposed framework can be implemented in statistical software systems is also provided.
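
    As an illustration of one member of the framework discussed above, the sketch below defines the common four-parameter log-logistic dose-response curve (slope b, lower limit c, upper limit d, and ED50 e) and fits it to invented dose-response data with SciPy; it is a generic example, not code from the review.

```python
# Four-parameter log-logistic dose-response model fitted to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def ll4(x, b, c, d, e):
    """Log-logistic curve: c + (d - c) / (1 + exp(b * (log(x) - log(e))))."""
    return c + (d - c) / (1.0 + np.exp(b * (np.log(x) - np.log(e))))

# Assumed dose-response observations, for illustration only.
dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = np.array([0.98, 0.95, 0.84, 0.52, 0.21, 0.08, 0.03])

popt, _ = curve_fit(ll4, dose, resp, p0=[1.0, 0.0, 1.0, 3.0])
print(dict(zip("bcde", np.round(popt, 3))))   # e ~ estimated ED50
```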

  5. Studying groundwater and surface water interactions using airborne remote sensing in Heihe River basin, northwest China

    NASA Astrophysics Data System (ADS)

    Liu, C.; Liu, J.; Hu, Y.; Zheng, C.

    2015-05-01

    Managing surface water and groundwater as a unified system is important for water resource exploitation and aquatic ecosystem conservation. The unified approach to water management needs accurate characterization of surface water and groundwater interactions. Temperature is a natural tracer for identifying surface water and groundwater interactions, and the use of remote sensing techniques facilitates basin-scale temperature measurement. This study focuses on the Heihe River basin, the second largest inland river basin in the arid and semi-arid northwest of China where surface water and groundwater undergoes dynamic exchanges. The spatially continuous river-surface temperature of the midstream section of the Heihe River was obtained by using an airborne pushbroom hyperspectral thermal sensor system. By using the hot spot analysis toolkit in the ArcGIS software, abnormally cold water zones were identified as indicators of the spatial pattern of groundwater discharge to the river.

  6. A novel AIDS/HIV intelligent medical consulting system based on expert systems.

    PubMed

    Ebrahimi, Alireza Pour; Toloui Ashlaghi, Abbas; Mahdavy Rad, Maryam

    2013-01-01

    The purpose of this paper is to propose a novel intelligent model for AIDS/HIV data based on an expert system and to use it for developing an intelligent medical consulting system for AIDS/HIV. In this descriptive research, 752 frequently asked questions (FAQs) about AIDS/HIV were gathered from numerous websites about this disease. To perform the data mining and extract the intelligent model, the six stages of the CRISP method were completed for the FAQs: business understanding, data understanding, data preparation, modelling, evaluation and deployment. The C5.0 tree classification algorithm was used for modelling. Also, the rational unified process (RUP) was used to develop the web-based medical consulting software, with the following stages: inception, elaboration, construction and transition. The developed intelligent model has been used in the infrastructure of the software; based on the client's inquiry and keywords, related FAQs are displayed to the client according to their rank. FAQ ranks are gradually refined according to how often clients read them. Based on the displayed FAQs, test and entertainment links are also displayed. The accuracy of the AIDS/HIV intelligent web-based medical consulting system is estimated to be 78.76%. AIDS/HIV medical consulting systems have been developed using an intelligent infrastructure. Being equipped with an intelligent model, providing consulting services on systematic textual data and providing side services based on the client's activities make the implemented system unique. The research has been approved by the Iranian Ministry of Health and Medical Education for practical use.
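
    The classification step can be illustrated with a hedged stand-in: the sketch below uses scikit-learn's decision tree (instead of the C5.0 algorithm used in the paper) over TF-IDF features; the FAQ texts and category labels are invented placeholders.

```python
# Text classification sketch: decision tree over TF-IDF features as a
# stand-in for the paper's C5.0 model (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

faqs = [
    "How is HIV transmitted?",
    "What are the early symptoms of HIV infection?",
    "Where can I get an HIV test?",
    "How effective are antiretroviral drugs?",
]
categories = ["transmission", "symptoms", "testing", "treatment"]

model = make_pipeline(TfidfVectorizer(), DecisionTreeClassifier(random_state=0))
model.fit(faqs, categories)
print(model.predict(["Which symptoms appear first after infection?"]))
```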

  7. Experimental demonstration of an OpenFlow based software-defined optical network employing packet, fixed and flexible DWDM grid technologies on an international multi-domain testbed.

    PubMed

    Channegowda, M; Nejabati, R; Rashidi Fard, M; Peng, S; Amaya, N; Zervas, G; Simeonidou, D; Vilalta, R; Casellas, R; Martínez, R; Muñoz, R; Liu, L; Tsuritani, T; Morita, I; Autenrieth, A; Elbers, J P; Kostecki, P; Kaczmarek, P

    2013-03-11

    Software defined networking (SDN) and flexible grid optical transport technology are two key technologies that allow network operators to customize their infrastructure based on application requirements and therefore minimizing the extra capital and operational costs required for hosting new applications. In this paper, for the first time we report on design, implementation & demonstration of a novel OpenFlow based SDN unified control plane allowing seamless operation across heterogeneous state-of-the-art optical and packet transport domains. We verify and experimentally evaluate OpenFlow protocol extensions for flexible DWDM grid transport technology along with its integration with fixed DWDM grid and layer-2 packet switching.

  8. Software for Better Documentation of Other Software

    NASA Technical Reports Server (NTRS)

    Pinedo, John

    2003-01-01

    The Literate Programming Extraction Engine is a Practical Extraction and Reporting Language- (PERL-)based computer program that facilitates and simplifies the implementation of a concept of self-documented literate programming in a fashion tailored to the typical needs of scientists. The advantage for the programmer is that documentation and source code are written side-by-side in the same file, reducing the likelihood that the documentation will be inconsistent with the code and improving the verification that the code performs its intended functions. The advantage for the user is the knowledge that the documentation matches the software because they come from the same file. This program unifies the documentation process for a variety of programming languages, including C, C++, and several versions of FORTRAN. This program can process the documentation in any markup language, and incorporates the LaTeX typesetting software. The program includes sample Makefile scripts for automating both the code-compilation (when appropriate) and documentation-generation processes into a single command-line statement. Also included are macro instructions for the Emacs display-editor software, making it easy for a programmer to toggle between editing in a code or a documentation mode.
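
    The general extraction idea, documentation kept next to the code it describes and pulled out by a small tool, can be sketched as follows; this is a generic Python illustration, not the PERL-based engine described above, and the %doc/%enddoc markers are invented.

```python
# Toy extraction of documentation blocks embedded alongside source code.
import re
import sys

DOC_BLOCK = re.compile(r"%doc\s*(.*?)%enddoc", re.DOTALL)

def extract_docs(source_text: str) -> str:
    """Concatenate all documentation blocks found in the source file."""
    return "\n\n".join(block.strip() for block in DOC_BLOCK.findall(source_text))

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as fh:
        print(extract_docs(fh.read()))
```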

  9. Expanding Access to NCAR's Digital Assets: Towards a Unified Scientific Data Management System

    NASA Astrophysics Data System (ADS)

    Stott, D.

    2016-12-01

    In 2014 the National Center for Atmospheric Research (NCAR) Directorate created the Data Stewardship Engineering Team (DSET) to plan and implement the strategic vision of an integrated front door for data discovery and access across the organization, including all laboratories, the library, and UCAR Community Programs. The DSET is focused on improving the quality of users' experiences in finding and using NCAR's digital assets. This effort also supports new policies included in federal mandates, NSF requirements, and journal publication rules. An initial survey with 97 respondents identified 68 persons responsible for more than 3 petabytes of data. An inventory, using the Data Asset Framework produced by the UK Digital Curation Centre as a starting point, identified asset types that included files and metadata, publications, images, and software (visualization, analysis, model codes). User story sessions with representatives from each lab identified and ranked desired features for a unified Scientific Data Management System (SDMS). A process beginning with an organization-wide assessment of metadata by the HDF Group and followed by meetings with labs to identify key documentation concepts, culminated in the development of an NCAR metadata dialect that leverages the DataCite and ISO 19115 standards. The tasks ahead are to build out an SDMS and populate it with rich standardized metadata. Software packages have been prototyped and currently are being tested and reviewed by DSET members. Key challenges for the DSET include technical and non-technical issues. First, the status quo with regard to how assets are managed varies widely across the organization. There are differences in file format standards, technologies, and discipline-specific vocabularies. Metadata diversity is another real challenge. The types of metadata, the standards used, and the capacity to create new metadata varies across the organization. Significant effort is required to develop tools to create new standard metadata across the organization, adapt and integrate current digital assets, and establish consistent data management practices going forward. To be successful, best practices must be infused into daily activities. This poster will highlight the processes, lessons learned, and current status of the DSET effort at NCAR.
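
    To make the metadata target concrete, the sketch below shows a minimal DataCite-style descriptive record of the kind such a unified system might harvest; the field names follow DataCite's mandatory properties, while all values are invented placeholders.

```python
# Minimal DataCite-style record (illustrative field values only).
import json

record = {
    "identifier": {"identifierType": "DOI", "identifier": "10.0000/example"},
    "creators": [{"creatorName": "Doe, Jane", "affiliation": "NCAR"}],
    "titles": [{"title": "Example gridded surface wind dataset"}],
    "publisher": "NCAR / UCAR",
    "publicationYear": "2016",
    "resourceType": {"resourceTypeGeneral": "Dataset", "resourceType": "Gridded data"},
}
print(json.dumps(record, indent=2))
```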

  10. Easy Handling of Sensors and Actuators over TCP/IP Networks by Open Source Hardware/Software

    PubMed Central

    Mejías, Andrés; Herrera, Reyes S.; Márquez, Marco A.; Calderón, Antonio José; González, Isaías; Andújar, José Manuel

    2017-01-01

    There are several specific solutions for accessing sensors and actuators present in any process or system through a TCP/IP network, either local or a wide area type like the Internet. The usage of sensors and actuators of different nature and diverse interfaces (SPI, I2C, analogue, etc.) makes access to them from a network in a homogeneous and secure way more complex. A framework, including both software and hardware resources, is necessary to simplify and unify networked access to these devices. In this paper, a set of open-source software tools, specifically designed to cover the different issues concerning the access to sensors and actuators, and two proposed low-cost hardware architectures to operate with the abovementioned software tools are presented. They allow integrated and easy access to local or remote sensors and actuators. The software tools, integrated in the free authoring tool Easy Java and Javascript Simulations (EJS) solve the interaction issues between the subsystem that integrates sensors and actuators into the network, called convergence subsystem in this paper, and the Human Machine Interface (HMI)—this one designed using the intuitive graphical system of EJS—located on the user’s computer. The proposed hardware architectures and software tools are described and experimental implementations with the proposed tools are presented. PMID:28067801

  11. Easy Handling of Sensors and Actuators over TCP/IP Networks by Open Source Hardware/Software.

    PubMed

    Mejías, Andrés; Herrera, Reyes S; Márquez, Marco A; Calderón, Antonio José; González, Isaías; Andújar, José Manuel

    2017-01-05

    There are several specific solutions for accessing sensors and actuators present in any process or system through a TCP/IP network, either local or a wide area type like the Internet. The usage of sensors and actuators of different nature and diverse interfaces (SPI, I2C, analogue, etc.) makes access to them from a network in a homogeneous and secure way more complex. A framework, including both software and hardware resources, is necessary to simplify and unify networked access to these devices. In this paper, a set of open-source software tools, specifically designed to cover the different issues concerning the access to sensors and actuators, and two proposed low-cost hardware architectures to operate with the abovementioned software tools are presented. They allow integrated and easy access to local or remote sensors and actuators. The software tools, integrated in the free authoring tool Easy Java and Javascript Simulations (EJS) solve the interaction issues between the subsystem that integrates sensors and actuators into the network, called convergence subsystem in this paper, and the Human Machine Interface (HMI)-this one designed using the intuitive graphical system of EJS-located on the user's computer. The proposed hardware architectures and software tools are described and experimental implementations with the proposed tools are presented.
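
    As a generic illustration of networked sensor access of the kind discussed above, the sketch below polls a single reading over a plain TCP socket; the host, port and line-oriented request protocol are assumptions, and it does not use the EJS-based tools from the paper.

```python
# Generic TCP polling of a networked sensor (hypothetical endpoint and protocol).
import socket

SENSOR_HOST, SENSOR_PORT = "192.168.1.50", 5000   # assumed convergence-subsystem endpoint

def read_sensor_value():
    """Request one reading from the remote device and return it as a float."""
    with socket.create_connection((SENSOR_HOST, SENSOR_PORT), timeout=2.0) as sock:
        sock.sendall(b"READ temperature\n")        # made-up request command
        line = sock.makefile("r").readline()       # e.g. "23.7\n"
        return float(line.strip())

if __name__ == "__main__":
    print(f"temperature = {read_sensor_value():.1f} degC")
```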

  12. Space Telecommunications Radio System Software Architecture Concepts and Analysis

    NASA Technical Reports Server (NTRS)

    Handler, Louis M.; Hall, Charles S.; Briones, Janette C.; Blaser, Tammy M.

    2008-01-01

    The Space Telecommunications Radio System (STRS) project investigated various Software Defined Radio (SDR) architectures for Space. An STRS architecture has been selected that separates the STRS operating environment from its various waveforms and also abstracts any specialized hardware to limit its effect on the operating environment. The design supports software evolution where new functionality is incorporated into the radio. Radio hardware functionality has been moving from hardware based ASICs into firmware and software based processors such as FPGAs, DSPs and General Purpose Processors (GPPs). Use cases capture the requirements of a system by describing how the system should interact with the users or other systems (the actors) to achieve a specific goal. The Unified Modeling Language (UML) is used to illustrate the Use Cases in a variety of ways. The Top Level Use Case diagram shows groupings of the use cases and how the actors are involved. The state diagrams depict the various states that a system or object may be in and the transitions between those states. The sequence diagrams show the main flow of activity as described in the use cases.

  13. Proactive human-computer collaboration for information discovery

    NASA Astrophysics Data System (ADS)

    DiBona, Phil; Shilliday, Andrew; Barry, Kevin

    2016-05-01

    Lockheed Martin Advanced Technology Laboratories (LM ATL) is researching methods, representations, and processes for human/autonomy collaboration to scale analysis and hypothesis substantiation for intelligence analysts. This research establishes a machine-readable hypothesis representation that is commonsensical to the human analyst. The representation unifies context between the human and the computer, enabling autonomy, in the form of analytic software, to support the analyst by proactively acquiring, assessing, and organizing high-value information that is needed to inform and substantiate hypotheses.

  14. An Estimation Procedure for the Structural Parameters of the Unified Cognitive/IRT Model.

    ERIC Educational Resources Information Center

    Jiang, Hai; And Others

    L. V. DiBello, W. F. Stout, and L. A. Roussos (1993) have developed a new item response model, the Unified Model, which brings together the discrete, deterministic aspects of cognition favored by cognitive scientists, and the continuous, stochastic aspects of test response behavior that underlie item response theory (IRT). The Unified Model blends…

  15. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey

    2018-02-01

    At the present stage of computer technology development it is possible to study the properties and processes of complex systems at the molecular and even atomic levels, for example, by means of molecular dynamics methods. The most interesting problems are those related to the study of complex processes under real physical conditions. Solving such problems requires the use of high performance computing systems of various types, for example, GRID systems and HPC clusters. Considering how time consuming the computational tasks are, the need arises for software for the automatic and unified monitoring of such computations. A complex computational task can be performed over different HPC systems. It requires output data synchronization between the storage chosen by a scientist and the HPC system used for the computations. The design of the computational domain is also quite a problem. It requires complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes the prototype of a cloud service intended for the design of atomistic systems of large volume for further detailed molecular dynamics calculations and for the computational management of these calculations, and presents the part of its concept aimed at initial data generation on HPC systems.

  16. An application of miniscale experiments on Earth to refine microgravity analysis of adiabatic multiphase flow in space

    NASA Technical Reports Server (NTRS)

    Rothe, Paul H.; Martin, Christine; Downing, Julie

    1994-01-01

    Adiabatic two-phase flow is of interest to the design of multiphase fluid and thermal management systems for spacecraft. This paper presents original data and unifies existing data for capillary tubes as a step toward assessing existing multiphase flow analysis and engineering software. Comparisons of theory with these data once again confirm the broad accuracy of the theory. Due to the simplicity and low cost of the capillary tube experiments, which were performed on Earth, we were able to closely examine for the first time a flow situation that had not previously been examined appreciably by aircraft tests. This is the situation of a slug flow at high quality, near the transition to annular flow. Our comparison of software calculations with these data revealed overprediction of pipeline pressure drop by up to a factor of three. In turn, this finding motivated a reexamination of the existing theory, and then the development of a new analytical model that is in far better agreement with the data. This sequence of discovery illustrates the role of inexpensive miniscale modeling on Earth to anticipate microgravity behavior in space and to complete and help define needs for aircraft tests.

  17. Unified Desktop for Monitoring & Control Applications - The Open Navigator Framework Applied for Control Centre and EGSE Applications

    NASA Astrophysics Data System (ADS)

    Brauer, U.

    2007-08-01

    The Open Navigator Framework (ONF) was developed to provide a unified and scalable platform for user interface integration. The main objective of the framework was to raise the usability of monitoring and control consoles and to provide reuse of software components in different application areas. ONF is currently applied for the Columbus onboard crew interface, the commanding application for the Columbus Control Centre, the specialized user interfaces of the Columbus user facilities, the Mission Execution Crew Assistant (MECA) study and EADS Astrium internal R&D projects. ONF provides a well documented and proven middleware for GUI components (Java plugin interface, simplified concept similar to Eclipse). The overall application configuration is performed within a graphical user interface for layout and component selection; the end-user does not have to work in the underlying XML configuration files. ONF was optimized to provide harmonized user interfaces for monitoring and command consoles. It provides many convenience functions designed together with flight controllers and onboard crew: user-defined workspaces, including support for multiple screens; an efficient communication mechanism between the components; integrated web browsing and documentation search and viewing; consistent and integrated menus and shortcuts; common logging and application configuration (properties); and a supervision interface for remote plugin GUI access (web based). A large number of operationally proven ONF components have been developed: Command Stack & History (release of commands and follow-up of command acknowledgements); System Message Panel (browse, filter and search system messages/events); Unified Synoptic System (generic synoptic display system); Situational Awareness (overall subsystem status based on monitoring of key parameters); System Model Browser (browse mission database definitions: measurements, commands, events); Flight Procedure Executor (execution of checklist and logical-flow interactive procedures); Web Browser (integrated browsing of reference documentation and operations data); Timeline Viewer (view of the master timeline as a Gantt chart); and Search (local search of operations products, e.g. documentation, procedures, displays). All GUI components access the underlying spacecraft data (commanding, reporting data, events, command history) via a common library providing adaptors for the current deployments (Columbus MCS, Columbus onboard Data Management System, Columbus Trainer raw packet protocol). New adaptors are easy to develop; currently an adaptor to SCOS 2000 is being developed as part of a study for the ESTEC standardization section ("USS for ESTEC Reference Facility").

  18. Conic section function neural network circuitry for offline signature recognition.

    PubMed

    Erkmen, Burcu; Kahraman, Nihan; Vural, Revna A; Yildirim, Tulay

    2010-04-01

    In this brief, conic section function neural network (CSFNN) circuitry was designed for offline signature recognition. CSFNN is a unified framework for multilayer perceptron (MLP) and radial basis function (RBF) networks that makes simultaneous use of the advantages of both. The CSFNN circuitry architecture was developed using a mixed-mode circuit implementation. The designed circuit system is problem independent; hence, the general-purpose neural network circuit system can be applied to various pattern recognition problems with different network sizes, up to a maximum network size of 16-16-8. In this brief, the CSFNN circuitry system has been applied to two different signature recognition problems. The CSFNN circuitry was trained with a chip-in-the-loop learning technique in order to compensate for typical analog process variations. The CSFNN hardware achieved computational performance highly comparable to the CSFNN software for nonlinear signature recognition problems.
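
    The unifying idea behind a conic section unit is a propagation rule that interpolates between a perceptron-style dot product and an RBF-style distance term. The sketch below is only a schematic NumPy illustration of that blend; the exact CSFNN propagation rule and its analog circuit realization in the paper may differ, and the mixing angle `omega` is an illustrative assumption.

```python
# Schematic sketch of a "conic section" unit blending a perceptron-style dot
# product with an RBF-style distance term. The exact CSFNN rule and its
# mixed-mode circuit realisation may differ; names here are illustrative.
import numpy as np


def conic_section_unit(x, w, c, omega):
    """x: input vector, w: weights, c: centre, omega: opening angle.

    omega near pi/2 emphasises the dot-product (MLP-like) term,
    omega near 0 emphasises the distance (RBF-like) term.
    """
    dot_term = np.dot(w, x - c)
    dist_term = np.linalg.norm(x - c)
    return dot_term - np.cos(omega) * dist_term


x = np.array([0.4, -0.2, 0.7])
w = np.array([0.5, 0.1, -0.3])
c = np.zeros(3)

print(conic_section_unit(x, w, c, omega=np.pi / 2))  # behaves like a perceptron
print(conic_section_unit(x, w, c, omega=0.1))        # dominated by the RBF term
```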

  19. Nondestructive assessment of waveguides using an integrated electromechanical impedance and ultrasonic waves approach

    NASA Astrophysics Data System (ADS)

    Nasrollahi, Amir; Ma, Zhaoyun; Rizzo, Piervincenzo

    2017-04-01

    In this paper we present a structural health monitoring (SHM) paradigm based on the simultaneous use of ultrasounds and electromechanical impedance (EMI) to monitor waveguides. The paradigm uses guided ultrasonic waves (GUWs) in pitch-catch mode and EMI simultaneously, with both methodologies driven by the same sensing/hardware/software unit. To assess the feasibility of this unified system, an aluminum plate was monitored for varying damage locations. Damage was simulated by adding small masses to the plate. The results of the pitch-catch GUW testing mode were used in ultrasonic tomography, while statistical analysis was used to detect the damage from the EMI measurements. The results of GUW and EMI monitoring show that the proposed system is robust and can be developed further to address the challenges associated with the SHM of complex structures.

  20. 3D molecular models of whole HIV-1 virions generated with cellPACK

    PubMed Central

    Goodsell, David S.; Autin, Ludovic; Forli, Stefano; Sanner, Michel F.; Olson, Arthur J.

    2014-01-01

    As knowledge of individual biological processes grows, it becomes increasingly useful to frame new findings within their larger biological contexts in order to generate new systems-scale hypotheses. This report highlights two major iterations of a whole virus model of HIV-1, generated with the cellPACK software. cellPACK integrates structural and systems biology data with packing algorithms to assemble comprehensive 3D models of cell-scale structures in molecular detail. This report describes the biological data, modeling parameters and cellPACK methods used to specify and construct editable models for HIV-1. Anticipating that cellPACK interfaces under development will enable researchers from diverse backgrounds to critique and improve the biological models, we discuss how cellPACK can be used as a framework to unify different types of data across all scales of biology. PMID:25253262

  1. Consolidation and development roadmap of the EMI middleware

    NASA Astrophysics Data System (ADS)

    Kónya, B.; Aiftimiei, C.; Cecchi, M.; Field, L.; Fuhrmann, P.; Nilsen, J. K.; White, J.

    2012-12-01

    Scientific research communities have benefited recently from the increasing availability of computing and data infrastructures with unprecedented capabilities for large scale distributed initiatives. These infrastructures are largely defined and enabled by the middleware they deploy. One of the major issues in the current usage of research infrastructures is the need to use similar but often incompatible middleware solutions. The European Middleware Initiative (EMI) is a collaboration of the major European middleware providers ARC, dCache, gLite and UNICORE. EMI aims to: deliver a consolidated set of middleware components for deployment in EGI, PRACE and other Distributed Computing Infrastructures; extend the interoperability between grids and other computing infrastructures; strengthen the reliability of the services; establish a sustainable model to maintain and evolve the middleware; fulfil the requirements of the user communities. This paper presents the consolidation and development objectives of the EMI software stack covering the last two years. The EMI development roadmap is introduced along the four technical areas of compute, data, security and infrastructure. The compute area plan focuses on consolidation of standards and agreements through a unified interface for job submission and management, a common format for accounting, the wide adoption of GLUE schema version 2.0 and the provision of a common framework for the execution of parallel jobs. The security area is working towards a unified security model and lowering the barriers to Grid usage by allowing users to gain access with their own credentials. The data area is focusing on implementing standards to ensure interoperability with other grids and industry components and to reuse already existing clients in operating systems and open source distributions. One of the highlights of the infrastructure area is the consolidation of the information system services via the creation of a common information backbone.

  2. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532
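
    The core design idea, an algorithm object that declares its parameters once so that user interfaces and tests can be derived from that single description, can be sketched as follows. This is not the BioImage Suite API; all names below are hypothetical, and only a command-line interface and a toy self-test are generated.

```python
# Minimal sketch of encapsulating an algorithm, its parameters, its CLI and a
# regression test in one object. Not the BioImage Suite API; names are
# hypothetical.
import argparse


class AlgorithmModule:
    name = "smooth"
    # one declarative parameter table drives CLI generation and testing
    parameters = {"sigma": (float, 2.0, "Gaussian smoothing width in mm")}

    def run(self, **params):
        sigma = params["sigma"]
        return f"smoothed image with sigma={sigma}"

    def make_cli(self):
        parser = argparse.ArgumentParser(prog=self.name)
        for pname, (ptype, default, help_text) in self.parameters.items():
            parser.add_argument(f"--{pname}", type=ptype, default=default,
                                help=help_text)
        return parser

    def regression_test(self):
        # the same parameter table supplies defaults for a nightly self-test
        defaults = {k: v[1] for k, v in self.parameters.items()}
        assert "sigma=2.0" in self.run(**defaults)
        return True


if __name__ == "__main__":
    module = AlgorithmModule()
    args = module.make_cli().parse_args(["--sigma", "3.5"])
    print(module.run(**vars(args)))
    print("nightly test passed:", module.regression_test())
```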

  3. Integrating unified medical language system and association mining techniques into relevance feedback for biomedical literature search.

    PubMed

    Ji, Yanqing; Ying, Hao; Tran, John; Dews, Peter; Massanari, R Michael

    2016-07-19

    Finding highly relevant articles in biomedical databases is challenging not only because it is often difficult to accurately express a user's underlying intention through keywords, but also because a keyword-based query normally returns a long list of hits, many of which are unwanted by the user. This paper proposes a novel biomedical literature search system, called BiomedSearch, which supports complex queries and relevance feedback. The system employs association mining techniques to build a k-profile representing a user's relevance feedback. More specifically, we developed a weighted interest measure and an association mining algorithm to find the strength of association between a query and each concept in the article(s) selected by the user as feedback. The top concepts are used to form a k-profile for the next-round search. BiomedSearch relies on Unified Medical Language System (UMLS) knowledge sources to map text files to standard biomedical concepts, and it was designed to support queries of any level of complexity. A prototype of the BiomedSearch software was built and preliminarily evaluated using the Genomics data from the TREC (Text Retrieval Conference) 2006 Genomics Track. Initial experimental results indicated that BiomedSearch increased the mean average precision (MAP) for a set of queries. With UMLS and association mining techniques, BiomedSearch can effectively utilize users' relevance feedback to improve the performance of biomedical literature search.
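
    The general relevance-feedback loop described above can be illustrated with a toy: score each concept found in the user-selected articles by how strongly it co-occurs with the query, then keep the top-k concepts as the profile for the next search. The paper's weighted interest measure and UMLS concept mapping are not reproduced; the confidence-style score below is only a stand-in.

```python
# Toy sketch of relevance feedback via concept-query association. The actual
# weighted interest measure and UMLS mapping used by BiomedSearch are not
# reproduced; the score below is a simple illustrative co-occurrence ratio.
from collections import Counter


def build_k_profile(feedback_docs, query_terms, k=3):
    """feedback_docs: list of concept lists (one per user-selected article)."""
    query = set(query_terms)
    concept_counts = Counter()
    cooccur_counts = Counter()
    for concepts in feedback_docs:
        concepts = set(concepts)
        hits_query = bool(concepts & query)
        for c in concepts - query:
            concept_counts[c] += 1
            if hits_query:
                cooccur_counts[c] += 1
    # association strength: fraction of a concept's occurrences that
    # co-occur with the query (a simple confidence measure)
    scores = {c: cooccur_counts[c] / concept_counts[c] for c in concept_counts}
    return sorted(scores, key=scores.get, reverse=True)[:k]


docs = [
    ["gene expression", "microarray", "leukemia"],
    ["microarray", "normalization"],
    ["leukemia", "gene expression", "prognosis"],
]
print(build_k_profile(docs, query_terms=["leukemia"], k=2))
```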

  4. Data Portal for the Library of Integrated Network-based Cellular Signatures (LINCS) program: integrated access to diverse large-scale cellular perturbation response data

    PubMed Central

    Koleti, Amar; Terryn, Raymond; Stathias, Vasileios; Chung, Caty; Cooper, Daniel J; Turner, John P; Vidović, Dušica; Forlin, Michele; Kelley, Tanya T; D’Urso, Alessandro; Allen, Bryce K; Torre, Denis; Jagodnik, Kathleen M; Wang, Lily; Jenkins, Sherry L; Mader, Christopher; Niu, Wen; Fazel, Mehdi; Mahi, Naim; Pilarczyk, Marcin; Clark, Nicholas; Shamsaei, Behrouz; Meller, Jarek; Vasiliauskas, Juozas; Reichard, John; Medvedovic, Mario; Ma’ayan, Avi; Pillai, Ajay

    2018-01-01

    Abstract The Library of Integrated Network-based Cellular Signatures (LINCS) program is a national consortium funded by the NIH to generate a diverse and extensive reference library of cell-based perturbation-response signatures, along with novel data analytics tools to improve our understanding of human diseases at the systems level. In contrast to other large-scale data generation efforts, LINCS Data and Signature Generation Centers (DSGCs) employ a wide range of assay technologies cataloging diverse cellular responses. Integration of, and unified access to LINCS data has therefore been particularly challenging. The Big Data to Knowledge (BD2K) LINCS Data Coordination and Integration Center (DCIC) has developed data standards specifications, data processing pipelines, and a suite of end-user software tools to integrate and annotate LINCS-generated data, to make LINCS signatures searchable and usable for different types of users. Here, we describe the LINCS Data Portal (LDP) (http://lincsportal.ccs.miami.edu/), a unified web interface to access datasets generated by the LINCS DSGCs, and its underlying database, LINCS Data Registry (LDR). LINCS data served on the LDP contains extensive metadata and curated annotations. We highlight the features of the LDP user interface that is designed to enable search, browsing, exploration, download and analysis of LINCS data and related curated content. PMID:29140462

  5. The research and implementation of a unified identity authentication in e-government network

    NASA Astrophysics Data System (ADS)

    Feng, Zhou

    A current problem in e-government networks is that information system applications are developed independently by various departments, each with its own specific set of authentication and access control mechanisms. To build a comprehensive information system that supports sharing and exchanging information, a sound and secure unified e-government authentication system is needed first. Drawing on the practical development of an e-government network, the paper discusses in detail how to achieve data synchronization between the unified authentication system and the related application systems.

  6. "UNICERT," or: Towards the Development of a Unified Language Certificate for German Universities.

    ERIC Educational Resources Information Center

    Voss, Bernd

    The standardization of second language proficiency levels for university students in Germany is discussed. Problems with the current system, in which each university has developed its own program of study and proficiency certification, are examined and a framework for development of a unified language certificate for all universities is outlined.…

  7. GREIT: a unified approach to 2D linear EIT reconstruction of lung images.

    PubMed

    Adler, Andy; Arnold, John H; Bayford, Richard; Borsic, Andrea; Brown, Brian; Dixon, Paul; Faes, Theo J C; Frerichs, Inéz; Gagnon, Hervé; Gärber, Yvo; Grychtol, Bartłomiej; Hahn, Günter; Lionheart, William R B; Malik, Anjum; Patterson, Robert P; Stocks, Janet; Tizzard, Andrew; Weiler, Norbert; Wolf, Gerhard K

    2009-06-01

    Electrical impedance tomography (EIT) is an attractive method for clinically monitoring patients during mechanical ventilation, because it can provide a non-invasive continuous image of pulmonary impedance which indicates the distribution of ventilation. However, most clinical and physiological research in lung EIT is done using older and proprietary algorithms; this is an obstacle to interpretation of EIT images because the reconstructed images are not well characterized. To address this issue, we develop a consensus linear reconstruction algorithm for lung EIT, called GREIT (Graz consensus Reconstruction algorithm for EIT). This paper describes the unified approach to linear image reconstruction developed for GREIT. The framework for the linear reconstruction algorithm consists of (1) detailed finite element models of a representative adult and neonatal thorax, (2) consensus on the performance figures of merit for EIT image reconstruction and (3) a systematic approach to optimize a linear reconstruction matrix to desired performance measures. Consensus figures of merit, in order of importance, are (a) uniform amplitude response, (b) small and uniform position error, (c) small ringing artefacts, (d) uniform resolution, (e) limited shape deformation and (f) high resolution. Such figures of merit must be attained while maintaining small noise amplification and small sensitivity to electrode and boundary movement. This approach represents the consensus of a large and representative group of experts in EIT algorithm design and clinical applications for pulmonary monitoring. All software and data to implement and test the algorithm have been made available under an open source license which allows free research and commercial use.
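
    The end product of the approach described above is a single linear reconstruction matrix. The sketch below shows, under stated assumptions, the regularized least-squares core of training such a matrix so that it maps simulated measurements of training targets onto their desired images; GREIT's actual optimization additionally weights the agreed figures of merit and a noise term, which are not modeled here, and the array sizes are illustrative.

```python
# Sketch of training a single linear EIT reconstruction matrix R so that
# R @ v_k approximates a desired image d_k for simulated training targets.
# GREIT's actual optimisation also weights figures of merit and a noise term;
# this is only the regularised least-squares core with made-up data.
import numpy as np

rng = np.random.default_rng(0)

n_meas, n_pixels, n_train = 208, 1024, 500   # illustrative sizes
V = rng.normal(size=(n_meas, n_train))       # simulated measurements v_k
D = rng.normal(size=(n_pixels, n_train))     # desired (target) images d_k
lam = 1e-2                                   # regularisation strength

# minimise sum_k ||R v_k - d_k||^2 + lam ||R||_F^2
R = D @ V.T @ np.linalg.inv(V @ V.T + lam * np.eye(n_meas))

v_new = rng.normal(size=n_meas)              # one frame of measurements
x_hat = R @ v_new                            # reconstructed image (flattened)
print(R.shape, x_hat.shape)
```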

  8. Initial implementation of a comparative data analysis ontology.

    PubMed

    Prosdocimi, Francisco; Chisham, Brandon; Pontelli, Enrico; Thompson, Julie D; Stoltzfus, Arlin

    2009-07-03

    Comparative analysis is used throughout biology. When entities under comparison (e.g. proteins, genomes, species) are related by descent, evolutionary theory provides a framework that, in principle, allows N-ary comparisons of entities, while controlling for non-independence due to relatedness. Powerful software tools exist for specialized applications of this approach, yet it remains under-utilized in the absence of a unifying informatics infrastructure. A key step in developing such an infrastructure is the definition of a formal ontology. The analysis of use cases and existing formalisms suggests that a significant component of evolutionary analysis involves a core problem of inferring a character history, relying on key concepts: "Operational Taxonomic Units" (OTUs), representing the entities to be compared; "character-state data" representing the observations compared among OTUs; "phylogenetic tree", representing the historical path of evolution among the entities; and "transitions", the inferred evolutionary changes in states of characters that account for observations. Using the Web Ontology Language (OWL), we have defined these and other fundamental concepts in a Comparative Data Analysis Ontology (CDAO). CDAO has been evaluated for its ability to represent token data sets and to support simple forms of reasoning. With further development, CDAO will provide a basis for tools (for semantic transformation, data retrieval, validation, integration, etc.) that make it easier for software developers and biomedical researchers to apply evolutionary methods of inference to diverse types of data, so as to integrate this powerful framework for reasoning into their research.

  9. Decision support and disease management: a logic engineering approach.

    PubMed

    Fox, J; Thomson, R

    1998-12-01

    This paper describes the development and application of PROforma, a unified technology for clinical decision support and disease management. Work leading to the implementation of PROforma has been carried out in a series of projects funded by European agencies over the past 13 years. The work has been based on logic engineering, a distinct design and development methodology that combines concepts from knowledge engineering, logic programming, and software engineering. Several of the projects have used the approach to demonstrate a wide range of applications in primary and specialist care and clinical research. Concurrent academic research projects have provided a sound theoretical basis for the safety-critical elements of the methodology. The principal technical results of the work are the PROforma logic language for defining clinical processes and an associated suite of software tools for delivering applications, such as decision support and disease management procedures. The language supports four standard objects (decisions, plans, actions, and enquiries), each of which has an intuitive meaning with well-understood logical semantics. The development toolset includes a powerful visual programming environment for composing applications from these standard components, for verifying consistency and completeness of the resulting specification and for delivering stand-alone or embeddable applications. Tools and applications that have resulted from the work are described and illustrated, with examples from specialist cancer care and primary care. The results of a number of evaluation activities are included to illustrate the utility of the technology.
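
    The four standard task objects named above (plans, decisions, actions and enquiries) can be pictured with a small data model. The sketch below is an illustrative Python rendering of that composition, not the PROforma language or its logical semantics; the guideline, field names and decision rule are hypothetical.

```python
# Illustrative data model for the four standard PROforma task classes.
# Not the PROforma language itself; only a sketch of how a guideline might be
# composed from plans, decisions, actions and enquiries.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Enquiry:
    name: str
    prompt: str            # data requested from the user or the record


@dataclass
class Decision:
    name: str
    candidates: List[str]                       # possible conclusions
    rule: Callable[[Dict[str, float]], str]     # argumentation, simplified


@dataclass
class Action:
    name: str
    procedure: str          # e.g. "prescribe drug X", "order test Y"


@dataclass
class Plan:
    name: str
    components: List[object] = field(default_factory=list)


# A toy two-step guideline: ask for a lab value, then decide on a pathway.
plan = Plan("anaemia_workup", [
    Enquiry("hb", "Enter haemoglobin level (g/dL)"),
    Decision("triage", ["refer", "monitor"],
             rule=lambda data: "refer" if data["hb"] < 8.0 else "monitor"),
    Action("notify", "send outcome to the GP record"),
])

print(plan.components[1].rule({"hb": 7.2}))   # -> "refer"
```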

  10. Hypersonic transport aircraft

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A hypersonic transport aircraft design project was selected as a result of interactions with NASA Lewis Research Center personnel and fits the Presidential concept of the Orient Express. The Graduate Teaching Assistant (GTA) and an undergraduate student worked at the NASA Lewis Research Center during the 1986 summer conducting a literature survey, and relevant literature and useful software were collected. The computer software was implemented in the Computer Aided Design Laboratory of the Mechanical and Aerospace Engineering Department. In addition to the lectures by the three instructors, a series of guest lectures was conducted. The first of these lectures 'Anywhere in the World in Two Hours' was delivered by R. Luidens of NASA Lewis Center. In addition, videotaped copies of relevant seminars obtained from NASA Lewis were also featured. The first assignment was to individually research and develop the mission requirements and to discuss the findings with the class. The class in consultation with the instructors then developed a set of unified mission requirements. Then the class was divided into three design groups (1) Aerodynamics Group, (2) Propulsion Group, and (3) Structures and Thermal Analyses Group. The groups worked on their respective design areas and interacted with each other to finally come up with an integrated conceptual design. The three faculty members and the GTA acted as the resource persons for the three groups and aided in the integration of the individual group designs into the final design of a hypersonic aircraft.

  11. Design for Run-Time Monitor on Cloud Computing

    NASA Astrophysics Data System (ADS)

    Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications, as well as infrastructure, as services over the Internet. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design a Run-Time Monitor (RTM), which is system software that monitors application behavior at run time, analyzes the collected information, and optimizes resources on cloud computing. RTM monitors application software through library instrumentation as well as the underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
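
    A minimal sketch of the library-instrumentation idea follows: service calls are wrapped so their latency is recorded at run time, and a simple policy reacts when behavior drifts. This is not the RTM implementation from the paper; the decorator, log and adaptation policy are hypothetical stand-ins.

```python
# Minimal sketch of run-time monitoring via library instrumentation: service
# calls are wrapped so their latency is recorded and a toy policy can react
# when behaviour drifts. Not the RTM described in the paper.
import functools
import statistics
import time

latency_log = {}


def instrumented(func):
    """Decorator standing in for library-level instrumentation."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        latency_log.setdefault(func.__name__, []).append(
            time.perf_counter() - start)
        return result
    return wrapper


@instrumented
def handle_request(payload_size):
    time.sleep(0.001 * payload_size)   # stand-in for real work
    return payload_size


def adapt_configuration(threshold_s=0.005):
    """Toy adaptation policy: flag services whose mean latency drifts."""
    for name, samples in latency_log.items():
        if statistics.mean(samples) > threshold_s:
            print(f"{name}: mean latency high, requesting more resources")


for size in (2, 4, 8, 8):
    handle_request(size)
adapt_configuration()
```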

  12. Information-computational platform for collaborative multidisciplinary investigations of regional climatic changes and their impacts

    NASA Astrophysics Data System (ADS)

    Gordov, Evgeny; Lykosov, Vasily; Krupchatnikov, Vladimir; Okladnikov, Igor; Titov, Alexander; Shulgina, Tamara

    2013-04-01

    Analysis of the growing volume of climate-change-related data from sensors and model outputs requires collaborative, multidisciplinary efforts by researchers. To do this in a timely and reliable way, a modern information-computational infrastructure supporting integrated studies in the environmental sciences is needed. The recently developed experimental software and hardware platform Climate (http://climate.scert.ru/) provides the required environment for regional climate change investigations. The platform combines a modern web 2.0 approach, GIS functionality and capabilities to run climate and meteorological models, process large geophysical datasets and support relevant analysis. It also supports joint software development by distributed research groups and the organization of thematic education for students and post-graduate students. In particular, the platform software includes dedicated modules for numerical processing of regional and global modeling results for subsequent analysis and visualization. Runs of the integrated WRF and «Planet Simulator» models, as well as preprocessing and visualization of modeling results, are also provided. All functions of the platform are accessible to users through a web portal using a common graphical web browser in the form of an interactive graphical user interface, which provides, in particular, selection of a geographical region of interest (pan and zoom), data layer manipulation (order, enable/disable, feature extraction) and visualization of results. The platform provides users with capabilities for heterogeneous geophysical data analysis, including high-resolution data, and for discovering tendencies in climatic and ecosystem changes in the framework of different multidisciplinary studies. Using it, even an unskilled user without specific knowledge can perform reliable computational processing and visualization of large meteorological, climatic and satellite monitoring datasets through a unified graphical web interface. Partial support of RF Ministry of Education and Science grant 8345, SB RAS Program VIII.80.2, Projects 69, 131 and 140, and APN project CBA2012-16NSY is acknowledged.

  13. CrossTalk. The Journal of Defense Software Engineering. Volume 14, Number 5, May 2001

    DTIC Science & Technology

    2001-05-01

    ... Capability - the first released version. (The artifacts for each are provided by an electronic process guide [10] and are also used by the Rational Unified

  14. A Unified Framework for Simulating Markovian Models of Highly Dependable Systems

    DTIC Science & Technology

    1989-07-01


  15. Generalized Preconditioned Locally Harmonic Residual Eigensolver (GPLHR) v0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    VECHARYNSKI, EUGENE; YANG, CHAO

    The software contains a MATLAB implementation of the Generalized Preconditioned Locally Harmonic Residual (GPLHR) method for solving standard and generalized non-Hermitian eigenproblems. The method is particularly useful for computing a subset of eigenvalues, and their eigen- or Schur vectors, closest to a given shift. The proposed method is based on block iterations and can take advantage of a preconditioner if it is available. It does not need to perform exact shift-and-invert transformation. Standard and generalized eigenproblems are handled in a unified framework.
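
    The GPLHR code itself is a MATLAB implementation that deliberately avoids exact shift-and-invert transformations. The sketch below does not reproduce GPLHR; it only illustrates the problem the method targets, a few eigenvalues of a non-Hermitian matrix closest to a given shift, using SciPy's standard shift-invert Arnoldi solver as a point of comparison, with an arbitrary test matrix.

```python
# Illustration of the target problem (eigenvalues nearest a shift) using the
# standard shift-invert Arnoldi solver in SciPy. This is NOT the GPLHR
# algorithm, which avoids exact shift-and-invert; the test matrix is arbitrary.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
A = sp.random(n, n, density=0.01, random_state=1, format="csr") \
    + sp.diags(np.arange(n, dtype=float))          # non-Hermitian test matrix

shift = 42.3
vals, vecs = spla.eigs(A, k=4, sigma=shift)        # eigenvalues nearest the shift
print(np.sort_complex(vals))
```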

  16. Unified Protocol for the Transdiagnostic Treatment of Emotional Disorders: Protocol Development and Initial Outcome Data

    ERIC Educational Resources Information Center

    Ellard, Kristen K.; Fairholme, Christopher P.; Boisseau, Christina L.; Farchione, Todd J.; Barlow, David H.

    2010-01-01

    The Unified Protocol (UP) is a transdiagnostic, emotion-focused cognitive-behavioral treatment developed to be applicable across the emotional disorders. The UP consists of 4 core modules: increasing emotional awareness, facilitating flexibility in appraisals, identifying and preventing behavioral and emotional avoidance, and situational and…

  17. Evaluation of the Unified Compensation and Classification Plan.

    ERIC Educational Resources Information Center

    Dade County Public Schools, Miami, FL. Office of Educational Accountability.

    The Unified Classification and Compensation Plan of the Dade County (Florida) Public Schools consists of four interdependent activities that include: (1) developing and maintaining accurate job descriptions, (2) conducting evaluations that recommend job worth and grade, (3) developing and maintaining rates of compensation for job values, and (4)…

  18. The caBIG® Life Science Business Architecture Model

    PubMed Central

    Boyd, Lauren Becnel; Hunicke-Smith, Scott P.; Stafford, Grace A.; Freund, Elaine T.; Ehlman, Michele; Chandran, Uma; Dennis, Robert; Fernandez, Anna T.; Goldstein, Stephen; Steffen, David; Tycko, Benjamin; Klemm, Juli D.

    2011-01-01

    Motivation: Business Architecture Models (BAMs) describe what a business does, who performs the activities, where and when activities are performed, how activities are accomplished and which data are present. The purpose of a BAM is to provide a common resource for understanding business functions and requirements and to guide software development. The cancer Biomedical Informatics Grid (caBIG®) Life Science BAM (LS BAM) provides a shared understanding of the vocabulary, goals and processes that are common in the business of LS research. Results: LS BAM 1.1 includes 90 goals and 61 people and groups within Use Case and Activity Unified Modeling Language (UML) Diagrams. Here we report on the model's current release, LS BAM 1.1, its utility and usage, and plans for its continuing development in future releases. Availability and Implementation: The LS BAM is freely available as UML, PDF and HTML (https://wiki.nci.nih.gov/x/OFNyAQ). Contact: lbboyd@bcm.edu; laurenbboyd@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21450709

  19. GPU-accelerated phase-field simulation of dendritic solidification in a binary alloy

    NASA Astrophysics Data System (ADS)

    Yamanaka, Akinori; Aoki, Takayuki; Ogawa, Satoi; Takaki, Tomohiro

    2011-03-01

    The phase-field simulation of dendritic solidification of a binary alloy has been accelerated by using a graphics processing unit (GPU). To perform the phase-field simulation of alloy solidification on a GPU, a program code was developed with the compute unified device architecture (CUDA). In this paper, the implementation technique of the phase-field model on the GPU is presented. We also evaluated the acceleration performance of the three-dimensional solidification simulation by using a single NVIDIA TESLA C1060 GPU and the developed program code. The results showed that the GPU calculation for 576³ computational grid points achieved a performance of 170 GFLOPS by utilizing the shared memory as a software-managed cache. Furthermore, the computation with the GPU is demonstrated to be 100 times faster than that with a single CPU core. From the obtained results, we confirmed the feasibility of realizing a real-time, fully three-dimensional phase-field simulation of microstructure evolution on a personal desktop computer.
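
    To show the kind of stencil update that gets ported to a CUDA kernel in such codes, a CPU reference in NumPy is sketched below. The paper's binary-alloy model couples phase and concentration fields and is more involved; an Allen–Cahn-type toy update on a small grid is used instead, purely to show the neighbor-reuse pattern that a software-managed cache (shared memory) exploits.

```python
# CPU reference (NumPy) of an explicit stencil update of the kind ported to a
# CUDA kernel in GPU phase-field codes. The paper's binary-alloy model is more
# involved; this Allen-Cahn-type toy only shows the memory-access pattern.
import numpy as np

nx = ny = nz = 64
phi = np.random.default_rng(0).random((nx, ny, nz))
dt, dx, eps2, w = 1e-3, 1.0, 1.0, 1.0


def laplacian(f):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1)
            + np.roll(f, 1, 2) + np.roll(f, -1, 2) - 6.0 * f) / dx**2


for _ in range(10):
    # explicit step: each cell reads its 6 neighbours, which is exactly the
    # data reuse that shared memory accelerates on the GPU
    phi += dt * (eps2 * laplacian(phi)
                 - w * phi * (phi - 1.0) * (2.0 * phi - 1.0))

print(float(phi.mean()))
```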

  20. Impact of office productivity cloud computing on energy consumption and greenhouse gas emissions.

    PubMed

    Williams, Daniel R; Tang, Yinshan

    2013-05-07

    Cloud computing is usually regarded as being energy efficient and thus emitting less greenhouse gas (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud computing Office 365 (O365) and traditional Office 2010 (O2010) software suites was tested and modeled, some cloud services were found to consume more energy than the traditional form. The model developed in this research took into consideration the energy consumption at the three main stages of data transmission: data center, network, and end-user device. Comparable products from each suite were selected and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the networking and user device stages were measured directly. A new measurement and software apportionment approach was defined and utilized, allowing the power consumption of cloud services to be directly measured for the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts: the power consumption of the cloud-based Outlook (8%) and Excel (17%) was lower than that of their traditional counterparts. However, the power consumption of the cloud version of Word was 17% higher than its traditional equivalent, and a third, mixed access method measured for Word emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward to reduce energy consumption and GHG. Direct conversion from the standalone package to the cloud provision platform can now consider energy and GHG emissions at the software development and cloud service design stage using the methods described in this research.
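
    The three-stage accounting described above (data center, network, end-user device) amounts to simple arithmetic once per-stage energies are known. The sketch below illustrates that bookkeeping with purely hypothetical numbers and an assumed grid emission factor; it does not reproduce the study's confidential measurements or its apportionment method.

```python
# Sketch of the three-stage accounting described in the study (data centre,
# network, end-user device). The numbers are purely illustrative and do not
# reproduce the paper's measurements or apportionment.

GRID_EMISSION_FACTOR = 0.45  # kg CO2e per kWh, hypothetical grid average


def activity_footprint(datacentre_kwh, network_kwh, device_kwh):
    """Return total energy (kWh) and emissions (kg CO2e) for one activity."""
    energy = datacentre_kwh + network_kwh + device_kwh
    return energy, energy * GRID_EMISSION_FACTOR


# hypothetical per-activity energy for a cloud vs. standalone word processor
cloud = activity_footprint(datacentre_kwh=0.004, network_kwh=0.003,
                           device_kwh=0.020)
standalone = activity_footprint(datacentre_kwh=0.0, network_kwh=0.0,
                                device_kwh=0.023)

print(f"cloud:      {cloud[0]:.3f} kWh, {cloud[1]*1000:.1f} g CO2e")
print(f"standalone: {standalone[0]:.3f} kWh, {standalone[1]*1000:.1f} g CO2e")
```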

  1. [Computer graphic display of retinal examination results. Software improving the quality of documenting fundus changes].

    PubMed

    Jürgens, Clemens; Grossjohann, Rico; Czepita, Damian; Tost, Frank

    2009-01-01

    Graphic documentation of retinal examination results in clinical ophthalmological practice is often captured as pictures or in handwritten form. Popular software products used to describe changes in the fundus do not differ much from simple graphics programs that allow the user to insert, scale and edit basic graphic elements such as a circle, rectangle, arrow or text. Displaying the results of retinal examinations in a unified way is therefore difficult to achieve, so we devised and implemented modern software tools for this purpose. A computer program was created that enables quick and intuitive creation of fundus sketches that can be digitally archived or printed. Especially for the needs of ophthalmological clinics, a set of standard digital symbols used to document the results of retinal examinations was developed and installed in a library of graphic symbols. These symbols are divided into the following categories: preoperative, postoperative, neovascularization, retinopathy of prematurity. The appropriate symbol can be selected with a click of the mouse and dragged-and-dropped onto the canvas of the fundus. Current forms of documenting the results of retinal examinations are unsatisfactory because they are time consuming and imprecise, and unequivocal interpretation is difficult or in some cases impossible. Using the developed computer program, a sketch of the fundus can be created much more quickly than by hand drawing. Additionally, the quality of the medical documentation is enhanced by a system of well-described and standardized symbols. (1) Graphic symbols used to document the results of retinal examinations are a part of everyday clinical practice. (2) The designed computer program allows quick and intuitive graphical creation of fundus sketches that can be either digitally archived or printed.

  2. Clinical data integration model. Core interoperability ontology for research using primary care data.

    PubMed

    Ethier, J-F; Curcin, V; Barton, A; McGilchrist, M M; Bastiaens, H; Andreasson, A; Rossiter, J; Zhao, L; Arvanitis, T N; Taweel, A; Delaney, B C; Burgun, A

    2015-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". Primary care data is the single richest source of routine health care data. However, its use, both in research and clinical work, often requires data from multiple clinical sites, clinical trials databases and registries. Data integration and interoperability are therefore of utmost importance. TRANSFoRm's general approach relies on a unified interoperability framework, described in a previous paper. We developed a core ontology for an interoperability framework based on data mediation. This article presents how such an ontology, the Clinical Data Integration Model (CDIM), can be designed to support, in conjunction with appropriate terminologies, biomedical data federation within TRANSFoRm, an EU FP7 project that aims to develop the digital infrastructure for a learning healthcare system in European primary care. TRANSFoRm utilizes a unified structural/terminological interoperability framework based on the local-as-view mediation paradigm. Such an approach requires the global information model to describe the domain of interest independently of the data sources to be explored. Following a requirements analysis process, no ontology focusing on primary care research was identified; thus, we designed a realist ontology based on Basic Formal Ontology to support our framework in collaboration with various terminologies used in primary care. The resulting ontology has 549 classes and 82 object properties and is used to support data integration for TRANSFoRm's use cases. Concepts identified by researchers were successfully expressed in queries using CDIM and pertinent terminologies. As an example, we illustrate how, in TRANSFoRm, the Query Formulation Workbench can capture eligibility criteria in a computable representation based on CDIM. A unified mediation approach to semantic interoperability provides a flexible and extensible framework for all types of interaction between health record systems and research systems. CDIM, as the core ontology of such an approach, enables simplicity and consistency of design across the heterogeneous software landscape and can support the specific needs of EHR-driven phenotyping research using primary care data.

  3. BarraCUDA - a fast short read sequence aligner using graphics processing units

    PubMed Central

    2012-01-01

    Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General purpose computing on graphics processing units (GPGPU), extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software that is based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computational-intensive alignment component of BWA to GPU to take advantage of the massive parallelism. As a result, BarraCUDA offers a magnitude of performance boost in alignment throughput when compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of GPU to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net PMID:22244497

  4. Control software and electronics architecture design in the framework of the E-ELT instrumentation

    NASA Astrophysics Data System (ADS)

    Di Marcantonio, P.; Coretti, I.; Cirami, R.; Comari, M.; Santin, P.; Pucillo, M.

    2010-07-01

    During the last years the European Southern Observatory (ESO), in collaboration with other European astronomical institutes, has started several feasibility studies for the E-ELT (European-Extremely Large Telescope) instrumentation and post-focal adaptive optics. The goal is to create a flexible suite of instruments to deal with the wide variety of scientific questions astronomers would like to see solved in the coming decades. In this framework INAF-Astronomical Observatory of Trieste (INAF-AOTs) is currently responsible of carrying out the analysis and the preliminary study of the architecture of the electronics and control software of three instruments: CODEX (control software and electronics) and OPTIMOS-EVE/OPTIMOS-DIORAMAS (control software). To cope with the increased complexity and new requirements for stability, precision, real-time latency and communications among sub-systems imposed by these instruments, new solutions have been investigated by our group. In this paper we present the proposed software and electronics architecture based on a distributed common framework centered on the Component/Container model that uses OPC Unified Architecture as a standard layer to communicate with COTS components of three different vendors. We describe three working prototypes that have been set-up in our laboratory and discuss their performances, integration complexity and ease of deployment.

  5. A novel AIDS/HIV intelligent medical consulting system based on expert systems

    PubMed Central

    Ebrahimi, Alireza Pour; Toloui Ashlaghi, Abbas; Mahdavy Rad, Maryam

    2013-01-01

    Background: The purpose of this paper is to propose a novel intelligent model for AIDS/HIV data based on an expert system and to use it for developing an intelligent medical consulting system for AIDS/HIV. Materials and Methods: In this descriptive research, 752 frequently asked questions (FAQs) about AIDS/HIV were gathered from numerous websites about this disease. To perform the data mining and extract the intelligent model, the six stages of the CRISP method were completed for the FAQs: business understanding, data understanding, data preparation, modelling, evaluation and deployment. The C5.0 tree classification algorithm was used for modelling, and the Rational Unified Process (RUP) was used to develop the web-based medical consulting software through its stages of inception, elaboration, construction and transition. The developed intelligent model has been used in the infrastructure of the software: based on a client's inquiry and keywords, related FAQs are displayed to the client according to rank, with the FAQs' ranks gradually determined by how often clients read them. Based on the displayed FAQs, test and entertainment links are also displayed. Result: The accuracy of the AIDS/HIV intelligent web-based medical consulting system is estimated to be 78.76%. Conclusion: An AIDS/HIV medical consulting system has been developed using an intelligent infrastructure. Being equipped with an intelligent model, providing consulting services on systematic textual data and providing side services based on the client's activities make the implemented system unique. The research has been approved by the Iranian Ministry of Health and Medical Education as being practical. PMID:24251290

  6. Mount control system of the ASTRI SST-2M prototype for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Antolini, Elisa; Tosti, Gino; Tanci, Claudio; Bagaglia, Marco; Canestrari, Rodolfo; Cascone, Enrico; Gambini, Giorgio; Nucciarelli, Giuliano; Pareschi, Giovanni; Scuderi, Salvo; Stringhetti, Luca; Busatta, Andrea; Giacomel, Stefano; Marchiori, Gianpietro; Manfrin, Cristiana; Marcuzzi, Enrico; Di Michele, Daniele; Grigolon, Carlo; Guarise, Paolo

    2016-08-01

    The ASTRI SST-2M telescope is an end-to-end prototype proposed for the Small Size class of Telescopes (SST) of the future Cherenkov Telescope Array (CTA). The prototype is installed in Italy at the INAF observing station located at Serra La Nave on Mount Etna (Sicily) and it was inaugurated in September 2014. This paper presents the software and hardware architecture and development of the system dedicated to the control of the mount, health, safety and monitoring systems of the ASTRI SST-2M telescope prototype. The mount control system installed on the ASTRI SST-2M telescope prototype makes use of standard and widely deployed industrial hardware and software. State of the art of the control and automation industries was selected in order to fulfill the mount related functional and safety requirements with assembly compactness, high reliability, and reduced maintenance. The software package was implemented with the Beckhoff TwinCAT version 3 environment for the software Programmable Logical Controller (PLC), while the control electronics have been chosen in order to maximize the homogeneity and the real time performance of the system. The integration with the high level controller (Telescope Control System) has been carried out by choosing the open platform communications Unified Architecture (UA) protocol, supporting rich data model while offering compatibility with the PLC platform. In this contribution we show how the ASTRI approach for the design and implementation of the mount control system has made the ASTRI SST-2M prototype a standalone intelligent machine, able to fulfill requirements and easy to be integrated in an array configuration such as the future ASTRI mini-array proposed to be installed at the southern site of the Cherenkov Telescope Array (CTA).

  7. Analysis of the prospective energy interconnections in Northeast Asia and development of the data portal

    NASA Astrophysics Data System (ADS)

    Churkin, Andrey; Bialek, Janusz

    2018-01-01

    The development of power interconnections in Northeast Asia has become not only an engineering issue but also a political one. More research institutes are involved in the discussion of the Asian Super Grid initiative, and more politicians mention power interconnection opportunities. UNESCAP has started providing a platform for intergovernmental discussion of the issue. However, there is still a lack of comprehensive modern research on the Asian Super Grid. Moreover, there is no unified database and no unified concept of power routes. Therefore, this article discusses a tool for optimal power route selection and suggests a concept for the unified data portal.

  8. Unified theory of motion of inner planets

    NASA Astrophysics Data System (ADS)

    Kotelnikov, V.; Kislik, M.

    1983-01-01

    A highly accurate, unified theory of motion for the Solar System's inner planets Mercury, Venus, the Earth, Mars was developed. It has practical importance and is used to solve various problems of interplanetary cosmonautics.

  9. A unified approach to the analysis and design of elasto-plastic structures with mechanical contact

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Olhoff, Niels; Taylor, John E.

    1990-01-01

    With structural design in mind, a new unified variational model has been developed which represents the mechanics of deformation elasto-plasticity with unilateral contact conditions. For a design problem formulated as maximization of the load carrying capacity of a structure under certain constraints, the unified model allows for a simultaneous analysis and design synthesis for a whole range of mechanical behavior.

  10. Support Vector Data Descriptions and k-Means Clustering: One Class?

    PubMed

    Gornitz, Nico; Lima, Luiz Alberto; Muller, Klaus-Robert; Kloft, Marius; Nakajima, Shinichi

    2017-09-27

    We present ClusterSVDD, a methodology that unifies support vector data descriptions (SVDDs) and k-means clustering into a single formulation. This allows both methods to benefit from one another, i.e., by adding flexibility using multiple spheres for SVDDs and increasing anomaly resistance and flexibility through kernels to k-means. In particular, our approach leads to a new interpretation of k-means as a regularized mode seeking algorithm. The unifying formulation further allows for deriving new algorithms by transferring knowledge from one-class learning settings to clustering settings and vice versa. As a showcase, we derive a clustering method for structured data based on a one-class learning scenario. Additionally, our formulation can be solved via a particularly simple optimization scheme. We evaluate our approach empirically to highlight some of the proposed benefits on artificially generated data, as well as on real-world problems, and provide a Python software package comprising various implementations of primal and dual SVDD as well as our proposed ClusterSVDD.
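
    To make the unification concrete, the toy below combines k-means clustering with a per-cluster spherical data description, taking each cluster's radius as a distance quantile and flagging points outside their sphere. This is not the joint primal/dual ClusterSVDD optimization of the paper, only an illustration of describing each cluster by a sphere; the data, cluster count and quantile are arbitrary.

```python
# Toy combination of k-means with a per-cluster spherical data description
# (radius taken as a distance quantile). NOT the joint ClusterSVDD
# optimisation; only an illustration of sphere-per-cluster anomaly flagging.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=(0, 0), scale=0.3, size=(100, 2)),
               rng.normal(loc=(3, 3), scale=0.3, size=(100, 2)),
               np.array([[1.5, 1.5]])])       # one point far from both clusters

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels, centers = km.labels_, km.cluster_centers_

dist = np.linalg.norm(X - centers[labels], axis=1)
radii = np.array([np.quantile(dist[labels == c], 0.95) for c in range(2)])
is_anomaly = dist > radii[labels]

print("flagged points:", np.flatnonzero(is_anomaly))
```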

  11. Unified Software Solution for Efficient SPR Data Analysis in Drug Research

    PubMed Central

    Dahl, Göran; Steigele, Stephan; Hillertz, Per; Tigerström, Anna; Egnéus, Anders; Mehrle, Alexander; Ginkel, Martin; Edfeldt, Fredrik; Holdgate, Geoff; O’Connell, Nichole; Kappler, Bernd; Brodte, Annette; Rawlins, Philip B.; Davies, Gareth; Westberg, Eva-Lotta; Folmer, Rutger H. A.; Heyse, Stephan

    2016-01-01

    Surface plasmon resonance (SPR) is a powerful method for obtaining detailed molecular interaction parameters. Modern instrumentation with its increased throughput has enabled routine screening by SPR in hit-to-lead and lead optimization programs, and SPR has become a mainstream drug discovery technology. However, the processing and reporting of SPR data in drug discovery are typically performed manually, which is both time-consuming and tedious. Here, we present the workflow concept, design and experiences with a software module relying on a single, browser-based software platform for the processing, analysis, and reporting of SPR data. The efficiency of this concept lies in the immediate availability of end results: data are processed and analyzed upon loading the raw data file, allowing the user to immediately quality control the results. Once completed, the user can automatically report those results to data repositories for corporate access and quickly generate printed reports or documents. The software module has resulted in a very efficient and effective workflow through saved time and improved quality control. We discuss these benefits and show how this process defines a new benchmark in the drug discovery industry for the handling, interpretation, visualization, and sharing of SPR data. PMID:27789754

  12. Arms Trafficking and Colombia

    DTIC Science & Technology

    2003-01-01

    currently available to terrorists, insurgents, and other criminals are enormous. These groups have exploited and developed local, regional, and global ... Institute, a federally funded research and development center sponsored by the Office of the Secretary of Defense, the Joint Staff, the unified commands, and the defense agencies

  13. Another Initiative? Where Does it Fit? A Unifying Framework and an Integrated Infrastructure for Schools to Address Barriers to Learning and Promote Healthy Development

    ERIC Educational Resources Information Center

    Center for Mental Health in Schools at UCLA, 2005

    2005-01-01

    This report was developed to highlight the current state of affairs and illustrate the value of a unifying framework and integrated infrastructure for the many initiatives, projects, programs, and services schools pursue in addressing barriers to learning and promoting healthy development. Specifically, it highlights how initiatives can be…

  14. The Unified Floating Point Vector Coprocessor for Reconfigurable Hardware

    NASA Astrophysics Data System (ADS)

    Kathiara, Jainik

    There has been increased interest recently in using embedded cores on FPGAs. Many of the applications that make use of these cores involve floating-point operations. Due to the complexity and expense of floating-point hardware, these algorithms are usually converted to fixed-point operations or implemented using floating-point emulation in software. As the technology advances, more and more homogeneous computational resources and fixed-function embedded blocks are added to FPGAs, and hence implementation of floating-point hardware becomes a feasible option. In this research we have implemented a high-performance, autonomous floating-point vector coprocessor (FPVC) that works independently within an embedded processor system. We present a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. The hybrid vector/SIMD computational model of the FPVC results in greater overall performance for most applications along with improved peak performance compared to other approaches. By parameterizing the vector length and the number of vector lanes, we can design an application-specific FPVC and take optimal advantage of the FPGA fabric. For this research we have also begun designing a software library of computational kernels, each of which adapts the FPVC's configuration to provide maximal performance. The kernels implemented are from the area of linear algebra and include matrix multiplication as well as QR and Cholesky decomposition. We have demonstrated the operation of the FPVC on a Xilinx Virtex 5 using the embedded PowerPC.

  15. Social representations of health councilors regarding the right to health and citizenship.

    PubMed

    Moura, Luciana Melo de; Shimizu, Helena Eri

    2017-03-30

    To understand the structure of the social representations of the right to health and citizenship held by municipal health councilors. This is a qualitative study, based on the central nucleus theory of social representations, carried out in eight municipalities of the Integrated Region for the Development of the Surroundings of the Federal District, Brazil. The intentional sample consisted of municipal health councilors. Between June and December 2012, free-recall questionnaires were used, of which 68 were answered with the inducing term health and 64 with the inducing term citizenship. Data were analyzed using the EVOC software and Bardin's content analysis. The representational field of the right to health is associated with the idea of a universal law guaranteed by the Constitution and the Unified Health System (SUS), and that of citizenship is linked to rights and duties. The right to health is understood as a condition for attaining citizenship, and citizenship as social protection.

  16. Image-algebraic design of multispectral target recognition algorithms

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.

    1994-06-01

    In this paper, we discuss methods for multispectral ATR (Automated Target Recognition) of small targets that are sensed under suboptimal conditions, such as haze, smoke, and low light levels. In particular, we discuss our ongoing development of algorithms and software that effect intelligent object recognition by selecting ATR filter parameters according to ambient conditions. Our algorithms are expressed in terms of IA (image algebra), a concise, rigorous notation that unifies linear and nonlinear mathematics in the image processing domain. IA has been implemented on a variety of parallel computers, with preprocessors available for the Ada and FORTRAN languages. An image algebra C++ class library has recently been made available. Thus, our algorithms are both feasible implementationally and portable to numerous machines. Analyses emphasize the aspects of image algebra that aid the design of multispectral vision algorithms, such as parameterized templates that facilitate the flexible specification of ATR filters.

  17. MetalS2: a tool for the structural alignment of minimal functional sites in metal-binding proteins and nucleic acids.

    PubMed

    Andreini, Claudia; Cavallaro, Gabriele; Rosato, Antonio; Valasatava, Yana

    2013-11-25

    We developed a new software tool, MetalS(2), for the structural alignment of Minimal Functional Sites (MFSs) in metal-binding biological macromolecules. MFSs are 3D templates that describe the local environment around the metal(s) independently of the larger context of the macromolecular structure. Such local environment has a determinant role in tuning the chemical reactivity of the metal, ultimately contributing to the functional properties of the whole system. On our example data sets, MetalS(2) unveiled structural similarities that other programs for protein structure comparison do not consistently point out and overall identified a larger number of structurally similar MFSs. MetalS(2) supports the comparison of MFSs harboring different metals and/or with different nuclearity and is available both as a stand-alone program and a Web tool ( http://metalweb.cerm.unifi.it/tools/metals2/).

  18. Universal Quantum Computing with Arbitrary Continuous-Variable Encoding.

    PubMed

    Lau, Hoi-Kwan; Plenio, Martin B

    2016-09-02

    Implementing a qubit quantum computer in continuous-variable systems conventionally requires the engineering of specific interactions according to the encoding basis states. In this work, we present a unified formalism to conduct universal quantum computation with a fixed set of operations but arbitrary encoding. By storing a qubit in the parity of two or four qumodes, all computing processes can be implemented by basis state preparations, continuous-variable exponential-swap operations, and swap tests. Our formalism inherits the advantages that the quantum information is decoupled from collective noise, and logical qubits with different encodings can be brought to interact without decoding. We also propose a possible implementation of the required operations by using interactions that are available in a variety of continuous-variable systems. Our work separates the "hardware" problem of engineering quantum-computing-universal interactions, from the "software" problem of designing encodings for specific purposes. The development of quantum computer architecture could hence be simplified.


  20. Advanced Visualization of Experimental Data in Real Time Using LiveView3D

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.

  1. High-resolution coupled physics solvers for analysing fine-scale nuclear reactor design problems.

    PubMed

    Mahadevan, Vijay S; Merzari, Elia; Tautges, Timothy; Jain, Rajeev; Obabko, Aleksandr; Smith, Michael; Fischer, Paul

    2014-08-06

    An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is being investigated, to tightly couple neutron transport and thermal-hydraulics physics under the SHARP framework. Over several years, high-fidelity, validated mono-physics solvers with proven scalability on petascale architectures have been developed independently. Based on a unified component-based architecture, these existing codes can be coupled with a mesh-data backplane and a flexible coupling-strategy-based driver suite to produce a viable tool for analysts. The goal of the SHARP framework is to perform fully resolved coupled physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. The coupling methodology and software interfaces of the framework are presented, along with verification studies on two representative fast sodium-cooled reactor demonstration problems to prove the usability of the SHARP framework.
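
    The coupling strategy described here, independent single-physics solvers exchanging fields through a common backplane under a driver, can be sketched in a few lines. The Python fragment below is only a schematic Picard-style driver loop under assumed interfaces; solve_neutronics, solve_thermal_hydraulics, and the transfer call are hypothetical placeholders, not the SHARP API.

        import numpy as np

        def couple(solve_neutronics, solve_thermal_hydraulics, transfer,
                   power0, max_iters=50, tol=1e-6):
            """Fixed-point (Picard) driver: alternate the two solvers until the
            exchanged power field stops changing."""
            power = power0
            for it in range(max_iters):
                # Thermal-hydraulics uses the latest power distribution ...
                temperature = solve_thermal_hydraulics(transfer(power, target="th_mesh"))
                # ... and neutronics uses the resulting temperature feedback.
                new_power = solve_neutronics(transfer(temperature, target="neutronics_mesh"))
                change = np.linalg.norm(new_power - power) / np.linalg.norm(new_power)
                power = new_power
                if change < tol:
                    break
            return power, it, change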

  2. Experimental determination of spin-dependent electron density by joint refinement of X-ray and polarized neutron diffraction data.

    PubMed

    Deutsch, Maxime; Claiser, Nicolas; Pillet, Sébastien; Chumakov, Yurii; Becker, Pierre; Gillet, Jean Michel; Gillon, Béatrice; Lecomte, Claude; Souhassou, Mohamed

    2012-11-01

    New crystallographic tools were developed to access a more precise description of the spin-dependent electron density of magnetic crystals. The method combines experimental information coming from high-resolution X-ray diffraction (XRD) and polarized neutron diffraction (PND) in a unified model. A new algorithm that allows for a simultaneous refinement of the charge- and spin-density parameters against XRD and PND data is described. The resulting software MOLLYNX is based on the well known Hansen-Coppens multipolar model, and makes it possible to differentiate the electron spins. This algorithm is validated and demonstrated with a molecular crystal formed by a bimetallic chain, MnCu(pba)(H(2)O)(3)·2H(2)O, for which XRD and PND data are available. The joint refinement provides a more detailed description of the spin density than the refinement from PND data alone.
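
    The essence of a joint refinement is that one parameter set must reproduce two independent data sets at once. As a hedged illustration only, not the MOLLYNX model or its multipolar parameterization, the Python sketch below stacks the residuals from two hypothetical data sets into a single least-squares problem so that X-ray-like and neutron-like observations constrain shared parameters simultaneously.

        import numpy as np
        from scipy.optimize import least_squares

        def joint_residuals(params, xrd_obs, pnd_obs, model_xrd, model_pnd,
                            sigma_xrd, sigma_pnd):
            # One shared parameter vector, two weighted residual blocks.
            r1 = (model_xrd(params) - xrd_obs) / sigma_xrd
            r2 = (model_pnd(params) - pnd_obs) / sigma_pnd
            return np.concatenate([r1, r2])

        # Toy models standing in for structure-factor calculations (hypothetical).
        model_xrd = lambda p: p[0] * np.arange(1, 6) + p[1]
        model_pnd = lambda p: p[0] * np.ones(4) - 0.5 * p[2]

        rng = np.random.default_rng(1)
        true = np.array([1.2, 0.3, 0.8])
        xrd_obs = model_xrd(true) + rng.normal(0, 0.01, 5)
        pnd_obs = model_pnd(true) + rng.normal(0, 0.02, 4)

        fit = least_squares(joint_residuals, x0=np.ones(3),
                            args=(xrd_obs, pnd_obs, model_xrd, model_pnd, 0.01, 0.02))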

  3. High-resolution coupled physics solvers for analysing fine-scale nuclear reactor design problems

    PubMed Central

    Mahadevan, Vijay S.; Merzari, Elia; Tautges, Timothy; Jain, Rajeev; Obabko, Aleksandr; Smith, Michael; Fischer, Paul

    2014-01-01

    An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is being investigated, to tightly couple neutron transport and thermal-hydraulics physics under the SHARP framework. Over several years, high-fidelity, validated mono-physics solvers with proven scalability on petascale architectures have been developed independently. Based on a unified component-based architecture, these existing codes can be coupled with a mesh-data backplane and a flexible coupling-strategy-based driver suite to produce a viable tool for analysts. The goal of the SHARP framework is to perform fully resolved coupled physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. The coupling methodology and software interfaces of the framework are presented, along with verification studies on two representative fast sodium-cooled reactor demonstration problems to prove the usability of the SHARP framework. PMID:24982250

  4. gpuSPHASE-A shared memory caching implementation for 2D SPH using CUDA

    NASA Astrophysics Data System (ADS)

    Winkler, Daniel; Meister, Michael; Rezavand, Massoud; Rauch, Wolfgang

    2017-04-01

    Smoothed particle hydrodynamics (SPH) is a meshless Lagrangian method that has been successfully applied to computational fluid dynamics (CFD), solid mechanics and many other multi-physics problems. Using the method to solve transport phenomena in process engineering requires the simulation of several days to weeks of physical time. Given the high computational demand of CFD, such simulations in 3D would need years of computation time, so a reduction to a 2D domain is inevitable. In this paper gpuSPHASE, a new open-source 2D SPH solver implementation for graphics devices, is developed. It is optimized for simulations that must be executed at thousands of frames per second to be computed in reasonable time. A novel caching algorithm for Compute Unified Device Architecture (CUDA) shared memory is proposed and implemented. The software is validated and its performance is evaluated on the well-established dam-break test case.

  5. Unified Technical Concepts--Phase II. Expand Application to Industrial Technologies and Adult Education. Final Report.

    ERIC Educational Resources Information Center

    Technical Education Research Center, Waco, TX.

    A project was conducted to develop a laboratory-based instructional system in physics for two-year technician programs that emphasizes both the analogies between basic physical principles and the applications of the principles in modern technology. The Unified Technical Concepts (UTC) system that was developed is (1) a reorganization of physics…

  6. Initial English Language Teacher Education: Processes and Tensions towards a Unifying Curriculum in an Argentinian Province

    ERIC Educational Resources Information Center

    Banegas, Dario Luis

    2014-01-01

    In this reflective piece I discuss the process of developing a new unifying initial English language teacher education curriculum in the province of Chubut (Argentina). Trainers and trainees from different institutions were called to work on it with the aim of democratising curriculum development and enhancing involvement among agents. In the…

  7. Unified implementation of the reference architecture : concept of operations.

    DOT National Transportation Integrated Search

    2015-10-19

    This document describes the Concept of Operations (ConOps) for the Unified Implementation of the Reference Architecture, located in Southeast Michigan, which supports connected vehicle research and development. This ConOps describes the current state...

  8. ObsPy: A Python toolbox for seismology - Sustainability, New Features, and Applications

    NASA Astrophysics Data System (ADS)

    Krischer, L.; Megies, T.; Sales de Andrade, E.; Barsch, R.; MacCarthy, J.

    2016-12-01

    ObsPy (https://www.obspy.org) is a community-driven, open-source project dedicated to offering a bridge for seismology into the scientific Python ecosystem. Amongst other things, it provides read and write support, through a unified interface, for essentially every commonly used data format in seismology, covering waveform data as well as station and event meta information; a signal processing toolbox tuned to the specific needs of seismologists; integrated access to the largest data centers, web services, and databases; and wrappers around third-party codes like libmseed and evalresp. Using ObsPy enables users to take advantage of the vast scientific ecosystem that has developed around Python. In contrast to many other programming languages and tools, Python is simple enough to enable an exploratory and interactive coding style desired by many scientists. At the same time it is a full-fledged programming language usable by software engineers to build complex and large programs. This combination makes it very suitable for use in seismology, where research code often must be translated to stable and production-ready environments, especially in the age of big data. ObsPy has seen constant development for more than six years and enjoys a large rate of adoption in the seismological community with thousands of users. Successful applications include time-dependent and rotational seismology, big data processing, event relocations, and synthetic studies about attenuation kernels and full-waveform inversions, to name a few examples. Additionally, it has sparked the development of several more specialized packages, slowly building a modern seismological ecosystem around it. We will present a short overview of the capabilities of ObsPy and point out several representative use cases and more specialized software built around ObsPy. Additionally, we will discuss new and upcoming features, as well as the sustainability of open-source scientific software.
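
    As a brief illustration of the unified interface described above, the snippet below fetches waveforms from a public FDSN data center and applies a basic processing chain; the network, station, and time window are arbitrary examples, and availability of that particular data is not guaranteed.

        from obspy import UTCDateTime
        from obspy.clients.fdsn import Client

        client = Client("IRIS")                      # one of the integrated data centers
        t0 = UTCDateTime("2011-03-11T05:46:00")      # example start time
        st = client.get_waveforms("IU", "ANMO", "00", "BHZ", t0, t0 + 3600)

        st.detrend("demean")                         # unified processing interface
        st.filter("bandpass", freqmin=0.01, freqmax=0.1)
        st.write("example.mseed", format="MSEED")    # same call for any supported format
        print(st)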

  9. The Unified Behavior Framework for the Simulation of Autonomous Agents

    DTIC Science & Technology

    2015-03-01

    1980s, researchers have designed a variety of robot control architectures intending to imbue robots with some degree of autonomy. A recently developed ... The development of autonomy has ... room for research by utilizing methods like simulation and modeling that consume less time and fewer monetary resources. A recently developed reactive

  10. The HEASARC in 2013 and Beyond: NuSTAR, Astro-H, NICER..

    NASA Astrophysics Data System (ADS)

    Drake, Stephen A.; Smale, A. P.; McGlynn, T. A.; Arnaud, K. A.

    2013-04-01

    The High Energy Astrophysics Archival Research Center or HEASARC (http://heasarc.gsfc.nasa.gov/) is in its third decade as the NASA astrophysics discipline node supporting multi-mission cosmic X-ray and gamma-ray astronomy research. It provides a unified archive and software structure aimed at 'legacy' missions such as Einstein, EXOSAT, ROSAT and RXTE; contemporary missions such as Fermi, Swift, Suzaku and Chandra; and upcoming missions such as NuSTAR, Astro-H and NICER. The HEASARC's high-energy astronomy archive has grown so that it presently contains 45 TB of data from 28 orbital missions. The HEASARC is the designated archive which supports NASA's Physics of the Cosmos theme (http://pcos.gsfc.nasa.gov/). We discuss some of the upcoming new initiatives and developments for the HEASARC, including the arrival of public data from the hard X-ray imaging NuSTAR mission in the summer of 2013, and the ongoing preparations to support the JAXA/NASA Astro-H mission and the NASA MoO Neutron Star Interior Composition Explorer (NICER), which are expected to become operational in 2015-2016. We also highlight some of the new software capabilities of the HEASARC, such as Xamin, a next-generation archive interface which will eventually supersede Browse, and the latest update of XSPEC (v 12.8.0).

  11. 3D Fluid-Structure Interaction Simulation of Aortic Valves Using a Unified Continuum ALE FEM Model.

    PubMed

    Spühler, Jeannette H; Jansson, Johan; Jansson, Niclas; Hoffman, Johan

    2018-01-01

    Due to advances in medical imaging, computational fluid dynamics algorithms and high performance computing, computer simulation is developing into an important tool for understanding the relationship between cardiovascular diseases and intraventricular blood flow. The field of cardiac flow simulation is challenging and highly interdisciplinary. We apply a computational framework for automated solutions of partial differential equations using Finite Element Methods where any mathematical description can be translated directly to code. This allows us to develop a cardiac model where specific properties of the heart such as fluid-structure interaction of the aortic valve can be added in a modular way without extensive efforts. In previous work, we simulated the blood flow in the left ventricle of the heart. In this paper, we extend this model by placing prototypes of both a native and a mechanical aortic valve in the outflow region of the left ventricle. Numerical simulation of the blood flow in the vicinity of the valve offers the possibility to improve the treatment of aortic valve diseases such as aortic stenosis (narrowing of the valve opening) or regurgitation (leaking) and to optimize the design of prosthetic heart valves in a controlled and specific way. The fluid-structure interaction and contact problem are formulated in a unified continuum model using the conservation laws for mass and momentum and a phase function. The discretization is based on an Arbitrary Lagrangian-Eulerian space-time finite element method with streamline diffusion stabilization, and it is implemented in the open-source software Unicorn which shows near-optimal scaling up to thousands of cores. Computational results are presented to demonstrate the capability of our framework.
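
    To make the formulation referred to above concrete, a common way to write a unified continuum fluid-structure model is sketched below in LaTeX; the exact constitutive choices and stabilization terms used in Unicorn may differ, so this should be read as an illustrative form rather than the implemented equations.

        % Unified continuum FSI (illustrative form): one momentum/mass balance over the
        % whole domain, with a phase function theta selecting the constitutive law.
        \[
          \rho\bigl(\partial_t u + (u\cdot\nabla)u\bigr) - \nabla\cdot\sigma = f, \qquad
          \nabla\cdot u = 0,
        \]
        \[
          \partial_t\theta + u\cdot\nabla\theta = 0, \qquad
          \sigma = \theta\,\sigma_{\mathrm{fluid}} + (1-\theta)\,\sigma_{\mathrm{solid}}.
        \]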

  12. 3D Fluid-Structure Interaction Simulation of Aortic Valves Using a Unified Continuum ALE FEM Model

    PubMed Central

    Spühler, Jeannette H.; Jansson, Johan; Jansson, Niclas; Hoffman, Johan

    2018-01-01

    Due to advances in medical imaging, computational fluid dynamics algorithms and high performance computing, computer simulation is developing into an important tool for understanding the relationship between cardiovascular diseases and intraventricular blood flow. The field of cardiac flow simulation is challenging and highly interdisciplinary. We apply a computational framework for automated solutions of partial differential equations using Finite Element Methods where any mathematical description can be translated directly to code. This allows us to develop a cardiac model where specific properties of the heart such as fluid-structure interaction of the aortic valve can be added in a modular way without extensive efforts. In previous work, we simulated the blood flow in the left ventricle of the heart. In this paper, we extend this model by placing prototypes of both a native and a mechanical aortic valve in the outflow region of the left ventricle. Numerical simulation of the blood flow in the vicinity of the valve offers the possibility to improve the treatment of aortic valve diseases such as aortic stenosis (narrowing of the valve opening) or regurgitation (leaking) and to optimize the design of prosthetic heart valves in a controlled and specific way. The fluid-structure interaction and contact problem are formulated in a unified continuum model using the conservation laws for mass and momentum and a phase function. The discretization is based on an Arbitrary Lagrangian-Eulerian space-time finite element method with streamline diffusion stabilization, and it is implemented in the open-source software Unicorn which shows near-optimal scaling up to thousands of cores. Computational results are presented to demonstrate the capability of our framework. PMID:29713288

  13. Developing an integrated analysis approach to exoplanetary spectroscopy

    NASA Astrophysics Data System (ADS)

    Waldmann, Ingo

    2015-07-01

    Analysing the atmospheres of Earth and SuperEarth type planets for possible biomarkers will push us to the limits of current and future instrumentation. As the field matures, we must also upgrade our data analysis and interpretation techniques from their "ad-hoc" beginnings to a solid statistical foundation. This is particularly important for the optimal exploitation of future instruments, such as JWST and E-ELT. At the limits of low signal-to-noise, we are prone to two sources of bias: 1) prior selection in the data reduction; 2) prior constraints on the spectral retrieval. A unified set of tools addressing both points is required. To de-trend low S/N, correlated data, we have demonstrated blind-source-separation (BSS) machine learning techniques to be a significant step forward, both in photometry and spectroscopy. BSS finds applications in fields ranging from medical imaging to cosmology. Applied to exoplanets, it allows us to resolve de-trending biases and demonstrate consistency between data sets that were previously found to be highly discrepant and subject to much debate. For the interpretation of the data, we developed a novel atmospheric retrieval suite, Tau-REx. Tau-REx implements unbiased prior selection via custom-built pattern recognition software. A full subsequent mapping of the likelihood space (using cluster computing) allows us, for the first time, to fully study degeneracies and biases in emission and transmission spectroscopy. The development of a coherent end-to-end infrastructure is paramount to the characterisation of ever smaller and fainter foreign worlds. In this conference, I will discuss what we have learned from current observations and the need for unified statistical frameworks in the era of JWST and E-ELT.
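
    To give a flavour of the blind-source-separation step mentioned above, the following Python sketch unmixes a transit-like signal from an instrumental systematic using scikit-learn's FastICA; this is a generic demonstration on synthetic light curves under assumed mixing coefficients, not the authors' pipeline.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 1000)

        transit = np.where(np.abs(t - 0.5) < 0.04, -1.0, 0.0)   # box-shaped dip
        systematic = np.sin(2.0 * np.pi * 9.0 * t)              # instrument trend

        # Three "detector channels", each a different mixture plus independent noise.
        X = np.column_stack([
            1.0 * transit + 0.7 * systematic + 0.05 * rng.normal(size=t.size),
            0.4 * transit + 1.0 * systematic + 0.05 * rng.normal(size=t.size),
            0.9 * transit - 0.6 * systematic + 0.05 * rng.normal(size=t.size),
        ])

        ica = FastICA(n_components=2, random_state=0)
        sources = ica.fit_transform(X)   # columns approximate the independent sources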

  14. FREQ: A computational package for multivariable system loop-shaping procedures

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.; Armstrong, Ernest S.

    1989-01-01

    Many approaches in the field of linear, multivariable, time-invariant systems analysis and controller synthesis employ loop-shaping procedures wherein design parameters are chosen to shape frequency-response singular value plots of selected transfer matrices. A software package, FREQ, is documented for computing, within one unified framework, many of the most used multivariable transfer matrices for both continuous and discrete systems. The matrices are evaluated at user-selected frequencies, and their singular values are plotted against frequency. Example computations are presented to demonstrate the use of the FREQ code.
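
    The core computation behind such loop-shaping plots, evaluating a transfer matrix on a frequency grid and taking its singular values, is compact enough to sketch. The example below is a generic Python/NumPy illustration with an arbitrary two-state system, not the FREQ code itself.

        import numpy as np

        # Arbitrary example state-space data (A, B, C, D); G(jw) = C (jwI - A)^-1 B + D.
        A = np.array([[0.0, 1.0], [-2.0, -0.5]])
        B = np.array([[0.0], [1.0]])
        C = np.array([[1.0, 0.0], [0.0, 1.0]])
        D = np.zeros((2, 1))

        freqs = np.logspace(-2, 2, 200)              # rad/s grid
        I = np.eye(A.shape[0])
        sigma = np.array([
            np.linalg.svd(C @ np.linalg.solve(1j * w * I - A, B) + D, compute_uv=False)
            for w in freqs
        ])                                           # singular values vs frequency
        print(sigma.max(), sigma.min())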

  15. Toughen up.

    PubMed

    Donaldson, D; Mayes, M

    1999-10-01

    Within six months, AHS needed to integrate three recently merged hospitals running on disparate hardware and software systems into one unified system. AHS partnered with DataStudy Inc., Parsippany, N.J., and formed a team to address the specific enterprise resource planning needs of this healthcare organization. The implementation team completed the project within the six-month time frame and incorporated functionality that went beyond the initial specifications for the project. "To maximize the return on the always substantial ERP investment, healthcare executives must be aware of the many pitfalls waiting to derail every well-intentioned implementation."

  16. System and method for memory allocation in a multiclass memory system

    DOEpatents

    Loh, Gabriel; Meswani, Mitesh; Ignatowski, Michael; Nutter, Mark

    2016-06-28

    A system for memory allocation in a multiclass memory system includes a processor coupleable to a plurality of memories sharing a unified memory address space, and a library store to store a library of software functions. The processor identifies a type of a data structure in response to a memory allocation function call to the library for allocating memory to the data structure. Using the library, the processor allocates portions of the data structure among multiple memories of the multiclass memory system based on the type of the data structure.

  17. New Zealand's National Landslide Database

    NASA Astrophysics Data System (ADS)

    Rosser, B.; Dellow, S.; Haubrook, S.; Glassey, P.

    2016-12-01

    Since 1780, landslides have caused an average of about 3 deaths a year in New Zealand and have cost the economy an average of at least NZ$250M/a (0.1% GDP). To understand the risk posed by landslide hazards to society, a thorough knowledge of where, when and why different types of landslides occur is vital. The main objective for establishing the database was to provide a centralised national-scale, publicly available database to collate landslide information that could be used for landslide hazard and risk assessment. Design of a national landslide database for New Zealand required consideration of both existing landslide data stored in a variety of digital formats, and future data, yet to be collected. Pre-existing databases were developed and populated with data reflecting the needs of the landslide or hazard project, and the database structures of the time. Bringing these data into a single unified database required a new structure capable of storing and delivering data at a variety of scales and accuracy and with different attributes. A "unified data model" was developed to enable the database to hold old and new landslide data irrespective of scale and method of capture. The database contains information on landslide locations and where available: 1) the timing of landslides and the events that may have triggered them; 2) the type of landslide movement; 3) the volume and area; 4) the source and debris tail; and 5) the impacts caused by the landslide. Information from a variety of sources including aerial photographs (and other remotely sensed data), field reconnaissance and media accounts has been collated and is presented for each landslide along with metadata describing the data sources and quality. There are currently nearly 19,000 landslide records in the database that include point locations, polygons of landslide source and deposit areas, and linear features. Several large datasets are awaiting upload which will bring the total number of landslides to over 100,000. The geo-spatial database is publicly available via the Internet. The software components, including the underlying database (PostGIS), the Web Map Server (GeoServer) and the web application, use open-source software. The hope is that others will add relevant information to the database as well as download the data contained in it.

  18. BRIDG: a domain information model for translational and clinical protocol-driven research.

    PubMed

    Becnel, Lauren B; Hastak, Smita; Ver Hoef, Wendy; Milius, Robert P; Slack, MaryAnn; Wold, Diane; Glickman, Michael L; Brodsky, Boris; Jaffe, Charles; Kush, Rebecca; Helton, Edward

    2017-09-01

    It is critical to integrate and analyze data from biological, translational, and clinical studies with data from health systems; however, electronic artifacts are stored in thousands of disparate systems that are often unable to readily exchange data. To facilitate meaningful data exchange, a model that presents a common understanding of biomedical research concepts and their relationships with health care semantics is required. The Biomedical Research Integrated Domain Group (BRIDG) domain information model fulfills this need. Software systems created from BRIDG have shared meaning "baked in," enabling interoperability among disparate systems. For nearly 10 years, the Clinical Data Standards Interchange Consortium, the National Cancer Institute, the US Food and Drug Administration, and Health Level 7 International have been key stakeholders in developing BRIDG. BRIDG is an open-source Unified Modeling Language-class model developed through use cases and harmonization with other models. With its 4+ releases, BRIDG includes clinical and now translational research concepts in its Common, Protocol Representation, Study Conduct, Adverse Events, Regulatory, Statistical Analysis, Experiment, Biospecimen, and Molecular Biology subdomains. The model is a Clinical Data Standards Interchange Consortium, Health Level 7 International, and International Standards Organization standard that has been utilized in national and international standards-based software development projects. It will continue to mature and evolve in the areas of clinical imaging, pathology, ontology, and vocabulary support. BRIDG 4.1.1 and prior releases are freely available at https://bridgmodel.nci.nih.gov . © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  19. The pLISA project in ASTERICS

    NASA Astrophysics Data System (ADS)

    De Bonis, Giulia; Bozza, Cristiano

    2017-03-01

    In the framework of Horizon 2020, the European Commission approved the ASTERICS initiative (ASTronomy ESFRI and Research Infrastructure CluSter) to collect knowledge and experiences from astronomy, astrophysics and particle physics and foster synergies among existing research infrastructures and scientific communities, hence paving the way for future ones. ASTERICS aims at producing a common set of tools and strategies to be applied in Astronomy ESFRI facilities. In particular, it will target the so-called multi-messenger approach to combine information from optical and radio telescopes, photon counters and neutrino telescopes. pLISA is a software tool under development in ASTERICS to help and promote machine learning as a unified approach to multivariate analysis of astrophysical data and signals. The library will offer a collection of classification parameters, estimators, classes and methods to be linked and used in reconstruction programs (and possibly also extended), to characterize events in terms of particle identification and energy. The pLISA library aims at offering the software infrastructure for applications developed inside different experiments and has been designed with an effort to extrapolate general, physics-related estimators from the specific features of the data model related to each particular experiment. pLISA is oriented towards parallel computing architectures, with awareness of the opportunity of using GPUs as accelerators demanding specifically optimized algorithms and to reduce the costs of processing hardware requested for the reconstruction tasks. Indeed, a fast (ideally, real-time) reconstruction can open the way for the development or improvement of alert systems, typically required by multi-messenger search programmes among the different experimental facilities involved in ASTERICS.

  20. Methodology for assessing the safety of Hydrogen Systems: HyRAM 1.1 technical reference manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groth, Katrina; Hecht, Ethan; Reynolds, John Thomas

    The HyRAM software toolkit provides a basis for conducting quantitative risk assessment and consequence modeling for hydrogen infrastructure and transportation systems. HyRAM is designed to facilitate the use of state-of-the-art science and engineering models to conduct robust, repeatable assessments of hydrogen safety, hazards, and risk. HyRAM is envisioned as a unifying platform combining validated, analytical models of hydrogen behavior, a standardized, transparent QRA approach, and engineering models and generic data for hydrogen installations. HyRAM is being developed at Sandia National Laboratories for the U.S. Department of Energy to increase access to technical data about hydrogen safety and to enable the use of that data to support development and revision of national and international codes and standards. This document provides a description of the methodology and models contained in the HyRAM version 1.1. HyRAM 1.1 includes generic probabilities for hydrogen equipment failures, probabilistic models for the impact of heat flux on humans and structures, and computationally and experimentally validated analytical and first order models of hydrogen release and flame physics. HyRAM 1.1 integrates deterministic and probabilistic models for quantifying accident scenarios, predicting physical effects, and characterizing hydrogen hazards (thermal effects from jet fires, overpressure effects from deflagrations), and assessing impact on people and structures. HyRAM is a prototype software in active development and thus the models and data may change. This report will be updated at appropriate developmental intervals.

  1. A unified computational model of the development of object unity, object permanence, and occluded object trajectory perception.

    PubMed

    Franz, A; Triesch, J

    2010-12-01

    The perception of the unity of objects, their permanence when out of sight, and the ability to perceive continuous object trajectories even during occlusion belong to the first and most important capacities that infants have to acquire. Despite much research a unified model of the development of these abilities is still missing. Here we make an attempt to provide such a unified model. We present a recurrent artificial neural network that learns to predict the motion of stimuli occluding each other and that develops representations of occluded object parts. It represents completely occluded, moving objects for several time steps and successfully predicts their reappearance after occlusion. This framework allows us to account for a broad range of experimental data. Specifically, the model explains how the perception of object unity develops, the role of the width of the occluders, and it also accounts for differences between data for moving and stationary stimuli. We demonstrate that these abilities can be acquired by learning to predict the sensory input. The model makes specific predictions and provides a unifying framework that has the potential to be extended to other visual event categories. Copyright © 2010 Elsevier Inc. All rights reserved.

  2. A Unified Framework for Analyzing and Designing for Stationary Arterial Networks

    DOT National Transportation Integrated Search

    2017-05-17

    This research aims to develop a unified theoretical and simulation framework for analyzing and designing signals for stationary arterial networks. Existing traffic flow models used in design and analysis of signal control strategies are either too si...

  3. Toward a unified account of comprehension and production in language development.

    PubMed

    McCauley, Stewart M; Christiansen, Morten H

    2013-08-01

    Although Pickering & Garrod (P&G) argue convincingly for a unified system for language comprehension and production, they fail to explain how such a system might develop. Using a recent computational model of language acquisition as an example, we sketch a developmental perspective on the integration of comprehension and production. We conclude that only through development can we fully understand the intertwined nature of comprehension and production in adult processing.

  4. Final Report: CNC Micromachines LDRD No.10793

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    JOKIEL JR., BERNHARD; BENAVIDES, GILBERT L.; BIEG, LOTHAR F.

    2003-04-01

    The three-year LDRD ''CNC Micromachines'' was successfully completed at the end of FY02. The project had four major breakthroughs in spatial motion control in MEMS: (1) A unified method for designing scalable planar and spatial on-chip motion control systems was developed. The method relies on the use of parallel kinematic mechanisms (PKMs) that when properly designed provide different types of motion on-chip without the need for post-fabrication assembly. (2) A new type of actuator was developed--the linear stepping track drive (LSTD) that provides open loop linear position control that is scalable in displacement, output force and step size. Several versions of this actuator were designed, fabricated and successfully tested. (3) Different versions of XYZ translation only and PTT motion stages were designed, successfully fabricated and successfully tested demonstrating absolutely that on-chip spatial motion control systems are not only possible, but are a reality. (4) Control algorithms, software and infrastructure based on MATLAB were created and successfully implemented to drive the XYZ and PTT motion platforms in a controlled manner. The control software is capable of reading an M/G-code machine tool language file, decoding the instructions, and correctly calculating and applying position and velocity trajectories to the motion device's linear drive inputs to position the device platform along the trajectory as specified by the input file. A full and detailed account of design methodology, theory and experimental results (failures and successes) is provided.
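
    The final point above, reading an M/G-code file and turning it into motion commands, can be illustrated with a very small interpreter. The Python sketch below handles only straight-line G00/G01 moves with X/Y/Z/F words and is purely illustrative of the idea; it is not the MATLAB control software described in the report.

        import re

        def parse_gcode(lines):
            """Yield (x, y, z, feed) targets from simple G00/G01 linear-move blocks."""
            pos = {"X": 0.0, "Y": 0.0, "Z": 0.0, "F": 0.0}
            word = re.compile(r"([GXYZF])(-?\d+\.?\d*)")
            for line in lines:
                words = dict(word.findall(line.upper()))
                g = words.get("G")
                if g is not None and int(float(g)) in (0, 1):    # linear moves only
                    for axis in ("X", "Y", "Z", "F"):
                        if axis in words:
                            pos[axis] = float(words[axis])
                    yield pos["X"], pos["Y"], pos["Z"], pos["F"]

        program = ["G01 X1.0 Y0.0 F100", "G01 X1.0 Y2.5", "G00 X0 Y0 Z0.5"]
        for target in parse_gcode(program):
            print("move to", target)   # a real driver would generate a velocity profile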

  5. A unified theory of development: a dialectic integration of nature and nurture.

    PubMed

    Sameroff, Arnold

    2010-01-01

    The understanding of nature and nurture within developmental science has evolved with alternating ascendance of one or the other as primary explanations for individual differences in life course trajectories of success or failure. A dialectical perspective emphasizing the interconnectedness of individual and context is suggested to interpret the evolution of developmental science in similar terms to those necessary to explain the development of individual children. A unified theory of development is proposed to integrate personal change, context, regulation, and representational models of development.

  6. Textual data compression in computational biology: a synopsis.

    PubMed

    Giancarlo, Raffaele; Scaturro, Davide; Utro, Filippo

    2009-07-01

    Textual data compression, and the associated techniques coming from information theory, are often perceived as being of interest for data communication and storage. However, they are also deeply related to classification and data mining and analysis. In recent years, a substantial effort has been made for the application of textual data compression techniques to various computational biology tasks, ranging from storage and indexing of large datasets to comparison and reverse engineering of biological networks. The main focus of this review is on a systematic presentation of the key areas of bioinformatics and computational biology where compression has been used. When possible, a unifying organization of the main ideas and techniques is also provided. It goes without saying that most of the research results reviewed here offer software prototypes to the bioinformatics community. The Supplementary Material provides pointers to software and benchmark datasets for a range of applications of broad interest. In addition to providing references to software, the Supplementary Material also gives a brief presentation of some fundamental results and techniques related to this paper. It is at: http://www.math.unipa.it/~raffaele/suppMaterial/compReview/
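
    One widely cited compression-based similarity measure from this literature is the normalized compression distance (NCD), which needs only an off-the-shelf compressor. The short Python sketch below, intended purely as an illustration of the general idea rather than any specific method from the review, computes NCD between two sequences with zlib.

        import zlib

        def clen(data: bytes) -> int:
            """Compressed length, a computable stand-in for Kolmogorov complexity."""
            return len(zlib.compress(data, 9))

        def ncd(x: bytes, y: bytes) -> float:
            """Normalized compression distance: near 0 for similar strings, near 1 for unrelated ones."""
            cx, cy, cxy = clen(x), clen(y), clen(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        a = b"ACGTACGTACGTACGTACGT" * 20
        b = b"ACGTACGTACGAACGTACGT" * 20      # nearly identical to a
        c = b"TTGGCCAATTGGCCAATTGG" * 20      # unrelated repeat
        print(ncd(a, b), ncd(a, c))           # the first distance should be smaller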

  7. Sandia/Stanford Unified Creep Plasticity Damage Model for ANSYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierce, David M.; Vianco, Paul T.; Fossum, Arlo F.

    2006-09-03

    A unified creep plasticity (UCP) model was developed, based upon the time-dependent and time-independent deformation properties of the 95.5Sn-3.9Ag-0.6Cu (wt.%) solder that were measured at Sandia. Then, a damage parameter, D, was added to the equation to develop the unified creep plasticity damage (UCPD) model. The parameter D was parameterized using data obtained at Sandia from isothermal fatigue experiments on a double-lap shear test. The software was validated against a BGA solder joint exposed to thermal cycling. The UCPD model was implemented in the ANSYS finite element code as a subroutine; the software is the subroutine for ANSYS 8.1.

  8. Technical writing practically unified through industry

    NASA Technical Reports Server (NTRS)

    Houston, L. S.

    1981-01-01

    General background details in the development of a university level technical writing program, based upon the writing tasks of the student's occupations, are summarized. Objectives and methods for unifying the courses of study with the needs of industry are discussed. Four academic course divisions, Industries Technologies, in which preparation and training are offered are: Animal, Horticulture, Agriculture, and Agricultural Business. Occupational competence is cited as the main goal for these programs in which technical writing is to be practically unified through industry. Course descriptions are also provided.

  9. The feasibility of using UML to compare the impact of different brands of computer system on the clinical consultation.

    PubMed

    Kumarapeli, Pushpa; de Lusignan, Simon; Koczan, Phil; Jones, Beryl; Sheeler, Ian

    2007-01-01

    UK general practice is universally computerised, with computers used in the consulting room at the point of care. Practices use a range of different brands of computer system, which have developed organically to meet the needs of general practitioners and health service managers. Unified Modelling Language (UML) is a standard modelling and specification notation widely used in software engineering. The objective was to examine the feasibility of using UML notation to compare the impact of different brands of general practice computer system on the clinical consultation. Multi-channel video recordings of simulated consultation sessions were made on three different clinical computer systems in common use (EMIS, iSOFT Synergy and IPS Vision). User action recorder software recorded time logs of keyboard and mouse use, and pattern recognition software captured non-verbal communication. The outputs of these were used to create UML class and sequence diagrams for each consultation. We compared 'definition of the presenting problem' and 'prescribing', as these tasks were present in all the consultations analysed. Class diagrams identified the entities involved in the clinical consultation. Sequence diagrams identified common elements of the consultation (such as prescribing) and enabled comparisons to be made between the different brands of computer system. The clinician and computer system interaction varied greatly between the different brands. UML sequence diagrams are useful in identifying common tasks in the clinical consultation, and for contrasting the impact of the different brands of computer system on the clinical consultation. Further research is needed to see if patterns demonstrated in this pilot study are consistently displayed.

  10. EarthServer - an FP7 project to enable the web delivery and analysis of 3D/4D models

    NASA Astrophysics Data System (ADS)

    Laxton, John; Sen, Marcus; Passmore, James

    2013-04-01

    EarthServer aims at open access and ad-hoc analytics on big Earth Science data, based on the OGC geoservice standards Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS). The WCS model defines "coverages" as a unifying paradigm for multi-dimensional raster data, point clouds, meshes, etc., thereby addressing a wide range of Earth Science data including 3D/4D models. WCPS allows declarative SQL-style queries on coverages. The project is developing a pilot implementing these standards, and will also investigate the use of GeoSciML to describe coverages. Integration of WCPS with XQuery will in turn allow coverages to be queried in combination with their metadata and GeoSciML description. The unified service will support navigation, extraction, aggregation, and ad-hoc analysis on coverage data from SQL. Clients will range from mobile devices to high-end immersive virtual reality, and will enable 3D model visualisation using web browser technology coupled with developing web standards. EarthServer is establishing open-source client and server technology intended to be scalable to Petabyte/Exabyte volumes, based on distributed processing, supercomputing, and cloud virtualization. Implementation will be based on existing, previously developed rasdaman server technology. Services using rasdaman technology are being installed serving the atmospheric, oceanographic, geological, cryospheric, planetary and general earth observation communities. The geology service (http://earthserver.bgs.ac.uk/) is being provided by BGS and at present includes satellite imagery, superficial thickness data, onshore DTMs and 3D models for the Glasgow area. It is intended to extend the data sets available to include 3D voxel models. Use of the WCPS standard allows queries to be constructed against single or multiple coverages. For example, on a single coverage, data for a particular area or data within a particular range of pixel values can be selected. Queries on multiple surfaces can be constructed to calculate, for example, the thickness between two surfaces in a 3D model or the depth from ground surface to the top of a particular geologic unit. In the first version of the service a simple interface showing some example queries has been implemented in order to show the potential of the technologies. The project aims to develop the services available in light of user feedback, both in terms of the data available, the functionality and the interface. User feedback on the services guides the software and standards development aspects of the project, leading to enhanced versions of the software which will be implemented in upgraded versions of the services during the lifetime of the project.
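
    To indicate what such a query looks like in practice, the hedged Python sketch below sends a WCPS expression to a WCS endpoint over HTTP. The endpoint URL, coverage names, and request parameters follow common rasdaman/petascope conventions but are placeholders and assumptions here, not a documented interface of the BGS service.

        import requests

        # Hypothetical endpoint and coverage names; real deployments will differ.
        ENDPOINT = "http://example.org/rasdaman/ows"

        # WCPS: subtract two surface coverages to get a thickness grid, returned as CSV.
        wcps_query = (
            'for top in (top_surface_dtm), base in (base_surface_dtm) '
            'return encode(top - base, "csv")'
        )

        resp = requests.get(ENDPOINT, params={
            "service": "WCS",
            "version": "2.0.1",
            "request": "ProcessCoverages",
            "query": wcps_query,
        }, timeout=60)
        resp.raise_for_status()
        print(resp.text[:200])   # first part of the encoded thickness values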

  11. GEM1: First-year modeling and IT activities for the Global Earthquake Model

    NASA Astrophysics Data System (ADS)

    Anderson, G.; Giardini, D.; Wiemer, S.

    2009-04-01

    GEM is a public-private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) to build an independent standard for modeling and communicating earthquake risk worldwide. GEM is aimed at providing authoritative, open information about seismic risk and decision tools to support mitigation. GEM will also raise risk awareness and help post-disaster economic development, with the ultimate goal of reducing the toll of future earthquakes. GEM will provide a unified set of seismic hazard, risk, and loss modeling tools based on a common global IT infrastructure and consensus standards. These tools, systems, and standards will be developed in partnership with organizations around the world, with coordination by the GEM Secretariat and its Secretary General. GEM partners will develop a variety of global components, including a unified earthquake catalog, fault database, and ground motion prediction equations. To ensure broad representation and community acceptance, GEM will include local knowledge in all modeling activities, incorporate existing detailed models where possible, and independently test all resulting tools and models. When completed in five years, GEM will have a versatile, openly accessible modeling environment that can be updated as necessary, and will provide the global standard for seismic hazard, risk, and loss models to government ministers, scientists and engineers, financial institutions, and the public worldwide. GEM is now underway with key support provided by private sponsors (Munich Reinsurance Company, Zurich Financial Services, AIR Worldwide Corporation, and Willis Group Holdings); countries including Belgium, Germany, Italy, Singapore, Switzerland, and Turkey; and groups such as the European Commission. The GEM Secretariat has been selected by the OECD and will be hosted at the Eucentre at the University of Pavia in Italy; the Secretariat is now formalizing the creation of the GEM Foundation. Some of GEM's global components are in the planning stages, such as the development of a unified active fault database and earthquake catalog. The flagship activity of GEM's first year is GEM1, a focused pilot project to develop GEM's first hazard and risk modeling products and initial IT infrastructure, starting in January 2009 and ending in March 2010. GEM1 will provide core capabilities for the present and key knowledge for future development of the full GEM computing environment and product set. We will build GEM1 largely using existing tools and datasets, connected through a unified IT infrastructure, in order to bring GEM's initial capabilities online as rapidly as possible. The Swiss Seismological Service at ETH-Zurich is leading the GEM1 effort in cooperation with partners around the world. We anticipate that GEM1's products will include: • A global compilation of regional seismic source zone models in one or more common representations • Global synthetic earthquake catalogs for use in hazard calculations • Initial set of regional and global catalogues for validation • Global hazard models in map and database forms • First compilation of global vulnerabilities and fragilities • Tools for exposure and loss assessment • Validation of results and software for existing risk assessment tools to be used in future GEM stages • Demonstration risk scenarios for target cities • First version of GEM IT infrastructure All these products will be made freely available to the greatest extent possible.
For more information on GEM and GEM1, please visit http://www.globalquakemodel.org.

  12. A unified algorithm for predicting partition coefficients for PBPK modeling of drugs and environmental chemicals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peyret, Thomas; Poulin, Patrick; Krishnan, Kannan, E-mail: kannan.krishnan@umontreal.ca

    The algorithms in the literature for predicting the tissue:blood PC (P_tb) of environmental chemicals and the tissue:plasma PC based on total (K_p) or unbound (K_pu) concentration of drugs differ in their consideration of binding to hemoglobin, plasma proteins and charged phospholipids. The objective of the present study was to develop a unified algorithm such that P_tb, K_p and K_pu for both drugs and environmental chemicals could be predicted. The development of the unified algorithm was accomplished by integrating all mechanistic algorithms previously published to compute the PCs. Furthermore, the algorithm was structured in such a way as to facilitate predictions of the distribution of organic compounds at the macro (i.e. whole tissue) and micro (i.e. cells and fluids) levels. The resulting unified algorithm was applied to compute the rat P_tb, K_p or K_pu of muscle (n = 174), liver (n = 139) and adipose tissue (n = 141) for acidic, neutral, zwitterionic and basic drugs as well as ketones, acetate esters, alcohols, aliphatic hydrocarbons, aromatic hydrocarbons and ethers. The unified algorithm reproduced adequately the values predicted previously by the published algorithms for a total of 142 drugs and chemicals. The sensitivity analysis demonstrated the relative importance of the various compound properties reflective of specific mechanistic determinants relevant to prediction of PC values of drugs and environmental chemicals. Overall, the present unified algorithm uniquely facilitates the computation of macro and micro level PCs for developing organ and cellular-level PBPK models for both chemicals and drugs.

  13. Unifying Faculty, Staff, Students, and Community by Establishing and Implementing a Unique Vision for a New Elementary School.

    ERIC Educational Resources Information Center

    Currie, John R.

    The principal of a newly opened elementary school implemented a practicum study designed to unify faculty, parents, staff, and children; add direction to the program; develop a sense of purpose; and increase participation. It was expected that a vision statement would be developed in the school's first year of operation, and that parents and staff…

  14. Perl Extension to the Bproc Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grunau, Daryl W.

    2004-06-07

    The Beowulf Distributed Process Space (Bproc) software stack comprises UNIX/Linux kernel modifications and a support library by which a cluster of machines, each running their own private kernel, can present itself as a unified process space to the user. A Bproc cluster contains a single front-end machine and many back-end nodes which receive and run processes given to them by the front-end. Any process which is migrated to a back-end node is also visible as a ghost process on the front-end, and may be controlled there using traditional UNIX semantics (e.g. ps(1), kill(1), etc). This software is a Perl extension to the Bproc library which enables the Perl programmer to make direct calls to functions within the Bproc library. See http://www.clustermatic.org, http://bproc.sourceforge.net, and http://www.perl.org

  15. BIRCH: a user-oriented, locally-customizable, bioinformatics system.

    PubMed

    Fristensky, Brian

    2007-02-09

    Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere.

  16. BIRCH: A user-oriented, locally-customizable, bioinformatics system

    PubMed Central

    Fristensky, Brian

    2007-01-01

    Background Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. Results BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. Conclusion BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere. PMID:17291351

  17. Micromechanics-Based Structural Analysis (FEAMAC) and Multiscale Visualization within Abaqus/CAE Environment

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Bednarcyk, Brett A.; Hussain, Aquila; Katiyar, Vivek

    2010-01-01

    A unified framework is presented that enables coupled multiscale analysis of composite structures and associated graphical pre- and postprocessing within the Abaqus/CAE environment. The recently developed, free, Finite Element Analysis--Micromechanics Analysis Code (FEAMAC) software couples NASA's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) with Abaqus/Standard and Abaqus/Explicit to perform micromechanics-based FEA such that the nonlinear composite material response at each integration point is modeled at each increment by MAC/GMC. The Graphical User Interfaces (FEAMAC-Pre and FEAMAC-Post), developed through collaboration between SIMULIA Erie and the NASA Glenn Research Center, enable users to employ a new FEAMAC module within Abaqus/CAE that provides access to the composite microscale. FEAMAC-Pre is used to define and store constituent material properties, set up and store composite repeating unit cells, and assign composite materials as sections, with all data being stored within the CAE database. Likewise, FEAMAC-Post enables multiscale field quantity visualization (contour plots, X-Y plots), with point-and-click access to the microscale (i.e., fiber and matrix fields).

  18. Clinical performance of dental fiberscope image guided system for endodontic treatment.

    PubMed

    Yamazaki, Yasushi; Ogawa, Takumi; Shigeta, Yuko; Ikawa, Tomoko; Kasama, Shintaro; Hattori, Asaki; Suzuki, Naoki; Yamamoto, Takatsugu; Ozawa, Toshiko; Arai, Takashi

    2011-01-01

    We developed a dental fiberscope that can be navigated. As a result, we are better able to grasp the device position relative to the teeth and aim at the lesion more precisely. However, the device position and the precise target setting were difficult to ascertain consistently. The aim of this study is to navigate the position of the tip of the dental fiberscope fiber in the root canal with our navigation system. A 3D tooth model was made from the raw dental CT data. In addition, an optical position measurement device, the OPTOTRAK system, was used to register the 3D model to the actual tooth positions and to track the scope movement. We developed dedicated software to unify this information. We were subsequently able to indicate precisely the relative position of the device and the teeth on the 3D model in the monitor. This allowed us to aim at the lesion more precisely, as the revised endoscopic image matched the 3D model. The application of this endoscopic navigation system could increase the success rate of root canal treatments with recalcitrant lesions.
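
    The registration step described above, aligning a CT-derived 3D model with measured positions, is commonly solved as a rigid point-set alignment. The Python sketch below shows one standard way to do this (the Kabsch/Procrustes solution via SVD); it is a generic illustration, not the method actually used in the reported system.

        import numpy as np

        def rigid_register(P, Q):
            """Return rotation R and translation t minimizing ||R @ p_i + t - q_i||
            for corresponding 3D point sets P and Q (shape (n, 3) each)."""
            Pc = P - P.mean(axis=0)
            Qc = Q - Q.mean(axis=0)
            U, _, Vt = np.linalg.svd(Pc.T @ Qc)
            d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = Q.mean(axis=0) - R @ P.mean(axis=0)
            return R, t

        # Toy check: recover a known rotation/translation from noiseless fiducials.
        rng = np.random.default_rng(2)
        P = rng.normal(size=(6, 3))
        theta = np.pi / 7
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                           [np.sin(theta),  np.cos(theta), 0.0],
                           [0.0, 0.0, 1.0]])
        Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
        R, t = rigid_register(P, Q)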

  19. Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong

    2011-01-01

    A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor faults or component faults but not both, this method considers sensor, actuator, and component faults within one systematic, unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the approach deals directly with nonlinear engine models and nonlinear faults without the need for linearization. Software modules have been developed and evaluated with the NASA C-MAPSS engine model. Several typical engine-fault modes, including a subset of sensor, actuator, and component faults, were tested under a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred.
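
    As a rough illustration of the residual-plus-adaptive-threshold idea described above, the sketch below compares normalized sensor residuals against thresholds that grow with actuator activity. The signal names, numbers, and threshold law are invented for illustration; they are not the C-MAPSS quantities or the paper's TBM-based evaluation scheme.

    ```python
    import numpy as np

    def adaptive_threshold(u, base=0.05, gain=0.02):
        """Illustrative adaptive threshold that widens with actuator activity |u|."""
        return base + gain * np.abs(u)

    def evaluate_residuals(y_meas, y_est, u, names):
        """Flag every residual that exceeds its adaptive threshold."""
        residuals = np.abs(y_meas - y_est)
        thresholds = adaptive_threshold(u)
        return {n: bool(r > t) for n, r, t in zip(names, residuals, thresholds)}

    # Hypothetical snapshot of normalized measured vs. estimated sensor values.
    y_meas = np.array([1.02, 0.98, 1.35])
    y_est  = np.array([1.00, 1.00, 1.00])   # estimates from a diagnostic estimator
    u      = np.array([0.8, 0.8, 0.8])      # normalized actuator command

    print(evaluate_residuals(y_meas, y_est, u, ["sensor1", "sensor2", "sensor3"]))
    # only sensor3 exceeds its threshold and is flagged as faulty
    ```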

  20. Access to Biomedical Information: The Unified Medical Language System.

    ERIC Educational Resources Information Center

    Squires, Steven J.

    1993-01-01

    Describes the development of a Unified Medical Language System (UMLS) by the National Library of Medicine that will retrieve and integrate information from a variety of information resources. Highlights include the metathesaurus; the UMLS semantic network; semantic locality; information sources map; evaluation of the metathesaurus; future…

  1. Explanation and Prediction: Building a Unified Theory of Librarianship, Concept and Review.

    ERIC Educational Resources Information Center

    McGrath, William E.

    2002-01-01

    Develops a comprehensive, unified, explanatory theory of librarianship by first making an analogy to the unification of the fundamental forces of nature. Topics include dependent and independent variables; publishing; acquisitions; classification and organization of knowledge; storage, preservation, and collection management; collections; and…

  2. Evaluation of English Language Development Programs in the Santa Ana Unified School District. A Report on Data System Reliability and Statistical Modeling of Program Impacts.

    ERIC Educational Resources Information Center

    Mitchell, Douglas E.; Destino, Tom; Karam, Rita

    In response to concern about the effectiveness of programs for English-as-a-Second-Language students in California's schools, the Santa Ana Unified School District, in which over 80 percent of students are limited-English-proficient (LEP), conducted a study of both the operations and effectiveness of the district's language development program,…

  3. Implications of a Non-Unified Command System and the Need for a Unified Command System in Zambia

    DTIC Science & Technology

    2015-06-12

    vein, it is indicated that the concept of Chief of General Staff would have been advantageous in the development of the Defence Forces had it been well...process. For example, British failures during the Crimean War caused the British to look towards the German General Staff system in effect during the...economic and social development . Nevertheless, any meaningful economic and social development needs to be well protected and anchored upon an effective

  4. Beyond PARR - PMEL's Integrated Data Management Strategy

    NASA Astrophysics Data System (ADS)

    Burger, E. F.; O'Brien, K.; Manke, A. B.; Schweitzer, R.; Smith, K. M.

    2016-12-01

    NOAA's Pacific Marine Environmental Laboratory (PMEL) hosts a wide range of scientific projects that span a number of scientific and environmental research disciplines. Each of these 14 research projects has its own data streams, which are as diverse as the research itself. With its requirements for public access to federally funded research results and data, the 2013 White House Office of Science and Technology Policy memo on Public Access to Research Results (PARR) changed the data management landscape for Federal agencies. In 2015, with support from the PMEL Director, Dr. Christopher Sabine, PMEL's Science Data Integration Group (SDIG) initiated a multi-year effort to formulate and implement an integrated data-management strategy for PMEL research efforts. Instead of using external requirements, such as PARR, to define our approach, we focused on strategies to provide PMEL science projects with a unified framework for data submission, interoperable data access, data storage, and easier data archival to National Data Centers. This improves data access for PMEL scientists, their collaborators, and the public, and also provides a unified lab framework that allows our projects to meet their data management objectives as well as those required by PARR. We are implementing this solution in stages that allow us to test technology and architecture choices before committing to a large-scale implementation. SDIG developers have completed the first year of development, in which our approach has been to reuse and leverage existing frameworks and standards. This presentation will describe our data management strategy, explain our phased implementation approach and the software and framework choices, and show how these elements help us meet the objectives of this strategy. We will share the lessons learned in dealing with diverse and complex datasets in this first year of implementation and how these outcomes will shape our decisions for this ongoing effort. The data management capabilities now available to scientific projects, and other services being developed to manage and preserve PMEL's scientific data assets for our researchers, their collaborators, and future generations, will also be described.

  5. Integrating malaria surveillance with climate data for outbreak detection and forecasting: the EPIDEMIA system.

    PubMed

    Merkord, Christopher L; Liu, Yi; Mihretie, Abere; Gebrehiwot, Teklehaymanot; Awoke, Worku; Bayabil, Estifanos; Henebry, Geoffrey M; Kassa, Gebeyaw T; Lake, Mastewal; Wimberly, Michael C

    2017-02-23

    Early indication of an emerging malaria epidemic can provide an opportunity for proactive interventions. Challenges to the identification of nascent malaria epidemics include obtaining recent epidemiological surveillance data, spatially and temporally harmonizing this information with timely data on environmental precursors, applying models for early detection and early warning, and communicating results to public health officials. Automated web-based informatics systems can provide a solution to these problems, but their implementation in real-world settings has been limited. The Epidemic Prognosis Incorporating Disease and Environmental Monitoring for Integrated Assessment (EPIDEMIA) computer system was designed and implemented to integrate disease surveillance with environmental monitoring in support of operational malaria forecasting in the Amhara region of Ethiopia. A co-design workshop was held with computer scientists, epidemiological modelers, and public health partners to develop an initial list of system requirements. Subsequent updates to the system were based on feedback obtained from system evaluation workshops and assessments conducted by a steering committee of users in the public health sector. The system integrated epidemiological data uploaded weekly by the Amhara Regional Health Bureau with remotely-sensed environmental data freely available from online archives. Environmental data were acquired and processed automatically by the EASTWeb software program. Additional software was developed to implement a public health interface for data upload and download, harmonize the epidemiological and environmental data into a unified database, automatically update time series forecasting models, and generate formatted reports. Reporting features included district-level control charts and maps summarizing epidemiological indicators of emerging malaria outbreaks, environmental risk factors, and forecasts of future malaria risk. Successful implementation and use of EPIDEMIA is an important step forward in the use of epidemiological and environmental informatics systems for malaria surveillance. Developing software to automate the workflow steps while remaining robust to continual changes in the input data streams was a key technical challenge. Continual stakeholder involvement throughout design, implementation, and operation has created a strong enabling environment that will facilitate the ongoing development, application, and testing of the system.
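
    The harmonize-then-forecast step can be caricatured in a few lines of pandas; the column names, the four-week lag, and the simple lagged regression are illustrative assumptions, not the EPIDEMIA models or the EASTWeb data fields.

    ```python
    import numpy as np
    import pandas as pd

    # Hypothetical weekly surveillance and environmental data for one district.
    weeks = pd.date_range("2016-01-03", periods=8, freq="W")
    epi = pd.DataFrame({"week": weeks, "cases": [12, 15, 14, 20, 28, 35, 33, 41]})
    env = pd.DataFrame({"week": weeks, "rainfall_mm": [5, 30, 80, 120, 95, 60, 40, 20]})

    # Harmonize into one table and lag the environmental driver by four weeks.
    df = epi.merge(env, on="week")
    df["rain_lag4"] = df["rainfall_mm"].shift(4)
    df = df.dropna()

    # Simple lagged linear model: cases ~ a + b * rainfall four weeks earlier.
    b, a = np.polyfit(df["rain_lag4"], df["cases"], deg=1)
    forecast = a + b * env["rainfall_mm"].iloc[-1]   # rough outlook ~4 weeks ahead
    print(f"forecast cases: {forecast:.1f}")
    ```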

  6. Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package

    NASA Astrophysics Data System (ADS)

    Donges, Jonathan; Heitzig, Jobst; Beronov, Boyan; Wiedermann, Marc; Runge, Jakob; Feng, Qing Yi; Tupikina, Liubov; Stolbova, Veronika; Donner, Reik; Marwan, Norbert; Dijkstra, Henk; Kurths, Jürgen

    2016-04-01

    We introduce the pyunicorn (Pythonic unified complex network and recurrence analysis toolbox) open source software package for applying and combining modern methods of data analysis and modeling from complex network theory and nonlinear time series analysis. pyunicorn is a fully object-oriented and easily parallelizable package written in the language Python. It allows for the construction of functional networks such as climate networks in climatology or functional brain networks in neuroscience representing the structure of statistical interrelationships in large data sets of time series and, subsequently, investigating this structure using advanced methods of complex network theory such as measures and models for spatial networks, networks of interacting networks, node-weighted statistics, or network surrogates. Additionally, pyunicorn provides insights into the nonlinear dynamics of complex systems as recorded in uni- and multivariate time series from a non-traditional perspective by means of recurrence quantification analysis, recurrence networks, visibility graphs, and construction of surrogate time series. The range of possible applications of the library is outlined, drawing on several examples mainly from the field of climatology. pyunicorn is available online at https://github.com/pik-copan/pyunicorn. Reference: J.F. Donges, J. Heitzig, B. Beronov, M. Wiedermann, J. Runge, Q.-Y. Feng, L. Tupikina, V. Stolbova, R.V. Donner, N. Marwan, H.A. Dijkstra, and J. Kurths, Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package, Chaos 25, 113101 (2015), DOI: 10.1063/1.4934554, Preprint: arxiv.org:1507.01571 [physics.data-an].
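
    A minimal usage sketch of the recurrence-analysis side of the package might look as follows; the threshold and metric are arbitrary choices, and the constructor arguments should be checked against the pyunicorn documentation.

    ```python
    import numpy as np
    from pyunicorn.timeseries import RecurrencePlot

    # Toy univariate time series: a noisy sine wave.
    t = np.linspace(0, 10 * np.pi, 1000)
    x = np.sin(t) + 0.1 * np.random.randn(t.size)

    # Build a recurrence plot with a fixed recurrence threshold (illustrative value).
    rp = RecurrencePlot(x, threshold=0.2, metric="euclidean")

    # Standard recurrence quantification measures.
    print("recurrence rate:", rp.recurrence_rate())
    print("determinism    :", rp.determinism(l_min=2))
    ```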

  7. Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package

    NASA Astrophysics Data System (ADS)

    Donges, Jonathan F.; Heitzig, Jobst; Beronov, Boyan; Wiedermann, Marc; Runge, Jakob; Feng, Qing Yi; Tupikina, Liubov; Stolbova, Veronika; Donner, Reik V.; Marwan, Norbert; Dijkstra, Henk A.; Kurths, Jürgen

    2015-11-01

    We introduce the pyunicorn (Pythonic unified complex network and recurrence analysis toolbox) open source software package for applying and combining modern methods of data analysis and modeling from complex network theory and nonlinear time series analysis. pyunicorn is a fully object-oriented and easily parallelizable package written in the language Python. It allows for the construction of functional networks such as climate networks in climatology or functional brain networks in neuroscience representing the structure of statistical interrelationships in large data sets of time series and, subsequently, investigating this structure using advanced methods of complex network theory such as measures and models for spatial networks, networks of interacting networks, node-weighted statistics, or network surrogates. Additionally, pyunicorn provides insights into the nonlinear dynamics of complex systems as recorded in uni- and multivariate time series from a non-traditional perspective by means of recurrence quantification analysis, recurrence networks, visibility graphs, and construction of surrogate time series. The range of possible applications of the library is outlined, drawing on several examples mainly from the field of climatology.

  8. Abstracted Workflow Framework with a Structure from Motion Application

    NASA Astrophysics Data System (ADS)

    Rossi, Adam J.

    In scientific and engineering disciplines, from academia to industry, there is an increasing need for the development of custom software to perform experiments, construct systems, and develop products. The natural mindset initially is to shortcut and bypass all overhead and process rigor in order to obtain an immediate result for the problem at hand, with the misconception that the software will simply be thrown away at the end. In a majority of the cases, it turns out the software persists for many years, and likely ends up in production systems for which it was not initially intended. In the current study, a framework that can be used in both industry and academic applications mitigates underlying problems associated with developing scientific and engineering software. This results in software that is much more maintainable, documented, and usable by others, specifically allowing new users to extend capabilities of components already implemented in the framework. There is a multi-disciplinary need in the fields of imaging science, computer science, and software engineering for a unified implementation model, which motivates the development of an abstracted software framework. Structure from motion (SfM) has been identified as one use case where the abstracted workflow framework can improve research efficiencies and eliminate implementation redundancies in scientific fields. The SfM process begins by obtaining 2D images of a scene from different perspectives. Features from the images are extracted and correspondences are established. This provides a sufficient amount of information to initialize the problem for fully automated processing. Transformations are established between views, and 3D points are established via triangulation algorithms. The parameters for the camera models for all views / images are solved through bundle adjustment, establishing a highly consistent point cloud. The initial sparse point cloud and camera matrices are used to generate a dense point cloud through patch based techniques or densification algorithms such as Semi-Global Matching (SGM). The point cloud can be visualized or exploited by both humans and automated techniques. In some cases the point cloud is "draped" with original imagery in order to enhance the 3D model for a human viewer. The SfM workflow can be implemented in the abstracted framework, making it easily leverageable and extensible by multiple users. Like many processes in scientific and engineering domains, the workflow described for SfM is complex and requires many disparate components to form a functional system, often utilizing algorithms implemented by many users in different languages / environments and without knowledge of how the component fits into the larger system. In practice, this generally leads to issues interfacing the components, building the software for desired platforms, understanding its concept of operations, and how it can be manipulated in order to fit the desired function for a particular application. In addition, other scientists and engineers instinctively wish to analyze the performance of the system, establish new algorithms, optimize existing processes, and establish new functionality based on current research. This requires a framework whereby new components can be easily plugged in without affecting the current implemented functionality. The need for a universal programming environment establishes the motivation for the development of the abstracted workflow framework. 
This software implementation, named Catena, provides base classes from which new components must derive in order to operate within the framework. The derivation mandates requirements be satisfied in order to provide a complete implementation. Additionally, the developer must provide documentation of the component in terms of its overall function and inputs. The interface input and output values corresponding to the component must be defined in terms of their respective data types, and the implementation uses mechanisms within the framework to retrieve and send the values. This process requires the developer to componentize their algorithm rather than implement it monolithically. Although the requirements of the developer are slightly greater, the benefits realized from using Catena far outweigh the overhead, and results in extensible software. This thesis provides a basis for the abstracted workflow framework concept and the Catena software implementation. The benefits are also illustrated using a detailed examination of the SfM process as an example application.
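
    The componentization requirement described above can be sketched as a small base class plus one derived component; the class and method names are hypothetical stand-ins, not Catena's actual API.

    ```python
    from abc import ABC, abstractmethod

    class Component(ABC):
        """Hypothetical workflow component: declares its inputs/outputs and documents itself."""

        inputs: tuple = ()    # names of required inputs (declared by subclasses)
        outputs: tuple = ()   # names of produced outputs

        @abstractmethod
        def execute(self, **kwargs) -> dict:
            """Run the component; receives declared inputs, returns declared outputs."""

    class FeatureExtractor(Component):
        inputs = ("images",)
        outputs = ("features",)

        def execute(self, images):
            # Placeholder: a real component would run e.g. SIFT on each image.
            return {"features": [f"features({name})" for name in images]}

    # A trivial two-image "workflow" stage whose outputs would feed the matcher.
    stage = FeatureExtractor()
    print(stage.execute(images=["view1.png", "view2.png"]))
    ```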

  9. Unified Plant Growth Model (UPGM). 1. Background, objectives, and vision.

    USDA-ARS?s Scientific Manuscript database

    Since the development of the Environmental Policy Integrated Climate (EPIC) model in 1988, the EPIC-based plant growth code has been incorporated and modified into many agro-ecosystem models. The goals of the Unified Plant Growth Model (UPGM) project are: 1) integrating into one platform the enhance...

  10. Toward a Unified Communication Theory.

    ERIC Educational Resources Information Center

    McMillan, Saundra

    After discussing the nature of theory itself, the author explains her concept of the Unified Communication Theory, which rests on the assumption that there exists in all living structures a potential communication factor which is delimited by species and ontogeny. An organism develops "symbol fixation" at the level where its perceptual abilities…

  11. 77 FR 8014 - Unified Agenda of Federal Regulatory and Deregulatory Actions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-13

    ... agenda was developed under the guidelines of Executive Order 12866 "Regulatory Planning and Review... regulations for review to determine whether they should be modified or eliminated. Proposed rules may be... Unified Agenda will be available online at www.reginfo.gov, in a format that offers users a greatly...

  12. Enhancement of the Acquisition Process for a Combat System-A Case Study to Model the Workflow Processes for an Air Defense System Acquisition

    DTIC Science & Technology

    2009-12-01

    Business Process Modeling BPMN Business Process Modeling Notation SoA Service-oriented Architecture UML Unified Modeling Language CSP...system developers. Supporting technologies include Business Process Modeling Notation (BPMN), Unified Modeling Language (UML), model-driven architecture

  13. Sacramento City Unified School District Chapter 1/State Compensatory Education Handbook Series.

    ERIC Educational Resources Information Center

    Sacramento City Unified School District, CA.

    Four handbooks developed by the Consolidated Programs Department of the Sacramento City Unified School District (California) provide a means by which the multitude of federal, state, and district rules and regulations pertaining to compensatory education can be understood. The "Consolidated Programs Office Management Procedures" handbook…

  14. A unified approach for composite cost reporting and prediction in the ACT program

    NASA Technical Reports Server (NTRS)

    Freeman, W. Tom; Vosteen, Louis F.; Siddiqi, Shahid

    1991-01-01

    The Structures Technology Program Office (STPO) at NASA Langley Research Center has held two workshops with representatives from the commercial airframe companies to establish a plan for development of a standard cost reporting format and a cost prediction tool for conceptual and preliminary designers. This paper reviews the findings of the workshop representatives along with a plan for implementation of their recommendations. The recommendations of the cost tracking and reporting committee will be implemented by reinstituting the collection of composite part fabrication data in a format similar to the DoD/NASA Structural Composites Fabrication Guide. The process of data collection will be automated by taking advantage of current technology with user-friendly computer interfaces and electronic data transmission. Development of a conceptual and preliminary designers' cost prediction model will be initiated. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer-aided design (CAD) programs is assessed.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dr. George L Mesina

    Our ultimate goal is to create and maintain RELAP5-3D as the best software tool available to analyze nuclear power plants. This begins with writing excellent programming and requires thorough testing. This document covers development of RELAP5-3D software, the behavior of the RELAP5-3D program that must be maintained, and code testing. RELAP5-3D must perform in a manner consistent with previous code versions, with backward compatibility for the sake of the users. Thus file operations, code termination, input, and output must remain consistent in form and content while adding appropriate new files, input, and output as new features are developed. As computer hardware, operating systems, and other software change, RELAP5-3D must adapt and maintain performance. The code must be thoroughly tested to ensure that it continues to perform robustly on the supported platforms. The coding must be written in a consistent manner that makes the program easy to read, to reduce the time and cost of development, maintenance, and error resolution. The programming guidelines presented here are intended to institutionalize a consistent way of writing FORTRAN code for the RELAP5-3D computer program that will minimize errors and rework. A common format and organization of program units creates a unifying look and feel to the code. This in turn increases readability and reduces the time required for maintenance, development, and debugging. It also aids new programmers in reading and understanding the program. Therefore, when undertaking development of the RELAP5-3D computer program, the programmer must write computer code that follows these guidelines. This set of programming guidelines creates a framework of good programming practices, such as initialization, structured programming, and vector-friendly coding. It sets out formatting rules for lines of code, such as indentation, capitalization, and spacing. It creates limits on program units, such as subprograms, functions, and modules. It establishes documentation guidance on internal comments. The guidelines apply to both existing and new subprograms. They are written for both FORTRAN 77 and FORTRAN 95. The guidelines are not so rigorous as to inhibit a programmer’s unique style, but do restrict the variations in acceptable coding to create sufficient commonality that new readers will find the coding in each new subroutine familiar. It is recognized that this is a “living” document and must be updated as languages, compilers, and computer hardware and software evolve.

  16. CUDA-based real time surgery simulation.

    PubMed

    Liu, Youquan; De, Suvranu

    2008-01-01

    In this paper we present a general software platform that enables real-time surgery simulation on the newly available compute unified device architecture (CUDA) from NVIDIA. CUDA-enabled GPUs harness the power of 128 processors that allow data-parallel computations. Compared to previous GPGPU approaches, CUDA is significantly more flexible, with a C language interface. We report the implementation of both collision detection and the consequent deformation computation algorithms. Our test results indicate that CUDA enables roughly a twenty-fold speedup for collision detection and about a fifteen-fold speedup for deformation computation on an Intel Core 2 Quad 2.66 GHz machine with a GeForce 8800 GTX.

  17. The Protein Data Bank: unifying the archive

    PubMed Central

    Westbrook, John; Feng, Zukang; Jain, Shri; Bhat, T. N.; Thanki, Narmada; Ravichandran, Veerasamy; Gilliland, Gary L.; Bluhm, Wolfgang F.; Weissig, Helge; Greer, Douglas S.; Bourne, Philip E.; Berman, Helen M.

    2002-01-01

    The Protein Data Bank (PDB; http://www.pdb.org/) is the single worldwide archive of structural data of biological macromolecules. This paper describes the progress that has been made in validating all data in the PDB archive and in releasing a uniform archive for the community. We have now produced a collection of mmCIF data files for the PDB archive (ftp://beta.rcsb.org/pub/pdb/uniformity/data/mmCIF/). A utility application that converts the mmCIF data files to the PDB format (called CIFTr) has also been released to provide support for existing software. PMID:11752306

  18. Action Recommendation for Cyber Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhury, Sutanay; Rodriguez, Luke R.; Curtis, Darren S.

    2015-09-01

    This paper presents a unifying graph-based model for representing the infrastructure, behavior, and missions of an enterprise. We describe how the model can be used to achieve resiliency against a wide class of failures and attacks. We introduce an algorithm for recommending resilience-establishing actions based on dynamic updates to the models. Without loss of generality, we show the effectiveness of the algorithm for preserving latency-based quality of service (QoS). Our models and the recommendation algorithms are implemented in a software framework that we seek to release as an open source framework for simulating resilient cyber systems.
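
    A toy version of the latency-preserving recommendation idea, using networkx: the enterprise graph, link latencies, and the greedy reroute rule are invented for illustration and are far simpler than the paper's model.

    ```python
    import networkx as nx

    # Hypothetical enterprise graph: nodes are services, edge weights are link latencies (ms).
    G = nx.Graph()
    G.add_weighted_edges_from([("client", "lb", 2), ("lb", "app1", 5),
                               ("lb", "app2", 9), ("app1", "db", 4), ("app2", "db", 3)])

    def recommend_reroute(G, src, dst, failed_node):
        """After a node failure, recommend the lowest-latency path that avoids it."""
        H = G.copy()
        H.remove_node(failed_node)
        path = nx.shortest_path(H, src, dst, weight="weight")
        latency = nx.shortest_path_length(H, src, dst, weight="weight")
        return path, latency

    print(recommend_reroute(G, "client", "db", failed_node="app1"))
    # (['client', 'lb', 'app2', 'db'], 14) -- QoS preserved via the surviving route
    ```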

  19. National Aeronautics and Space Administration Manned Spacecraft Center data base requirements study

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A study was conducted to evaluate the types of data that the Manned Spacecraft Center (MSC) should automate in order to make available essential management and technical information to support MSC's various functions and missions. In addition, the software and hardware capabilities to best handle the storage and retrieval of this data were analyzed. Based on the results of this study, recommendations are presented for a unified data base that provides a cost effective solution to MSC's data automation requirements. The recommendations are projected through a time frame that includes the earth orbit space station.

  20. Vehicle System Management Modeling in UML for Ares I

    NASA Technical Reports Server (NTRS)

    Pearson, Newton W.; Biehn, Bradley A.; Curry, Tristan D.; Martinez, Mario R.

    2011-01-01

    The Spacecraft & Vehicle Systems Department of Marshall Space Flight Center is responsible for modeling the Vehicle System Management for the Ares I vehicle, which was part of the now-canceled Constellation Program. One approach to generating the requirements for the Vehicle System Management was to use the Unified Modeling Language (UML) technique to build and test a model that would fulfill the Vehicle System Management requirements. UML has been used on past projects (flight software) in the design phase of the effort, but this was the first attempt to use the UML technique from a top-down requirements perspective.

  1. Unified-theory-of-reinforcement neural networks do not simulate the blocking effect.

    PubMed

    Calvin, Nicholas T; McDowell, J J

    2015-11-01

    For the last 20 years the unified theory of reinforcement (Donahoe et al., 1993) has been used to develop computer simulations to evaluate its plausibility as an account for behavior. The unified theory of reinforcement states that operant and respondent learning occurs via the same neural mechanisms. As part of a larger project to evaluate the operant behavior predicted by the theory, this project was the first replication of neural network models based on the unified theory of reinforcement. In the process of replicating these neural network models it became apparent that a previously published finding, namely, that the networks simulate the blocking phenomenon (Donahoe et al., 1993), was a misinterpretation of the data. We show that the apparent blocking produced by these networks is an artifact of the inability of these networks to generate the same conditioned response to multiple stimuli. The piecemeal approach to evaluate the unified theory of reinforcement via simulation is critiqued and alternatives are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Toward a Unified Military Response: Hurricane Sandy and the Dual Status Commander

    DTIC Science & Technology

    2015-04-01

    and support by developing self-awareness through leader feedback and leader resiliency. The School of Strategic Landpower develops strategic...forces; • Regional strategic appraisals; • The nature of land warfare; • Matters affecting the Army’s future; • The concepts, philosophy, and theory of...Army War College Press TOWARD A UNIFIED MILITARY RESPONSE: HURRICANE SANDY AND THE DUAL STATUS COMMANDER Ryan Burke Sue McNeil April 2015 The views

  3. Probabilistic Tracking and Trajectory Planning for Autonomous Ground Vehicles in Urban Environments

    DTIC Science & Technology

    2016-03-05

    The aim of this research is to develop a unified theory for perception and planning in autonomous ground vehicles, with a specific focus on...a combination of experimentally collected vision data and Monte-Carlo simulations. Smoothing for improved perception and robustness in planning

  4. U.S. Tsunami Information Technology (TIM) Modernization: Developing a Maintainable and Extensible Open Source Earthquake and Tsunami Warning System

    NASA Astrophysics Data System (ADS)

    Hellman, S. B.; Lisowski, S.; Baker, B.; Hagerty, M.; Lomax, A.; Leifer, J. M.; Thies, D. A.; Schnackenberg, A.; Barrows, J.

    2015-12-01

    Tsunami Information Technology Modernization (TIM) is a National Oceanic and Atmospheric Administration (NOAA) project to update and standardize the earthquake and tsunami monitoring systems currently employed at the U.S. Tsunami Warning Centers in Ewa Beach, Hawaii (PTWC) and Palmer, Alaska (NTWC). While this project was funded by NOAA to solve a specific problem, the requirements that the delivered system be both open source and easily maintainable have resulted in the creation of a variety of open source (OS) software packages. The open source software is now complete, and this is a presentation of the OS software that has been funded by NOAA for the benefit of the entire seismic community. The design architecture comprises three distinct components: (1) the user interface, (2) the real-time data acquisition and processing system, and (3) the scientific algorithm library. The system follows a modular design with loose coupling between components. We now identify the major project constituents. The user interface, CAVE, is written in Java and is compatible with the existing National Weather Service (NWS) open source graphical system AWIPS. The selected real-time seismic acquisition and processing system is the open source SeisComp3 (sc3). The seismic library (libseismic) contains numerous custom-written and wrapped open source seismic algorithms (e.g., ML/mb/Ms/Mwp, mantle magnitude (Mm), w-phase moment tensor, body-wave moment tensor, finite-fault inversion, array processing). The seismic library is organized in a way (function naming and usage) that will be familiar to users of Matlab. The seismic library extends sc3 so that it can be called by the real-time system, but it can also be driven and tested outside of sc3, for example by ObsPy or Earthworm. To unify the three principal components we have developed a flexible and lightweight communication layer called SeismoEdex.

  5. Reinventing User Applications for Mission Control

    NASA Technical Reports Server (NTRS)

    Trimble, Jay Phillip; Crocker, Alan R.

    2010-01-01

    In 2006, NASA Ames Research Center's (ARC) Intelligent Systems Division and NASA Johnson Space Center's (JSC) Mission Operations Directorate (MOD) began a collaboration to move user applications for JSC's mission control center to a new software architecture, intended to replace the existing user applications used for the Space Shuttle and the International Space Station. It must also carry NASA/JSC mission operations forward to the future, meeting the needs of NASA's exploration programs beyond low Earth orbit. Key requirements for the new architecture, called Mission Control Technologies (MCT), are that end users must be able to compose and build their own software displays without the need for programming or for direct support and approval from a platform services organization. Developers must be able to build MCT components using industry-standard languages and tools. Each component of MCT must be interoperable with other components, regardless of which organization develops them. For platform service providers and MOD management, MCT must be cost effective, maintainable, and evolvable. MCT software is built from components that are presented to users as composable user objects. A user object is an entity that represents a domain object such as a telemetry point, a command, a timeline, an activity, or a step in a procedure. User objects may be composed and reused; for example, a telemetry point may be used in a traditional monitoring display, and that same telemetry user object may be composed into a procedure step. In either display, that same telemetry point may be shown in different views, such as a plot, an alphanumeric, or a meta-data view, and those views may be changed live and in place. MCT presents users with a single unified user environment that contains all the objects required to perform applicable flight controller tasks. Users therefore do not have to work across multiple applications; the traditional boundaries between multiple heterogeneous applications disappear, leaving open the possibility of new operations concepts that are not constrained by the traditional applications paradigm.

  6. AmapSim: a structural whole-plant simulator based on botanical knowledge and designed to host external functional models.

    PubMed

    Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry

    2008-05-01

    AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment.
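
    The stochastic bud-growth idea (bud activity plus a physiological age that advances along a reference axis) can be caricatured as follows; the states, probabilities, and growth rule are invented for illustration and are unrelated to AmapSim's calibrated parameters.

    ```python
    import random

    # Hypothetical physiological-age states along a "reference axis" and the
    # probability of advancing to the next state at each growth cycle.
    ADVANCE_PROB = {"young": 0.2, "mature": 0.4, "old": 1.0}
    NEXT_STATE = {"young": "mature", "mature": "old", "old": "old"}

    def grow_axis(cycles=10, activity=0.8, seed=42):
        """Simulate one axis: each cycle the bud may add an internode and may age."""
        random.seed(seed)
        state, internodes = "young", []
        for _ in range(cycles):
            if random.random() < activity:           # stochastic bud activity
                internodes.append(state)
            if random.random() < ADVANCE_PROB[state]:
                state = NEXT_STATE[state]            # metamorphosis along the reference axis
        return internodes

    print(grow_axis())
    ```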

  7. AmapSim: A Structural Whole-plant Simulator Based on Botanical Knowledge and Designed to Host External Functional Models

    PubMed Central

    Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry

    2008-01-01

    Background and Aims AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. Methods The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Key Results Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. Conclusions The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment. PMID:17766310

  8. The Latin American laws of correct nutrition: Review, unified interpretation, model and tools.

    PubMed

    Chávez-Bosquez, Oscar; Pozos-Parra, Pilar

    2016-03-01

    The "Laws of Correct Nutrition": the Law of Quantity, the Law of Quality, the Law of Harmony and the Law of Adequacy, provide the basis of a proper diet, i.e. one that provides the body with the energy required and nutrients it needs for daily activities and maintenance of vital functions. For several decades, these Laws have been the basis of nourishing menus designed in Latin America; however, they are stated in a colloquial language, which leads to differences in interpretation and ambiguities for non-experts and even experts in the field. We present a review of the different interpretations of the Laws and describe a consensus. We represent concepts related to nourishing menu design employing the Unified Modeling Language (UML). We formalize the Laws using the Object Constraint Language (OCL). We design a nourishing menu for a particular user through enforcement of the Laws. We designed a domain model with the essential elements to plan a nourishing menu and we expressed the necessary constraints to make the model׳s behavior conform to the four Laws. We made a formal verification and validation of the model via USE (UML-based Specification Environment) and we developed a software prototype for menu design under the Laws. Diet planning is considered as an art but consideration should be given to the need for a set of strict rules to design adequate menus. Thus, we model the "Laws of Nutrition" as a formal basis and standard framework for this task. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  9. Development and evaluation of SOA-based AAL services in real-life environments: a case study and lessons learned.

    PubMed

    Stav, Erlend; Walderhaug, Ståle; Mikalsen, Marius; Hanke, Sten; Benc, Ivan

    2013-11-01

    The proper use of ICT services can support seniors in living independently longer. While such services are starting to emerge, current proprietary solutions are often expensive, covering only isolated parts of seniors' needs, and lack support for sharing information between services and between users. For developers, the challenge is that it is complex and time consuming to develop high quality, interoperable services, and new techniques are needed to simplify the development and reduce the development costs. This paper provides the complete view of the experiences gained in the MPOWER project with respect to using model-driven development (MDD) techniques for Service Oriented Architecture (SOA) system development in the Ambient Assisted Living (AAL) domain. To address this challenge, the approach of the European research project MPOWER (2006-2009) was to investigate and record the user needs, define a set of reusable software services based on these needs, and then implement pilot systems using these services. Further, a model-driven toolchain covering key development phases was developed to support software developers through this process. Evaluations were conducted both on the technical artefacts (methodology and tools), and on end user experience from using the pilot systems in trial sites. The outcome of the work on the user needs is a knowledge base recorded as a Unified Modeling Language (UML) model. This comprehensive model describes actors, use cases, and features derived from these. The model further includes the design of a set of software services, including full trace information back to the features and use cases motivating their design. Based on the model, the services were implemented for use in Service Oriented Architecture (SOA) systems, and are publicly available as open source software. The services were successfully used in the realization of two pilot applications. There is therefore a direct and traceable link from the user needs of the elderly, through the service design knowledge base, to the service and pilot implementations. The evaluation of the SOA approach on the developers in the project revealed that SOA is useful with respect to job performance and quality. Furthermore, they think SOA is easy to use and support development of AAL applications. An important finding is that the developers clearly report that they intend to use SOA in the future, but not for all type of projects. With respect to using model-driven development in web services design and implementation, the developers reported that it was useful. However, it is important that the code generated from the models is correct if the full potential of MDD should be achieved. The pilots and their evaluation in the trial sites showed that the services of the platform are sufficient to create suitable systems for end users in the domain. A SOA platform with a set of reusable domain services is a suitable foundation for more rapid development and tailoring of assisted living systems covering reoccurring needs among elderly users. It is feasible to realize a tool-chain for model-driven development of SOA applications in the AAL domain, and such a tool-chain can be accepted and found useful by software developers. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  10. Development of a unified constitutive model for an isotropic nickel base superalloy Rene 80

    NASA Technical Reports Server (NTRS)

    Ramaswamy, V. G.; Vanstone, R. H.; Laflen, J. H.; Stouffer, D. C.

    1988-01-01

    Accurate analysis of stress-strain behavior is of critical importance in the evaluation of life capabilities of hot section turbine engine components such as turbine blades and vanes. The constitutive equations used in the finite element analysis of such components must be capable of modeling a variety of complex behavior exhibited at high temperatures by cast superalloys. The classical separation of plasticity and creep employed in most of the finite element codes in use today is known to be deficient in modeling elevated temperature time dependent phenomena. Rate dependent, unified constitutive theories can overcome many of these difficulties. A new unified constitutive theory was developed to model the high temperature, time dependent behavior of Rene' 80 which is a cast turbine blade and vane nickel base superalloy. Considerations in model development included the cyclic softening behavior of Rene' 80, rate independence at lower temperatures and the development of a new model for static recovery.
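
    For orientation, a generic uniaxial, isothermal unified viscoplastic model of the kind referred to above couples a rate-dependent flow rule (with no explicit yield surface) to evolving back-stress and drag-stress state variables; the form below is a textbook-style illustration with generic hardening and recovery terms, not the specific Rene' 80 model developed in the report.

    ```latex
    \begin{aligned}
    \dot{\varepsilon} &= \dot{\varepsilon}^{e} + \dot{\varepsilon}^{in}, \qquad
      \dot{\varepsilon}^{e} = \dot{\sigma}/E \\
    \dot{\varepsilon}^{in} &= A\left(\frac{|\sigma-\Omega|}{K}\right)^{n}\operatorname{sgn}(\sigma-\Omega)
      && \text{(flow rule)} \\
    \dot{\Omega} &= h\,\dot{\varepsilon}^{in} - r_{d}\,\Omega\,\bigl|\dot{\varepsilon}^{in}\bigr| - r_{s}\,|\Omega|^{m-1}\Omega
      && \text{(back stress: hardening, dynamic and static recovery)} \\
    \dot{K} &= c\,\bigl(K_{\mathrm{sat}} - K\bigr)\,\bigl|\dot{\varepsilon}^{in}\bigr|
      && \text{(drag stress: cyclic hardening or softening toward } K_{\mathrm{sat}}\text{)}
    \end{aligned}
    ```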

  11. Root System Markup Language: Toward a Unified Root Architecture Description Language

    PubMed Central

    Pound, Michael P.; Pradal, Christophe; Draye, Xavier; Godin, Christophe; Leitner, Daniel; Meunier, Félicien; Pridmore, Tony P.; Schnepf, Andrea

    2015-01-01

    The number of image analysis tools supporting the extraction of architectural features of root systems has increased in recent years. These tools offer a handy set of complementary facilities, yet it is widely accepted that none of these software tools is able to extract in an efficient way the growing array of static and dynamic features for different types of images and species. We describe the Root System Markup Language (RSML), which has been designed to overcome two major challenges: (1) to enable portability of root architecture data between different software tools in an easy and interoperable manner, allowing seamless collaborative work; and (2) to provide a standard format upon which to base central repositories that will soon arise following the expanding worldwide root phenotyping effort. RSML follows the XML standard to store two- or three-dimensional image metadata, plant and root properties and geometries, continuous functions along individual root paths, and a suite of annotations at the image, plant, or root scale at one or several time points. Plant ontologies are used to describe botanical entities that are relevant at the scale of root system architecture. An XML schema describes the features and constraints of RSML, and open-source packages have been developed in several languages (R, Excel, Java, Python, and C#) to enable researchers to integrate RSML files into popular research workflow. PMID:25614065
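
    Because RSML is plain XML, a few lines of standard-library code suffice to read the geometry; the tiny document below follows the structure described above (scene/plant/root with a polyline geometry), but element details should be checked against the official RSML schema.

    ```python
    import xml.etree.ElementTree as ET

    # A minimal hand-written document in the spirit of RSML (simplified).
    RSML = """<rsml>
      <scene>
        <plant id="p1">
          <root id="r1">
            <geometry><polyline>
              <point x="0.0" y="0.0"/><point x="0.1" y="-1.2"/><point x="0.15" y="-2.4"/>
            </polyline></geometry>
          </root>
        </plant>
      </scene>
    </rsml>"""

    doc = ET.fromstring(RSML)
    for root in doc.iter("root"):
        pts = [(float(p.get("x")), float(p.get("y"))) for p in root.iter("point")]
        # Rough root length: sum of straight-line segment lengths.
        length = sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                     for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
        print(root.get("id"), f"length = {length:.2f}")
    ```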

  12. Root system markup language: toward a unified root architecture description language.

    PubMed

    Lobet, Guillaume; Pound, Michael P; Diener, Julien; Pradal, Christophe; Draye, Xavier; Godin, Christophe; Javaux, Mathieu; Leitner, Daniel; Meunier, Félicien; Nacry, Philippe; Pridmore, Tony P; Schnepf, Andrea

    2015-03-01

    The number of image analysis tools supporting the extraction of architectural features of root systems has increased in recent years. These tools offer a handy set of complementary facilities, yet it is widely accepted that none of these software tools is able to extract in an efficient way the growing array of static and dynamic features for different types of images and species. We describe the Root System Markup Language (RSML), which has been designed to overcome two major challenges: (1) to enable portability of root architecture data between different software tools in an easy and interoperable manner, allowing seamless collaborative work; and (2) to provide a standard format upon which to base central repositories that will soon arise following the expanding worldwide root phenotyping effort. RSML follows the XML standard to store two- or three-dimensional image metadata, plant and root properties and geometries, continuous functions along individual root paths, and a suite of annotations at the image, plant, or root scale at one or several time points. Plant ontologies are used to describe botanical entities that are relevant at the scale of root system architecture. An XML schema describes the features and constraints of RSML, and open-source packages have been developed in several languages (R, Excel, Java, Python, and C#) to enable researchers to integrate RSML files into popular research workflow. © 2015 American Society of Plant Biologists. All Rights Reserved.

  13. Unified Model for Academic Competence, Social Adjustment, and Psychopathology.

    ERIC Educational Resources Information Center

    Schaefer, Earl S.; And Others

    A unified conceptual model is needed to integrate the extensive research on (1) social competence and adaptive behavior, (2) converging conceptualizations of social adjustment and psychopathology, and (3) emerging concepts and measures of academic competence. To develop such a model, a study was conducted in which teacher ratings were collected on…

  14. Unifying theory for terrestrial research infrastructures

    NASA Astrophysics Data System (ADS)

    Mirtl, Michael

    2016-04-01

    The presentation will elaborate on the basic steps needed for building a common theoretical base between Research Infrastructures (RIs) focusing on terrestrial ecosystems. This theoretical base is needed for developing better cooperation and integration in the near future. An overview of different theories will be given and ways toward a unifying approach explored. In a second step, more practical implications of a theory-guided integration will be developed alongside the following guiding questions: • How do the existing and planned European environmental RIs map onto a possible unifying theory of terrestrial ecosystems (covered structures and functions, scale; overlaps and gaps)? • Can a unifying theory improve the consistent definition of RIs' scientific scope and focal science questions? • How could a division of tasks between RIs be organized in order to minimize parallel efforts? • Where concretely do existing and planned European environmental RIs need to interact to respond to overarching questions (top-down component)? • What practical fora and mechanisms (across RIs) would be needed to bridge the gap between PI-driven (bottom-up) efforts and centralized RI design and operations?

  15. Virtual Labs (Science Gateways) as platforms for Free and Open Source Science

    NASA Astrophysics Data System (ADS)

    Lescinsky, David; Car, Nicholas; Fraser, Ryan; Friedrich, Carsten; Kemp, Carina; Squire, Geoffrey

    2016-04-01

    The Free and Open Source Software (FOSS) movement promotes community engagement in software development, as well as provides access to a range of sophisticated technologies that would be prohibitively expensive if obtained commercially. However, as geoinformatics and eResearch tools and services become more dispersed, it becomes more complicated to identify and interface between the many required components. Virtual Laboratories (VLs, also known as Science Gateways) simplify the management and coordination of these components by providing a platform linking many, if not all, of the steps in particular scientific processes. These enable scientists to focus on their science, rather than the underlying supporting technologies. We describe a modular, open source, VL infrastructure that can be reconfigured to create VLs for a wide range of disciplines. Development of this infrastructure has been led by CSIRO in collaboration with Geoscience Australia and the National Computational Infrastructure (NCI) with support from the National eResearch Collaboration Tools and Resources (NeCTAR) and the Australian National Data Service (ANDS). Initially, the infrastructure was developed to support the Virtual Geophysical Laboratory (VGL), and has subsequently been repurposed to create the Virtual Hazards Impact and Risk Laboratory (VHIRL) and the reconfigured Australian National Virtual Geophysics Laboratory (ANVGL). During each step of development, new capabilities and services have been added and/or enhanced. We plan on continuing to follow this model using a shared, community code base. The VL platform facilitates transparent and reproducible science by providing access to both the data and methodologies used during scientific investigations. This is further enhanced by the ability to set up and run investigations using computational resources accessed through the VL. Data is accessed using registries pointing to catalogues within public data repositories (notably including the NCI National Environmental Research Data Interoperability Platform), or by uploading data directly from user supplied addresses or files. Similarly, scientific software is accessed through registries pointing to software repositories (e.g., GitHub). Runs are configured by using or modifying default templates designed by subject matter experts. After the appropriate computational resources are identified by the user, Virtual Machines (VMs) are spun up and jobs are submitted to service providers (currently the NeCTAR public cloud or Amazon Web Services). Following completion of the jobs the results can be reviewed and downloaded if desired. By providing a unified platform for science, the VL infrastructure enables sophisticated provenance capture and management. The source of input data (including both collection and queries), user information, software information (version and configuration details) and output information are all captured and managed as a VL resource which can be linked to output data sets. This provenance resource provides a mechanism for publication and citation for Free and Open Source Science.

  16. Free Factories: Unified Infrastructure for Data Intensive Web Services

    PubMed Central

    Zaranek, Alexander Wait; Clegg, Tom; Vandewege, Ward; Church, George M.

    2010-01-01

    We introduce the Free Factory, a platform for deploying data-intensive web services using small clusters of commodity hardware and free software. Independently administered virtual machines called Freegols give application developers the flexibility of a general purpose web server, along with access to distributed batch processing, cache and storage services. Each cluster exploits idle RAM and disk space for cache, and reserves disks in each node for high bandwidth storage. The batch processing service uses a variation of the MapReduce model. Virtualization allows every CPU in the cluster to participate in batch jobs. Each 48-node cluster can achieve 4-8 gigabytes per second of disk I/O. Our intent is to use multiple clusters to process hundreds of simultaneous requests on multi-hundred terabyte data sets. Currently, our applications achieve 1 gigabyte per second of I/O with 123 disks by scheduling batch jobs on two clusters, one of which is located in a remote data center. PMID:20514356
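
    The MapReduce-style batch model mentioned above reduces, in a single process, to a map phase that emits key-value pairs and a reduce phase that combines them; the word-count job below is the usual toy illustration, not the Free Factory API.

    ```python
    from collections import defaultdict
    from itertools import chain

    def map_phase(documents):
        """Emit (key, value) pairs from each input record."""
        return chain.from_iterable(((word, 1) for word in doc.split()) for doc in documents)

    def reduce_phase(pairs):
        """Group pairs by key and combine the values."""
        grouped = defaultdict(int)
        for key, value in pairs:
            grouped[key] += value
        return dict(grouped)

    docs = ["free factories unify infrastructure", "batch jobs use a mapreduce variation"]
    print(reduce_phase(map_phase(docs)))
    ```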

  17. Lipidomics informatics for life-science.

    PubMed

    Schwudke, D; Shevchenko, A; Hoffmann, N; Ahrends, R

    2017-11-10

    Lipidomics encompasses analytical approaches that aim to identify and quantify the complete set of lipids, defined as the lipidome, in a given cell, tissue, or organism, as well as their interactions with other molecules. The majority of lipidomics workflows are based on mass spectrometry, which has proven a powerful tool in systems biology in concert with other omics disciplines. Unfortunately, bioinformatics infrastructures for this relatively young discipline are accessible only to a few specialists. Search engines, quantification algorithms, visualization tools, and databases developed by the 'Lipidomics Informatics for Life-Science' (LIFS) partners will be restructured and standardized to provide broad access to these specialized bioinformatics pipelines. Work on the many medical challenges related to lipid metabolic alterations will be fostered by the capacity building proposed by LIFS. LIFS, as a member of the 'German Network for Bioinformatics' (de.NBI) node for 'Bioinformatics for Proteomics' (BioInfra.Prot), will provide access to the described software as well as to tutorials and consulting services via a unified web portal. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. The control system of the polarized internal target of ANKE at COSY

    NASA Astrophysics Data System (ADS)

    Kleines, H.; Sarkadi, J.; Zwoll, K.; Engels, R.; Grigoryev, K.; Mikirtychyants, M.; Nekipelov, M.; Rathmann, F.; Seyfarth, H.; Kravtsov, P.; Vasilyev, A.

    2006-05-01

    The polarized internal target for the ANKE experiment at the Cooler Synchrotron COSY of the Forschungszentrum Jülich utilizes a polarized atomic beam source to feed a storage cell with polarized hydrogen or deuterium atoms. The nuclear polarization is measured with a Lamb-shift polarimeter. For common control of the two systems, industrial equipment was selected providing reliable, long-term support and remote control of the target as well as measurement and optimization of its operating parameters. The interlock system has been implemented on the basis of SIEMENS SIMATIC S7-300 family of programmable logic controllers. In order to unify the interfacing to the control computer, all front-end equipment is connected via the PROFIBUS DP fieldbus. The process control software was implemented using the Windows-based WinCC toolkit from SIEMENS. The variety of components, to be controlled, and the logical structure of the control and interlock system are described. Finally, a number of applications derived from the present development to other, new installations are briefly mentioned.

  19. High-resolution coupled physics solvers for analysing fine-scale nuclear reactor design problems

    DOE PAGES

    Mahadevan, Vijay S.; Merzari, Elia; Tautges, Timothy; ...

    2014-06-30

    An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is being investigated, to tightly couple neutron transport and thermal-hydraulics physics under the SHARP framework. Over several years, high-fidelity, validated mono-physics solvers with proven scalability on petascale architectures have been developed independently. Based on a unified component-based architecture, these existing codes can be coupled with a mesh-data backplane and a flexible coupling-strategy-based driver suite to produce a viable tool for analysts. The goal of the SHARP framework is to perform fully resolved coupled physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. Finally, the coupling methodology and software interfaces of the framework are presented, along with verification studies on two representative fast sodium-cooled reactor demonstration problems to prove the usability of the SHARP framework.
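
    The coupling strategy can be illustrated schematically. The sketch below runs a Picard (fixed-point) iteration between two toy single-physics solvers exchanging power and temperature; the solver stubs and feedback coefficients are invented for illustration and are not the SHARP, PROTEUS or Nek5000 interfaces.

      # Schematic operator-split coupling loop between a "neutronics" and a
      # "thermal-hydraulics" stub; models and numbers are purely illustrative.
      def neutronics_solve(temperature):
          # Toy Doppler-like feedback: power drops slightly as fuel temperature rises.
          return 1000.0 * (1.0 - 1.0e-4 * (temperature - 600.0))

      def thermal_hydraulics_solve(power):
          # Toy heat balance: temperature rises with power.
          return 600.0 + 0.05 * (power - 1000.0)

      def coupled_solve(tol=1e-8, max_iters=50):
          temperature, power = 900.0, 0.0            # deliberately poor initial guess
          for iteration in range(1, max_iters + 1):
              new_power = neutronics_solve(temperature)
              new_temperature = thermal_hydraulics_solve(new_power)
              residual = max(abs(new_power - power), abs(new_temperature - temperature))
              power, temperature = new_power, new_temperature
              if residual < tol:                     # fixed-point (Picard) convergence test
                  return power, temperature, iteration
          raise RuntimeError("coupled iteration did not converge")

      print(coupled_solve())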

  20. Geometric Optimization for Non-Thrombogenicity of a Centrifugal Blood Pump through Flow Visualization

    NASA Astrophysics Data System (ADS)

    Toyoda, Masahiro; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi; Tsutsui, Tatsuo; Sankai, Yoshiyuki

    A monopivot centrifugal blood pump, whose impeller is supported by a pivot bearing and a passive magnetic bearing, is under development for an implantable artificial heart. Its hemolysis level is lower than that of commercial centrifugal pumps, and the pump is as small as 160 mL in volume. To address thrombus formation caused by fluid dynamics, flow visualization experiments and animal experiments have been undertaken. For flow visualization, a three-fold scale-up model, a high-speed video system, and particle tracking velocimetry software were used. To verify non-thrombogenicity, one-week animal experiments were conducted with sheep. The thrombus initially observed around the pivot was eliminated by unifying the separate washout holes into a single small centered hole that induces high shear around the pivot. The thrombus contours were found to correspond to shear rates of 300 s-1 for red thrombus and 1300-1700 s-1 for white thrombus, respectively. The flow visualization technique thus proved to be a useful tool for predicting thrombus location.

  1. Multivariate Methods for Meta-Analysis of Genetic Association Studies.

    PubMed

    Dimou, Niki L; Pantavou, Katerina G; Braliou, Georgia G; Bagos, Pantelis G

    2018-01-01

    Multivariate meta-analysis of genetic association studies and genome-wide association studies has received remarkable attention because it improves the precision of the analysis. Here, we review, summarize and present in a unified framework methods for multivariate meta-analysis of genetic association studies and genome-wide association studies. Starting with the statistical methods used for robust analysis and genetic model selection, we present in brief univariate methods for meta-analysis and we then scrutinize multivariate methodologies. Multivariate models of meta-analysis for single gene-disease association studies, including models for haplotype association studies, multiple linked polymorphisms and multiple outcomes, are discussed. The popular Mendelian randomization approach and special cases of meta-analysis addressing issues such as the assumption of the mode of inheritance, deviation from Hardy-Weinberg Equilibrium and gene-environment interactions are also presented. All available methods are illustrated with practical applications, and methodologies that could be developed in the future are discussed. Links to all available software implementing multivariate meta-analysis methods are also provided.
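
    The core multivariate pooling step has a compact fixed-effect form: pooled effect = (sum_i S_i^-1)^-1 * sum_i S_i^-1 y_i, where y_i is the vector of effects and S_i the within-study covariance matrix of study i. The sketch below implements that generalized inverse-variance weighting on made-up bivariate data; it is not drawn from any particular package cited in the review.

      # Fixed-effect multivariate meta-analysis by generalized inverse-variance weighting.
      import numpy as np

      def multivariate_fixed_effect(effects, covariances):
          """effects: list of length-p effect vectors; covariances: matching p x p matrices."""
          p = len(effects[0])
          weight_sum = np.zeros((p, p))
          weighted_effects = np.zeros(p)
          for y, S in zip(effects, covariances):
              w = np.linalg.inv(S)                 # study weight matrix
              weight_sum += w
              weighted_effects += w @ y
          pooled_cov = np.linalg.inv(weight_sum)   # covariance of the pooled estimate
          return pooled_cov @ weighted_effects, pooled_cov

      effects = [np.array([0.30, 0.10]), np.array([0.25, 0.05]), np.array([0.40, 0.20])]
      covariances = [np.diag([0.02, 0.01]), np.diag([0.03, 0.02]), np.diag([0.01, 0.01])]
      pooled, pooled_cov = multivariate_fixed_effect(effects, covariances)   # invented numbers
      print(pooled, np.sqrt(np.diag(pooled_cov)))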

  2. Wound Documentation by Using 3G Mobile as Acquisition Terminal: An Appropriate Proposal for Community Wound Care.

    PubMed

    Ge, Kui; Wu, Minjie; Liu, Hu; Gong, Jiahong; Zhang, Yi; Hu, Qiang; Fang, Min; Tao, Yanping; Cai, Minqiang; Chen, Hua; Wang, Jianbo; Xie, Ting; Lu, Shuliang

    2015-06-01

    The increasing number of wound disease cases now poses a major challenge in China. For the convenience of wound patients, wound management in community health care centers under the supervision of specialists at general hospitals is an ideal solution. To ensure an accurate diagnosis in community health clinics, it is important to establish "the same language" for wound description: a unified description format that includes the wound image. We developed a wound information management system built from an acquisition terminal, wound descriptions, a data bank, and related software. In this system, a 3G mobile phone served as the acquisition terminal and could be used to access the data bank. This documentation system is considered an appropriate proposal for community wound care because of its objectivity, uniformity, and ease of use. It also opens up the possibility of epidemiological studies in the future. © The Author(s) 2014.

  3. Cyberinfrastructure to Support Collaborative and Reproducible Computational Hydrologic Modeling

    NASA Astrophysics Data System (ADS)

    Goodall, J. L.; Castronova, A. M.; Bandaragoda, C.; Morsy, M. M.; Sadler, J. M.; Essawy, B.; Tarboton, D. G.; Malik, T.; Nijssen, B.; Clark, M. P.; Liu, Y.; Wang, S. W.

    2017-12-01

    Creating cyberinfrastructure to support reproducibility of computational hydrologic models is an important research challenge. Addressing this challenge requires open and reusable code and data with machine and human readable metadata, organized in ways that allow others to replicate results and verify published findings. Specific digital objects that must be tracked for reproducible computational hydrologic modeling include (1) raw initial datasets, (2) data processing scripts used to clean and organize the data, (3) processed model inputs, (4) model results, and (5) the model code with an itemization of all software dependencies and computational requirements. HydroShare is a cyberinfrastructure under active development designed to help users store, share, and publish digital research products in order to improve reproducibility in computational hydrology, with an architecture supporting hydrologic-specific resource metadata. Researchers can upload data required for modeling, add hydrology-specific metadata to these resources, and use the data directly within HydroShare.org for collaborative modeling using tools like CyberGIS, Sciunit-CLI, and JupyterHub that have been integrated with HydroShare to run models using notebooks, Docker containers, and cloud resources. Current research aims to implement the Structure For Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model within HydroShare to support hypothesis-driven hydrologic modeling while also taking advantage of the HydroShare cyberinfrastructure. The goal of this integration is to create the cyberinfrastructure that supports hypothesis-driven model experimentation, education, and training efforts by lowering barriers to entry, reducing the time spent on informatics technology and software development, and supporting collaborative research within and across research groups.
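
    The five digital objects itemized above are, in effect, a manifest that a reproducible modeling resource must satisfy. The sketch below encodes that checklist as a plain completeness test; the field names and file paths are illustrative and are not the HydroShare resource schema or client API.

      # Checklist of the five tracked digital objects; names and paths are illustrative only.
      REQUIRED_OBJECTS = [
          "raw_datasets",
          "processing_scripts",
          "model_inputs",
          "model_results",
          "model_code_and_dependencies",
      ]

      def missing_objects(manifest):
          """Return the tracked object categories that are still empty or absent."""
          return [key for key in REQUIRED_OBJECTS if not manifest.get(key)]

      manifest = {
          "raw_datasets": ["forcing/daymet_2010.nc"],
          "processing_scripts": ["scripts/clean_forcing.py"],
          "model_inputs": ["settings/summa_file_manager.txt"],
          "model_results": [],                                  # not yet produced
          "model_code_and_dependencies": ["environment.yml"],
      }
      print("incomplete for publication:", missing_objects(manifest))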

  4. Automatic Synthesis of UML Designs from Requirements in an Iterative Process

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Whittle, Jon; Clancy, Daniel (Technical Monitor)

    2001-01-01

    The Unified Modeling Language (UML) is gaining wide popularity for the design of object-oriented systems. UML combines various object-oriented graphical design notations under one common framework. A major factor for the broad acceptance of UML is that it can be conveniently used in a highly iterative, Use Case (or scenario-based) process (although the process is not a part of UML). Here, the (pre-) requirements for the software are specified rather informally as Use Cases and a set of scenarios. A scenario can be seen as an individual trace of a software artifact. Besides first sketches of a class diagram to illustrate the static system breakdown, scenarios are a favorite way of communication with the customer, because scenarios describe concrete interactions between entities and are thus easy to understand. Scenarios with a high level of detail are often expressed as sequence diagrams. Later in the design and implementation stage (elaboration and implementation phases), a design of the system's behavior is often developed as a set of statecharts. From there (and the full-fledged class diagram), actual code development is started. Current commercial UML tools support this phase by providing code generators for class diagrams and statecharts. In practice, it can be observed that the transition from requirements to design to code is a highly iterative process. In this talk, a set of algorithms is presented which perform reasonable synthesis and transformations between different UML notations (sequence diagrams, Object Constraint Language (OCL) constraints, statecharts). More specifically, we will discuss the following transformations: Statechart synthesis, introduction of hierarchy, consistency of modifications, and "design-debugging".
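
    One of the transformations mentioned, statechart synthesis from scenarios, can be caricatured in a few lines: scenario traces are folded into a single transition table, and a consistency check flags contradictory scenarios. The sketch below is a deliberately minimal stand-in for the algorithms discussed, not their implementation.

      # Toy fold of scenario traces (from sequence diagrams) into one transition table.
      def synthesize_transitions(scenarios):
          """Each scenario is a list of (state, event, next_state) triples."""
          transitions = {}
          for trace in scenarios:
              for state, event, next_state in trace:
                  key = (state, event)
                  if key in transitions and transitions[key] != next_state:
                      raise ValueError(f"inconsistent scenarios at {key}")   # consistency check
                  transitions[key] = next_state
          return transitions

      scenarios = [
          [("Idle", "request", "Busy"), ("Busy", "done", "Idle")],
          [("Idle", "request", "Busy"), ("Busy", "error", "Failed"), ("Failed", "reset", "Idle")],
      ]
      for (state, event), target in synthesize_transitions(scenarios).items():
          print(f"{state} --{event}--> {target}")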

  5. The Oceanographic Multipurpose Software Environment (OMUSE v1.0)

    NASA Astrophysics Data System (ADS)

    Pelupessy, Inti; van Werkhoven, Ben; van Elteren, Arjen; Viebahn, Jan; Candy, Adam; Portegies Zwart, Simon; Dijkstra, Henk

    2017-08-01

    In this paper we present the Oceanographic Multipurpose Software Environment (OMUSE). OMUSE aims to provide a homogeneous environment for existing or newly developed numerical ocean simulation codes, simplifying their use and deployment. In this way, numerical experiments that combine ocean models representing different physics or spanning different ranges of physical scales can be easily designed. Rapid development of simulation models is made possible through the creation of simple high-level scripts. The low-level core of the abstraction in OMUSE is designed to deploy these simulations efficiently on heterogeneous high-performance computing resources. Cross-verification of simulation models with different codes and numerical methods is facilitated by the unified interface that OMUSE provides. Reproducibility in numerical experiments is fostered by allowing complex numerical experiments to be expressed in portable scripts that conform to a common OMUSE interface. Here, we present the design of OMUSE as well as the modules and model components currently included, which range from a simple conceptual quasi-geostrophic solver to the global circulation model POP (Parallel Ocean Program). The uniform access to the codes' simulation state and the extensive automation of data transfer and conversion operations aids the implementation of model couplings. We discuss the types of couplings that can be implemented using OMUSE. We also present example applications that demonstrate the straightforward model initialization and the concurrent use of data analysis tools on a running model. We give examples of multiscale and multiphysics simulations by embedding a regional ocean model into a global ocean model and by coupling a surface wave propagation model with a coastal circulation model.

  6. Semantic solutions to Heliophysics data access

    NASA Astrophysics Data System (ADS)

    Narock, T. W.; Vandegriff, J. D.; Weigel, R. S.

    2011-12-01

    Within the domain of Heliophysics, data discovery is being actively addressed. However, data diversity in the returned results has proven to be a significant barrier to integrated multi-mission analysis. Software is being actively developed (e.g. Vandergriff and Brown, 2008) that is data format and measurement type agnostic. However, such approaches rely on an a priori definition of common baseline parameters, units, and coordinate systems onto which all data will be mapped. In this work, we describe our efforts at utilizing a task ontology (Guarino, 1998) to model the steps involved in data transformation within Heliophysics. Thus, given Heliophysics logic and heterogeneous input data, we are able to develop software that is able to infer the set of steps required to compute user specified parameters. Such a framework offers flexibility by allowing users to define their own preferred sets of parameters, units, and coordinate systems they would like in their analysis. In addition, the storage of this information as ontology instances means they are external to source code and are easily shareable and extensible. The additional inclusion of a provenance ontology allows us to capture the historical record of each data analysis session for future review. We describe our use of existing task and provenance ontologies and provide example use cases as well as potential future applications. References J. Vandegriff and L. Brown, (2010), A framework for reading and unifying heliophysics time series data, Earth Science Informatics, Volume 3, Numbers 1-2, Pages 75-86 N. Guarino, (1998), Formal Ontology in Information Systems, Proceedings of FOIS'98, Trento, Italy, 6-8 June 1998. Amsterdam, IOS Press, pp. 3-15.
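
    Inferring "the set of steps required to compute user specified parameters" amounts to a search over declaratively stored transformations. The sketch below does a breadth-first search over a toy conversion table; the step names are placeholders, not instances from the authors' task ontology.

      # Breadth-first search for a chain of declared conversions from source to target.
      from collections import deque

      CONVERSIONS = {                                # toy declarative knowledge base
          ("B_field_nT", "B_field_T"): "scale_1e-9",
          ("B_field_T", "B_GSE"): "rotate_to_GSE",
          ("B_GSE", "B_GSM"): "rotate_GSE_to_GSM",
      }

      def infer_steps(source, target):
          queue, seen = deque([(source, [])]), {source}
          while queue:
              current, path = queue.popleft()
              if current == target:
                  return path
              for (src, dst), step in CONVERSIONS.items():
                  if src == current and dst not in seen:
                      seen.add(dst)
                      queue.append((dst, path + [step]))
          return None                                # no transformation chain exists

      print(infer_steps("B_field_nT", "B_GSM"))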

  7. UML as a cell and biochemistry modeling language.

    PubMed

    Webb, Ken; White, Tony

    2005-06-01

    The systems biology community is building increasingly complex models and simulations of cells and other biological entities, and are beginning to look at alternatives to traditional representations such as those provided by ordinary differential equations (ODE). The lessons learned over the years by the software development community in designing and building increasingly complex telecommunication and other commercial real-time reactive systems, can be advantageously applied to the problems of modeling in the biology domain. Making use of the object-oriented (OO) paradigm, the unified modeling language (UML) and Real-Time Object-Oriented Modeling (ROOM) visual formalisms, and the Rational Rose RealTime (RRT) visual modeling tool, we describe a multi-step process we have used to construct top-down models of cells and cell aggregates. The simple example model described in this paper includes membranes with lipid bilayers, multiple compartments including a variable number of mitochondria, substrate molecules, enzymes with reaction rules, and metabolic pathways. We demonstrate the relevance of abstraction, reuse, objects, classes, component and inheritance hierarchies, multiplicity, visual modeling, and other current software development best practices. We show how it is possible to start with a direct diagrammatic representation of a biological structure such as a cell, using terminology familiar to biologists, and by following a process of gradually adding more and more detail, arrive at a system with structure and behavior of arbitrary complexity that can run and be observed on a computer. We discuss our CellAK (Cell Assembly Kit) approach in terms of features found in SBML, CellML, E-CELL, Gepasi, Jarnac, StochSim, Virtual Cell, and membrane computing systems.

  8. Techniques and software tools for estimating ultrasonic signal-to-noise ratios

    NASA Astrophysics Data System (ADS)

    Chiou, Chien-Ping; Margetan, Frank J.; McKillip, Matthew; Engle, Brady J.; Roberts, Ronald A.

    2016-02-01

    At Iowa State University's Center for Nondestructive Evaluation (ISU CNDE), the use of models to simulate ultrasonic inspections has played a key role in R&D efforts for over 30 years. To this end a series of wave propagation models, flaw response models, and microstructural backscatter models have been developed to address inspection problems of interest. One use of the combined models is the estimation of signal-to-noise ratios (S/N) in circumstances where backscatter from the microstructure (grain noise) acts to mask sonic echoes from internal defects. Such S/N models have been used in the past to address questions of inspection optimization and reliability. Under the sponsorship of the National Science Foundation's Industry/University Cooperative Research Center at ISU, an effort was recently initiated to improve existing research-grade software by adding a graphical user interface (GUI) to create user-friendly tools for the rapid estimation of S/N for ultrasonic inspections of metals. The software combines: (1) a Python-based GUI for specifying an inspection scenario and displaying results; and (2) a Fortran-based engine for computing defect signal and backscattered grain noise characteristics. The latter makes use of several models including: the Multi-Gaussian Beam Model for computing sonic fields radiated by commercial transducers; the Thompson-Gray Model for the response from an internal defect; the Independent Scatterer Model for backscattered grain noise; and the Stanke-Kino Unified Model for attenuation. The initial emphasis was on reformulating the research-grade code into a suitable modular form, adding the graphical user interface and performing computations rapidly and robustly. Thus the initial inspection problem being addressed is relatively simple. A normal-incidence pulse/echo immersion inspection is simulated for a curved metal component having a non-uniform microstructure, specifically an equiaxed, untextured microstructure in which the average grain size may vary with depth. The defect may be a flat-bottomed-hole reference reflector, a spherical void or a spherical inclusion. In future generations of the software, microstructures and defect types will be generalized and oblique incidence inspections will be treated as well. This paper provides an overview of the modeling approach and presents illustrative results output by the first-generation software.
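
    As a point of reference for what such tools report, a bare-bones S/N estimate is simply the peak defect-signal amplitude divided by the RMS level of the grain noise. The sketch below computes that ratio on synthetic A-scan data; it is a generic illustration, not the ISU CNDE model chain described above.

      # Back-of-the-envelope S/N: peak defect echo amplitude over RMS grain noise.
      import numpy as np

      def signal_to_noise(defect_signal, grain_noise):
          """Both arguments are 1-D arrays of A-scan voltages at the same gain."""
          peak_signal = np.max(np.abs(defect_signal))
          rms_noise = np.sqrt(np.mean(grain_noise ** 2))
          return peak_signal / rms_noise

      rng = np.random.default_rng(0)
      noise = 0.05 * rng.standard_normal(2048)            # synthetic grain noise
      signal = noise.copy()
      signal[1000:1016] += 0.4 * np.hanning(16)           # synthetic defect echo
      print(f"S/N = {signal_to_noise(signal, noise):.1f}")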

  9. Building Data and Information Capacity in Environmental Public Health: A Best-Worst Scaling Experiment.

    PubMed

    Wallar, Lauren E; Sargeant, Jan M; McEwen, Scott A; Mercer, Nicola J; Papadopoulos, Andrew

    Context: Environmental public health practitioners rely on information technology (IT) to maintain and improve environmental health. However, current systems have limited capacity, and a better understanding of the importance of IT features is needed to enhance data and information capacity. Objectives: (1) Rank IT features according to the percentage of respondents who rated them as essential to an information management system and (2) quantify the relative importance of a subset of these features using best-worst scaling. Design: Information technology features were initially identified from a previously published systematic review of software evaluation criteria and a list of software options from a private corporation specializing in inspection software. Duplicates and features unrelated to environmental public health were removed. The condensed list was refined by a working group of environmental public health management to a final list of 57 IT features. The essentialness of features was electronically rated by environmental public health managers. Features that 50% to 80% of respondents rated as essential (n = 26) were subsequently evaluated using best-worst scaling. Setting: Ontario, Canada. Participants: Environmental public health professionals in local public health. Main outcome measures: Importance scores of IT features. Results: The majority of IT features (47/57) were considered essential to an information management system by at least half of the respondents (n = 52). The highest-rated features were delivery to printer, software encryption capability, and software maintenance services. Of the 26 features evaluated in the best-worst scaling exercise, the most important features were orientation to all practice areas, off-line capability, and ability to view past inspection reports and results. Conclusions: The development of a single, unified environmental public health information management system that fulfills the reporting and functionality needs of system users is recommended. This system should be implemented by all public health units to support data and information capacity in local environmental public health. This study can be used to guide vendor evaluation, negotiation, and selection in local environmental public health, and provides an example of academia-practice partnerships and the use of best-worst scaling in public health research.
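
    The best-worst scaling scores themselves come from a simple tally: how often a feature is picked as most important minus how often it is picked as least important, scaled by how often it was shown. The sketch below applies that tally to invented choice sets; it is not the study's estimation procedure or data.

      # Best-minus-worst tally for best-worst scaling choice sets (invented data).
      from collections import Counter

      def best_worst_scores(choice_sets):
          """choice_sets: iterable of (shown_features, best_choice, worst_choice) tuples."""
          best, worst, shown = Counter(), Counter(), Counter()
          for features, chosen_best, chosen_worst in choice_sets:
              shown.update(features)
              best[chosen_best] += 1
              worst[chosen_worst] += 1
          return {f: (best[f] - worst[f]) / shown[f] for f in shown}

      choice_sets = [
          (["off-line capability", "encryption", "report viewing"], "off-line capability", "encryption"),
          (["report viewing", "printer delivery", "off-line capability"], "report viewing", "printer delivery"),
      ]
      for feature, score in sorted(best_worst_scores(choice_sets).items(), key=lambda kv: -kv[1]):
          print(f"{feature:>20s}: {score:+.2f}")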

  10. Factors Affecting the Adoption of E-Learning Systems in Qatar and USA: Extending the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2)

    ERIC Educational Resources Information Center

    El-Masri, Mazen; Tarhini, Ali

    2017-01-01

    This study examines the major factors that may hinder or enable the adoption of e-learning systems by university students in developing (Qatar) as well as developed (USA) countries. To this end, we used extended Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) with Trust as an external variable. By means of an online survey, data were…

  11. jade: An End-To-End Data Transfer and Catalog Tool

    NASA Astrophysics Data System (ADS)

    Meade, P.

    2017-10-01

    The IceCube Neutrino Observatory is a cubic kilometer neutrino telescope located at the Geographic South Pole. IceCube collects 1 TB of data every day. An online filtering farm processes this data in real time and selects 10% to be sent via satellite to the main data center at the University of Wisconsin-Madison. IceCube has two year-round on-site operators. New operators are hired every year, due to the hard conditions of wintering at the South Pole. These operators are tasked with the daily operations of running a complex detector in serious isolation conditions. One of the systems they operate is the data archiving and transfer system. Due to these challenging operational conditions, the data archive and transfer system must above all be simple and robust. It must also share the limited resource of satellite bandwidth, and collect and preserve useful metadata. The original data archive and transfer software for IceCube was written in 2005. After running in production for several years, the decision was taken to fully rewrite it, in order to address a number of structural drawbacks. The new data archive and transfer software (JADE2) has been in production for several months providing improved performance and resiliency. One of the main goals for JADE2 is to provide a unified system that handles the IceCube data end-to-end: from collection at the South Pole, all the way to long-term archive and preservation in dedicated repositories at the North. In this contribution, we describe our experiences and lessons learned from developing and operating the data archive and transfer software for a particle physics experiment in extreme operational conditions like IceCube.

  12. SHARP pre-release v1.0 - Current Status and Documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahadevan, Vijay S.; Rahaman, Ronald O.

    The NEAMS Reactor Product Line effort aims to develop an integrated multiphysics simulation capability for the design and analysis of future generations of nuclear power plants. The Reactor Product Line code suite’s multi-resolution hierarchy is being designed to ultimately span the full range of length and time scales present in relevant reactor design and safety analyses, as well as scale from desktop to petaflop computing platforms. In this report, building on a previous report issued in September 2014, we describe our continued efforts to integrate thermal/hydraulics, neutronics, and structural mechanics modeling codes to perform coupled analysis of a representative fast sodium-cooled reactor core in preparation for a unified release of the toolkit. The work reported in the current document covers the software engineering aspects of managing the entire stack of components in the SHARP toolkit and the continuous integration efforts ongoing to prepare a release candidate for interested reactor analysis users. Here we report on the continued integration of PROTEUS/Nek5000 and Diablo into the NEAMS framework and the software processes that enable users to utilize the capabilities without losing scientific productivity. Due to the complexity of the individual modules and their necessary/optional dependency library chain, we focus on the configuration and build aspects of the SHARP toolkit, which includes the capability to auto-download dependencies and to configure/install with optimal flags in an architecture-aware fashion. Such complexity is untenable without strong software engineering processes such as source management, source control, change reviews, unit tests, integration tests and continuous test suites. Details on these processes are provided in the report as a building step for a SHARP user guide that will accompany the first release, expected by March 2016.

  13. Unified formalism for higher order non-autonomous dynamical systems

    NASA Astrophysics Data System (ADS)

    Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso

    2012-03-01

    This work is devoted to giving a geometric framework for describing higher order non-autonomous mechanical systems. The starting point is to extend the Lagrangian-Hamiltonian unified formalism of Skinner and Rusk for these kinds of systems, generalizing previous developments for higher order autonomous mechanical systems and first-order non-autonomous mechanical systems. Then, we use this unified formulation to derive the standard Lagrangian and Hamiltonian formalisms, including the Legendre-Ostrogradsky map and the Euler-Lagrange and the Hamilton equations, both for regular and singular systems. As applications of our model, two examples of regular and singular physical systems are studied.

  14. MaROS: Web Visualization of Mars Orbiting and Landed Assets

    NASA Technical Reports Server (NTRS)

    Wallick, Michael N.; Allard, Daniel A.; Gladden, Roy E.; Hy, Franklin H.

    2011-01-01

    Mars Relay operations currently involve several e-mails and phone calls between lander and orbiter teams in order to settle on an agreed time for performing a communication pass between the landed asset (i.e. rover or lander) and orbiter, then back to Earth. This new application aims to reduce this complexity by presenting a visualization of the overpass time ranges and elevation angle, as well as other information. The user is able to select a specific overflight opportunity to receive further information about that particular pass. This software presents a unified view of the potential communication passes available between orbiting and landed assets on Mars. Each asset is presented to the user in a graphical view showing overpass opportunities, elevation angle, requested and acknowledged communication windows, forward and back latencies, warnings, conflicts, relative planetary times, ACE Schedules, and DSN information. This software is unique in that it is the first of its kind to visually display the information regarding communication opportunities between landed and orbiting Mars assets. The software is written using ActionScript/FLEX, a Web language, meaning that this information may be accessed over the Internet from anywhere in the world.

  15. A Software Tool for the Annotation of Embolic Events in Echo Doppler Audio Signals

    PubMed Central

    Pierleoni, Paola; Maurizi, Lorenzo; Palma, Lorenzo; Belli, Alberto; Valenti, Simone; Marroni, Alessandro

    2017-01-01

    The use of precordial Doppler monitoring to prevent decompression sickness (DS) is well known by the scientific community as an important instrument for early diagnosis of DS. However, the timely and correct diagnosis of DS without assistance from diving medical specialists is unreliable. Thus, a common protocol for the manual annotation of echo Doppler signals and a tool for their automated recording and annotation are necessary. We have implemented original software for efficient bubble appearance annotation and proposed a unified annotation protocol. The tool auto-sets the response time of human “bubble examiners,” performs playback of the Doppler file by rendering it independent of the specific audio player, and enables the annotation of individual bubbles or multiple bubbles known as “showers.” The tool provides a report with an optimized data structure and estimates the embolic risk level according to the Extended Spencer Scale. The tool is built in accordance with ISO/IEC 9126 on software quality and has been projected and tested with assistance from the Divers Alert Network (DAN) Europe Foundation, which employs this tool for its diving data acquisition campaigns. PMID:29242701

  16. State-Chart Autocoder

    NASA Technical Reports Server (NTRS)

    Clark, Kenneth; Watney, Garth; Murray, Alexander; Benowitz, Edward

    2007-01-01

    A computer program translates Unified Modeling Language (UML) representations of state charts into source code in the C, C++, and Python computing languages. ("State charts" here signifies graphical descriptions of states and state transitions of a spacecraft or other complex system.) The UML representations constituting the input to this program are generated by using a UML-compliant graphical design program to draw the state charts. The generated source code is consistent with the "quantum programming" approach, which is so named because it involves discrete states and state transitions that have features in common with states and state transitions in quantum mechanics. Quantum programming enables efficient implementation of state charts, suitable for real-time embedded flight software. In addition to source code, the autocoder program generates a graphical-user-interface (GUI) program that, in turn, generates a display of state transitions in response to events triggered by the user. The GUI program is wrapped around, and can be used to exercise the state-chart behavior of, the generated source code. Once the expected state-chart behavior is confirmed, the generated source code can be augmented with a software interface to the rest of the software with which the source code is required to interact.
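
    The flavor of the generated code can be suggested with a hand-written miniature: each state is an event handler, and a transition simply rebinds the active handler. This is only a sketch of the pattern, not the autocoder's actual output or the quantum-programming framework it targets.

      # Hand-written miniature of an event-driven state machine in the generated style.
      class SwitchStateMachine:
          def __init__(self):
              self.state = self.state_off            # the active state is a handler method

          def dispatch(self, event):
              self.state(event)                      # route each event to the active state

          def state_off(self, event):
              if event == "power_on":
                  print("OFF -> ON")
                  self.state = self.state_on

          def state_on(self, event):
              if event == "power_off":
                  print("ON -> OFF")
                  self.state = self.state_off
              elif event == "fault":
                  print("ON -> SAFE")
                  self.state = self.state_safe

          def state_safe(self, event):
              if event == "reset":
                  print("SAFE -> OFF")
                  self.state = self.state_off

      machine = SwitchStateMachine()
      for event in ["power_on", "fault", "reset"]:
          machine.dispatch(event)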

  17. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex system engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further insure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-end Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test cases into flight software compounded with potential human errors throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. 
In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses that can be tested in VMET to ensure that failures can be detected, and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - ARINC 653 partitioned OS, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as that used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithms performance in the FSW development and test processes.

  18. SFCHECK: a unified set of procedures for evaluating the quality of macromolecular structure-factor data and their agreement with the atomic model.

    PubMed

    Vaguine, A A; Richelle, J; Wodak, S J

    1999-01-01

    In this paper we present SFCHECK, a stand-alone software package that features a unified set of procedures for evaluating the structure-factor data obtained from X-ray diffraction experiments and for assessing the agreement of the atomic coordinates with these data. The evaluation is performed completely automatically, and produces a concise PostScript pictorial output similar to that of PROCHECK [Laskowski, MacArthur, Moss & Thornton (1993). J. Appl. Cryst. 26, 283-291], greatly facilitating visual inspection of the results. The required inputs are the structure-factor amplitudes and the atomic coordinates. Having those, the program summarizes relevant information on the deposited structure factors and evaluates their quality using criteria such as data completeness, structure-factor uncertainty and the optical resolution computed from the Patterson origin peak. The dependence of various parameters on the nominal resolution (d spacing) is also given. To evaluate the global agreement of the atomic model with the experimental data, the program recomputes the R factor, the correlation coefficient between observed and calculated structure-factor amplitudes and Rfree (when appropriate). In addition, it gives several estimates of the average error in the atomic coordinates. The local agreement between the model and the electron-density map is evaluated on a per-residue basis, considering separately the macromolecule backbone and side-chain atoms, as well as solvent atoms and heterogroups. Among the criteria are the normalized average atomic displacement, the local density correlation coefficient and the polymer chain connectivity. The possibility of computing these criteria using the omit-map procedure is also provided. The described software should be a valuable tool in monitoring the refinement procedure and in assessing structures deposited in databases.
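
    The global agreement statistics mentioned above have compact textbook definitions: R = sum(| |Fobs| - |Fcalc| |) / sum(|Fobs|), plus the linear correlation coefficient between observed and calculated amplitudes. The sketch below evaluates both on synthetic amplitudes (scaling of Fcalc is ignored for brevity); it is not SFCHECK's implementation.

      # Crystallographic R factor and amplitude correlation on synthetic data.
      import numpy as np

      def r_factor(f_obs, f_calc):
          return np.sum(np.abs(f_obs - f_calc)) / np.sum(f_obs)

      def amplitude_correlation(f_obs, f_calc):
          return np.corrcoef(f_obs, f_calc)[0, 1]

      rng = np.random.default_rng(1)
      f_obs = rng.uniform(10.0, 100.0, size=5000)                 # synthetic |Fobs|
      f_calc = f_obs * (1.0 + 0.1 * rng.standard_normal(5000))    # model amplitudes, ~10% error
      print(f"R = {r_factor(f_obs, f_calc):.3f}, CC = {amplitude_correlation(f_obs, f_calc):.3f}")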

  19. A Model of RHIC Using the Unified Accelerator Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilat, F.; Tepikian, S.; Trahern, C. G.

    1998-01-01

    The Unified Accelerator Library (UAL) is an object oriented and modular software environment for accelerator physics which comprises an accelerator object model for the description of the machine (SMF, for Standard Machine Format), a collection of Physics Libraries, and a Perl inte,face that provides a homo­geneous shell for integrating and managing these components. Currently available physics libraries include TEAPOT++, a collection of C++ physics modules conceptually derived from TEAPOT, and DNZLIB, a differential algebra package for map generation. This software environment has been used to build a flat model of RHIC which retains the hierarchical lat­tice description while assigning specificmore » characteristics to individual elements, such as measured field har­monics. A first application of the model and of the simulation capabilities of UAL has been the study of RHIC stability in the presence of siberian snakes and spin rotators. The building blocks of RHIC snakes and rotators are helical dipoles, unconventional devices that can not be modeled by traditional accelerator phys­ics codes and have been implemented in UAL as Taylor maps. Section 2 describes the RHIC data stores, Section 3 the RHIC SMF format and Section 4 the RHIC spe­cific Perl interface (RHIC Shell). Section 5 explains how the RHIC SMF and UAL have been used to study the RHIC dynamic behavior and presents detuning and dynamic aperture results. If the reader is not familiar with the motivation and characteristics of UAL, we include in the Appendix an useful overview paper. An example of a complete set of Perl Scripts for RHIC simulation can also be found in the Appendix.« less

  20. Toxicology ontology perspectives.

    PubMed

    Hardy, Barry; Apic, Gordana; Carthew, Philip; Clark, Dominic; Cook, David; Dix, Ian; Escher, Sylvia; Hastings, Janna; Heard, David J; Jeliazkova, Nina; Judson, Philip; Matis-Mitchell, Sherri; Mitic, Dragana; Myatt, Glenn; Shah, Imran; Spjuth, Ola; Tcheremenskaia, Olga; Toldo, Luca; Watson, David; White, Andrew; Yang, Chihae

    2012-01-01

    The field of predictive toxicology requires the development of open, public, computable, standardized toxicology vocabularies and ontologies to support the applications required by in silico, in vitro, and in vivo toxicology methods and related analysis and reporting activities. In this article we review ontology developments based on a set of perspectives showing how ontologies are being used in predictive toxicology initiatives and applications. Perspectives on resources and initiatives reviewed include OpenTox, eTOX, Pistoia Alliance, ToxWiz, Virtual Liver, EU-ADR, BEL, ToxML, and Bioclipse. We also review existing ontology developments in neighboring fields that can contribute to establishing an ontological framework for predictive toxicology. A significant set of resources is already available to provide a foundation for an ontological framework for 21st century mechanistic-based toxicology research. Ontologies such as ToxWiz provide a basis for application to toxicology investigations, whereas other ontologies under development in the biological, chemical, and biomedical communities could be incorporated in an extended future framework. OpenTox has provided a semantic web framework for the implementation of such ontologies into software applications and linked data resources. Bioclipse developers have shown the benefit of interoperability obtained through ontology by being able to link their workbench application with remote OpenTox web services. Although these developments are promising, an increased international coordination of efforts is greatly needed to develop a more unified, standardized, and open toxicology ontology framework.

  1. MTpy: A Python toolbox for magnetotellurics

    NASA Astrophysics Data System (ADS)

    Krieger, Lars; Peacock, Jared R.

    2014-11-01

    We present the software package MTpy that allows handling, processing, and imaging of magnetotelluric (MT) data sets. Written in Python, the code is open source, containing sub-packages and modules for various tasks within the standard MT data processing and handling scheme. Besides the independent definition of classes and functions, MTpy provides wrappers and convenience scripts to call standard external data processing and modelling software. In its current state, modules and functions of MTpy work on raw and pre-processed MT data. However, rather than providing a static compilation of software, we prefer to introduce MTpy as a flexible software toolbox, whose contents can be combined and utilised according to the respective needs of the user. Just as the overall functionality of a mechanical toolbox can be extended by adding new tools, MTpy is a flexible framework, which will be dynamically extended in the future. Furthermore, it can help to unify and extend existing codes and algorithms within the (academic) MT community. In this paper, we introduce the structure and concept of MTpy. Additionally, we show some examples from an everyday work-flow of MT data processing: the generation of standard EDI data files from raw electric (E-) and magnetic flux density (B-) field time series as input, the conversion into MiniSEED data format, as well as the generation of a graphical data representation in the form of a Phase Tensor pseudosection.
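
    The phase tensor behind the pseudosection mentioned at the end has a short definition: writing the impedance tensor as Z = X + iY, the phase tensor is Phi = X^(-1) Y (Caldwell and co-workers). The sketch below evaluates it for one synthetic impedance; it is a generic computation, not a call into MTpy's own modules.

      # Magnetotelluric phase tensor Phi = X^(-1) Y for Z = X + iY (synthetic Z).
      import numpy as np

      def phase_tensor(Z):
          X, Y = Z.real, Z.imag
          return np.linalg.inv(X) @ Y

      Z = np.array([[0.5 + 0.6j, 10.0 + 12.0j],
                    [-9.0 - 11.0j, -0.4 - 0.5j]])      # invented 2 x 2 impedance tensor
      print(phase_tensor(Z))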

  2. MAPI: towards the integrated exploitation of bioinformatics Web Services.

    PubMed

    Ramirez, Sergio; Karlsson, Johan; Trelles, Oswaldo

    2011-10-27

    Bioinformatics is commonly featured as a well assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools, their dispersion and heterogeneity complicate the integrated exploitation of such data processing capacity. To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for uniform representation of Web Services metadata descriptors including their management and invocation protocols of the services which they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of the functionality into different modules associated with specific tasks. This means that only the modules needed for the client have to be installed, and that the module functionality can be extended without the need for re-writing the software client. The potential utility and versatility of the software library has been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation with advanced features such as workflows composition and asynchronous services calls to multiple types of Web Services including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor, and others).

  3. Four Courses within a Discipline: UGA Unified Core

    ERIC Educational Resources Information Center

    Powell, Gwynn M.; Johnson, Corey W.; James, Joy; Dunlap, Rudy

    2013-01-01

    This article introduces the reader to the Unified Core Curriculum model developed and implemented at the University of Georgia (UGA). Four courses are taught as one course to the juniors coming into the Recreation and Leisure Studies major. An overview of the blended course and sample assignments are provided, as well as a discussion of challenges…

  4. The Role of Fisher Information Theory in the Development of Fundamental Laws in Physical Chemistry

    ERIC Educational Resources Information Center

    Honig, J. M.

    2009-01-01

    The unifying principle that involves rendering the Fisher information measure an extremum is reviewed. It is shown that with this principle, in conjunction with appropriate constraints, a large number of fundamental laws can be derived from a common source in a unified manner. The resulting economy of thought pertaining to fundamental principles…
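
    For orientation, the quantity being extremized has a standard form for a single coordinate x with probability density p(x); the constrained variational statement below is schematic (the C_k stand for whatever physical constraints are imposed) and is not quoted from the article.

      I[p] \;=\; \int \frac{\bigl[p'(x)\bigr]^{2}}{p(x)}\,dx ,
      \qquad
      \delta\Bigl( I[p] \;-\; \sum_{k} \lambda_{k}\, C_{k}[p] \Bigr) \;=\; 0 ,

    where the \lambda_{k} are Lagrange multipliers attached to the constraints C_{k}[p].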

  5. Developing Early Warning Indicators for the San Francisco Unified School District. Youth Data Archive Issue Brief

    ERIC Educational Resources Information Center

    John W. Gardner Center for Youth and Their Communities, 2011

    2011-01-01

    San Francisco's Bridge to Success (BtS) initiative brings together the City and County of San Francisco, the San Francisco Unified School District (SFUSD), the City College of San Francisco (CCSF), and key community organizations to promote postsecondary success for underrepresented students. Partners agree that the first step in achieving this…

  6. Sacramento City Unified School District and Sacramento City College Articulation Council Year-End Report.

    ERIC Educational Resources Information Center

    Giugni, Tom; Burris, Douglas W.

    In 1982, the President of Sacramento City College (SCC) and the Superintendent of the Sacramento City Unified School District (SCUSD) developed the new concept of a joint articulation council to address current problems related to the number of under-prepared students and the possible duplication of effort in basic skills instruction and…

  7. SSBRP Communication & Data System Development using the Unified Modeling Language (UML)

    NASA Technical Reports Server (NTRS)

    Windrem, May; Picinich, Lou; Givens, John J. (Technical Monitor)

    1998-01-01

    The Unified Modeling Language (UML) is the standard method for specifying, visualizing, and documenting the artifacts of an object-oriented system under development. UML is the unification of the object-oriented methods developed by Grady Booch and James Rumbaugh, and of the Use Case Model developed by Ivar Jacobson. This paper discusses the application of UML by the Communications and Data Systems (CDS) team to model the ground control and command of the Space Station Biological Research Project (SSBRP) User Operations Facility (UOF). UML is used to define the context of the system, the logical static structure, the life history of objects, and the interactions among objects.

  8. Sharing brain mapping statistical results with the neuroimaging data model

    PubMed Central

    Maumet, Camille; Auer, Tibor; Bowring, Alexander; Chen, Gang; Das, Samir; Flandin, Guillaume; Ghosh, Satrajit; Glatard, Tristan; Gorgolewski, Krzysztof J.; Helmer, Karl G.; Jenkinson, Mark; Keator, David B.; Nichols, B. Nolan; Poline, Jean-Baptiste; Reynolds, Richard; Sochat, Vanessa; Turner, Jessica; Nichols, Thomas E.

    2016-01-01

    Only a tiny fraction of the data and metadata produced by an fMRI study is finally conveyed to the community. This lack of transparency not only hinders the reproducibility of neuroimaging results but also impairs future meta-analyses. In this work we introduce NIDM-Results, a format specification providing a machine-readable description of neuroimaging statistical results along with key image data summarising the experiment. NIDM-Results provides a unified representation of mass univariate analyses including a level of detail consistent with available best practices. This standardized representation allows authors to relay methods and results in a platform-independent regularized format that is not tied to a particular neuroimaging software package. Tools are available to export NIDM-Result graphs and associated files from the widely used SPM and FSL software packages, and the NeuroVault repository can import NIDM-Results archives. The specification is publically available at: http://nidm.nidash.org/specs/nidm-results.html. PMID:27922621

  9. Microphysics in Multi-scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation, and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance for the multi-scale modeling system will be presented.

  10. Lagrangian-Hamiltonian unified formalism for autonomous higher order dynamical systems

    NASA Astrophysics Data System (ADS)

    Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso

    2011-09-01

    The Lagrangian-Hamiltonian unified formalism of Skinner and Rusk was originally stated for autonomous dynamical systems in classical mechanics. It has been generalized for non-autonomous first-order mechanical systems, as well as for first-order and higher order field theories. However, a complete generalization to higher order mechanical systems is yet to be described. In this work, after reviewing the natural geometrical setting and the Lagrangian and Hamiltonian formalisms for higher order autonomous mechanical systems, we develop a complete generalization of the Lagrangian-Hamiltonian unified formalism for these kinds of systems, and we use it to analyze some physical models from this new point of view.
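
    For orientation, the original first-order autonomous construction being generalized can be sketched as follows (generic notation, not necessarily the authors'): on the Whitney sum W = TQ ⊕ T*Q over Q, with natural coordinates (q^i, v^i, p_i), one takes

      \Omega \;=\; \mathrm{pr}_{2}^{*}\,\omega_{Q} \;=\; dq^{i}\wedge dp_{i},
      \qquad
      H(q^{i},v^{i},p_{i}) \;=\; p_{i}v^{i} - L(q^{i},v^{i}),

      \iota_{X}\Omega \;=\; dH
      \;\;\Longrightarrow\;\;
      p_{i} \;=\; \frac{\partial L}{\partial v^{i}},
      \qquad
      \frac{d}{dt}\frac{\partial L}{\partial v^{i}} - \frac{\partial L}{\partial q^{i}} \;=\; 0,

    so that the Legendre map and the Euler-Lagrange equations both arise as consistency conditions of a single presymplectic equation.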

  11. Cross-mapping the ICNP with NANDA, HHCC, Omaha System and NIC for unified nursing language system development. International Classification for Nursing Practice. International Council of Nurses. North American Nursing Diagnosis Association. Home Health Care Classification. Nursing Interventions Classification.

    PubMed

    Hyun, S; Park, H A

    2002-06-01

    Nursing language plays an important role in describing and defining nursing phenomena and nursing actions. There are numerous vocabularies describing nursing diagnoses, interventions and outcomes in nursing. However, the lack of a standardized unified nursing language is considered a problem for further development of the discipline of nursing. In an effort to unify the nursing languages, the International Council of Nurses (ICN) has proposed the International Classification for Nursing Practice (ICNP) as a unified nursing language system. The purpose of this study was to evaluate the inclusiveness and expressiveness of the ICNP terms by cross-mapping them with the existing nursing terminologies, specifically the North American Nursing Diagnosis Association (NANDA) taxonomy I, the Omaha System, the Home Health Care Classification (HHCC) and the Nursing Interventions Classification (NIC). Nine hundred and seventy-four terms from these four classifications were cross-mapped with the ICNP terms. This was performed in accordance with the Guidelines for Composing a Nursing Diagnosis and Guidelines for Composing a Nursing Intervention, which were suggested by the ICNP development team. An expert group verified the results. The ICNP Phenomena Classification described 87.5% of the NANDA diagnoses, 89.7% of the HHCC diagnoses and 72.7% of the Omaha System problem classification scheme. The ICNP Action Classification described 79.4% of the NIC interventions, 80.6% of the HHCC interventions and 71.4% of the Omaha System intervention scheme. The results of this study suggest that the ICNP has a sound starting structure for a unified nursing language system and can be used to describe most of the existing terminologies. Recommendations for the addition of terms to the ICNP are provided.

  12. Using Machine Learning as a fast emulator of physical processes within the Met Office's Unified Model

    NASA Astrophysics Data System (ADS)

    Prudden, R.; Arribas, A.; Tomlinson, J.; Robinson, N.

    2017-12-01

    The Unified Model is a numerical model of the atmosphere used at the UK Met Office (and numerous partner organisations including the Korean Meteorological Agency, the Australian Bureau of Meteorology and the US Air Force) for both weather and climate applications. Specifically, dynamical models such as the Unified Model are now a central part of weather forecasting. Starting from basic physical laws, these models make it possible to predict events such as storms before they have even begun to form. The Unified Model can be simply described as having two components: one component solves the Navier-Stokes equations (usually referred to as the "dynamics"); the other solves relevant sub-grid physical processes (usually referred to as the "physics"). Running weather forecasts requires substantial computing resources - for example, the UK Met Office operates the largest operational High Performance Computer in Europe - and roughly 50% of the cost of a typical simulation is spent in the "dynamics" and 50% in the "physics". There is therefore a strong incentive to reduce the cost of weather forecasts, and Machine Learning is a possible option because, once a machine learning model has been trained, it is often much faster to run than a full simulation. This is the motivation for a technique called model emulation, the idea being to build a fast statistical model which closely approximates a far more expensive simulation. In this paper we discuss the use of Machine Learning as an emulator to replace the "physics" component of the Unified Model. Various approaches and options will be presented, and the implications for further model development, operational running of forecasting systems, development of data assimilation schemes, and development of ensemble prediction techniques will be discussed.
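
    A minimal version of the emulation idea can be put together with standard tools: fit a regression model that maps column-state inputs to the increments a parametrization would produce, then use the fitted model as the fast surrogate. The sketch below trains a random forest on synthetic profiles; the inputs, outputs and model choice are illustrative assumptions with no connection to the Unified Model's actual parametrization interfaces.

      # Toy physics emulator: learn a mapping from synthetic "column state" inputs to
      # synthetic "physics tendency" outputs, then score it on held-out columns.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(42)
      n_samples, n_levels = 5000, 10
      X = rng.standard_normal((n_samples, 2 * n_levels))        # e.g. temperature + humidity profiles
      truth = 0.1 * np.tanh(X[:, :n_levels]) + 0.05 * X[:, n_levels:]
      y = truth + 0.01 * rng.standard_normal((n_samples, n_levels))   # noisy "physics" increments

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
      emulator = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
      emulator.fit(X_train, y_train)
      print("held-out R^2:", round(emulator.score(X_test, y_test), 3))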

  13. Neurological exclusiveness or unified science inclusiveness: Comment on Schwartz et al. (2016).

    PubMed

    Staats, Arthur W

    2016-12-01

    Schwartz, Lilienfeld, Meca, and Sauvigné (2016) argue effectively and productively that neuroscience is monistic (excludes other fields) in a way that affects negatively psychology department makeup, psychology grant support, and the way students are trained. They conclude, rather, that it is important to effect an inclusion of different fields of psychology. This paper broadens and strengthens their position. However, it also points out that a call for inclusiveness raises a central question. How is inclusiveness to be accomplished? Without stipulation to the contrary the call is for an eclecticism. As Schwartz et al. indicate, unified theory is now rejected because grand theory in the past has been monistic. However, science moves on; there are unified theories today that are inclusive. Thus, development of an area in psychology is needed that studies, evaluates, and advances works that unify inclusively, the present article being an example. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  14. Aspects, Wrappers and Events

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.

    2003-01-01

    This viewgraph presentation provides information on the Object Infrastructure Framework (OIF), an Aspect-Oriented Programming (AOP) system. The presentation begins with an introduction to the difficulties and requirements of distributed computing, including functional and non-functional requirements (ilities). The architecture of Distributed Object Technology includes stubs (proxies for implementation objects) and skeletons (proxies for client applications). The key OIF ideas (injecting behavior, annotated communications, thread contexts, and pragma) are discussed. OIF is an AOP mechanism; AOP is centered on: 1) Separate expression of crosscutting concerns; 2) Mechanisms to weave the separate expressions into a unified system. AOP is software engineering technology for separately expressing systematic properties while nevertheless producing running systems that embody these properties.
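
    As a minimal, language-neutral illustration of the weaving idea described above (not OIF's actual injection mechanism), the Python sketch below expresses a logging concern once, separately, and injects it into a service method via a decorator. The class and method names are hypothetical.

```python
# Minimal AOP-style sketch: one crosscutting concern (logging), expressed once and
# woven into a business method by a decorator. Illustrative only; not OIF itself.
import functools
import logging

logging.basicConfig(level=logging.INFO)

def logged(call):
    """Crosscutting concern: log every decorated call and its result."""
    @functools.wraps(call)
    def wrapper(*args, **kwargs):
        logging.info("calling %s args=%s kwargs=%s", call.__name__, args, kwargs)
        result = call(*args, **kwargs)
        logging.info("%s returned %r", call.__name__, result)
        return result
    return wrapper

class AccountService:
    @logged                       # behavior injected at the proxy boundary
    def transfer(self, source, target, amount):
        return f"moved {amount} from {source} to {target}"

AccountService().transfer("A", "B", 100)
```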

  15. OPUS: Optimal Projection for Uncertain Systems. Volume 1

    DTIC Science & Technology

    1991-09-01

    unified control-design methodology that directly addresses these technology issues. In particular, optimal projection theory addresses the need for ... effects, and limited identification accuracy in a 1-g environment. The principal contribution of OPUS is a unified design methodology that ... characterizing solutions to constrained control-design problems. Transforming OPUS into a practical design methodology requires the development of

  16. Western Civ., Multiculturalism and the Problem of a Unified World History.

    ERIC Educational Resources Information Center

    Dunn, Ross E.

    This paper traces the development of the concept of a unified world history and applies that concept to the present curriculum. World history became more European-centered over time as other cultures were viewed as backward. The exclusion of so much of humanity from the "known world of progress" made less and less sense over time as global…

  17. Predominance Diagrams, a Useful Tool for the Correlation of the Precipitation-Solubility Equilibrium with Other Ionic Equilibria

    ERIC Educational Resources Information Center

    Pereira, Constantino Fernandez; Alcalde, Manuel; Villegas, Rosario; Vale, Jose

    2007-01-01

    The four types of ionic equilibria--acid-base, redox, precipitation, and complexation--have certain similarities, which has led some authors to develop a unified treatment of them. These authors have highlighted the common aspects and tried to find a systemization of the equilibria that would facilitate learning them. In this unified treatment,…

  18. Becoming a Community School: A Study of Oakland Unified School District Community School Implementation, 2015-2016

    ERIC Educational Resources Information Center

    Fehrer, Kendra; Leos-Urbel, Jacob; Messner, Erica; Riley, Nicole

    2016-01-01

    Since 2014, Oakland Unified School District (OUSD) has partnered with the Gardner Center for Youth and Their Communities at Stanford University (Gardner Center) to support OUSD's efforts to assess, enhance, and scale their community schools work. They began by working with the district to develop a System Strategy Map to articulate the district's…

  19. Design sensitivity analysis of nonlinear structural response

    NASA Technical Reports Server (NTRS)

    Cardoso, J. B.; Arora, J. S.

    1987-01-01

    A unified theory of design sensitivity analysis of linear and nonlinear structures is described for shape, nonshape and material selection problems. The concepts of reference volume and adjoint structure are used to develop the unified viewpoint. A general formula for design sensitivity analysis is derived. Simple analytical linear and nonlinear examples are used to interpret various terms of the formula and demonstrate its use.
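
    For readers unfamiliar with adjoint-based sensitivity analysis, a generic textbook form of such a formula is sketched below; it is only illustrative and is not claimed to be the specific formula derived in this paper. Here \(\psi\) is a response functional, \(R(u,b)=0\) the (possibly nonlinear) equilibrium equations, \(u\) the state, \(b\) a design variable, and \(\lambda\) the adjoint state.

```latex
% Generic adjoint form of a design sensitivity (illustrative only):
\frac{\mathrm{d}\psi}{\mathrm{d}b}
  = \frac{\partial\psi}{\partial b}
  + \boldsymbol{\lambda}^{\mathsf{T}}\,\frac{\partial R}{\partial b},
\qquad\text{where }
\left(\frac{\partial R}{\partial u}\right)^{\!\mathsf{T}}\boldsymbol{\lambda}
  = -\left(\frac{\partial\psi}{\partial u}\right)^{\!\mathsf{T}} .
```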

  20. Community and Staff Surveys Conducted for the Sacramento City Unified School District. Summary Report [and] Appendices.

    ERIC Educational Resources Information Center

    Franz, Jennifer D.

    Community and staff surveys, conducted in 1982, were commissioned by the Sacramento City (CA) Unified School District Board of Education as part of a project designed by the District's five high school principals. This report is limited to a presentation of the survey results. A subsequent report to be developed will present conclusions and any…

  1. Students' different understandings of class diagrams

    NASA Astrophysics Data System (ADS)

    Boustedt, Jonas

    2012-03-01

    The software industry needs well-trained software designers, and one important aspect of software design is the ability to model software designs visually and understand what visual models represent. However, previous research indicates that software design is a difficult task for many students. This article reports empirical findings from a phenomenographic investigation of how students understand class diagrams, Unified Modeling Language (UML) symbols, and relations to object-oriented (OO) concepts. The informants were 20 Computer Science students from four different universities in Sweden. The results show qualitatively different ways to understand and describe UML class diagrams and the "diamond symbols" representing aggregation and composition. The purpose of class diagrams was understood in varied ways, from seeing them as documentation to a more advanced view related to communication. Descriptions of class diagrams ranged from seeing them as a specification of classes to a more advanced view in which they were described as showing hierarchic structures of classes and relations. The diamond symbols were seen simply as "relations"; a more advanced understanding distinguished the white and the black diamonds as symbols for aggregation and composition. As a consequence of the results, it is recommended that UML be adopted in courses. It is briefly indicated how the phenomenographic results, in combination with variation theory, can be used by teachers to enhance students' possibilities to reach an advanced understanding of phenomena related to UML class diagrams. Moreover, it is recommended that teachers put more effort into assessing skills in proper usage of the basic symbols and models, and that students be provided with opportunities to practise collaborative design, e.g. using whiteboards.
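
    To make the distinction behind the diamond symbols concrete, here is a small illustrative Python sketch (not taken from the study): the filled diamond (composition) corresponds to a whole that creates and owns its part, while the hollow diamond (aggregation) corresponds to a whole that merely references parts that exist independently.

```python
# Illustrative only: the semantics behind the UML diamond symbols, expressed in code.

class Engine:
    pass

class Wheel:
    pass

class Car:
    def __init__(self, wheels):
        self.engine = Engine()   # composition (filled diamond): Car creates and owns
                                 # its Engine; the Engine's lifetime ends with the Car
        self.wheels = wheels     # aggregation (hollow diamond): Wheels are supplied
                                 # from outside and can outlive or be shared by Cars

shared_wheels = [Wheel() for _ in range(4)]
car = Car(shared_wheels)
```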

  2. Development of an integrated chemical weather prediction system for environmental applications at meso to global scales: NMMB/BSC-CHEM

    NASA Astrophysics Data System (ADS)

    Jorba, O.; Pérez, C.; Karsten, K.; Janjic, Z.; Dabdub, D.; Baldasano, J. M.

    2009-09-01

    This contribution presents the ongoing developments of a new fully on-line chemical weather prediction system for meso to global scale applications. The modeling system consists of a mineral dust module and a gas-phase chemistry module coupled on-line to a unified global-regional atmospheric driver. This approach allows small-scale processes and their interactions to be resolved from local to global scales. Its unified environment maintains the consistency of all the physico-chemical processes involved. The atmospheric driver is the NCEP/NMMB numerical weather prediction model (Janjic and Black, 2007) developed at the National Centers for Environmental Prediction (NCEP). It represents an evolution of the operational WRF-NMME model extending from meso to global scales. Its unified non-hydrostatic dynamical core supports regional and global simulations. The Barcelona Supercomputing Center is currently designing and implementing a chemistry transport model coupled online with the new global/regional NMMB. The new modeling system is intended to be a powerful tool for research and to provide efficient global and regional chemical weather forecasts at sub-synoptic and mesoscale resolutions. The online coupling of the chemistry follows an approach similar to that of the mineral dust module already coupled to the atmospheric driver, NMMB/BSC-DUST (Pérez et al., 2008). Chemical species are advected and mixed at the corresponding time steps of the meteorological tracers using the same numerical scheme. Advection is Eulerian, positive definite and monotone. The chemical mechanism and chemistry solver are based on the Kinetic PreProcessor KPP (Damian et al., 2002) package, with the main purpose of maintaining wide flexibility when configuring the model. Such an approach will allow the use of a simplified chemical mechanism for global applications or a more complete mechanism for high-resolution local or regional studies. Moreover, it will permit the implementation of a specific configuration for forecasting applications in regional or global domains. An emission process allows the coupling of different emission inventory sources such as RETRO, EDGAR and GEIA for the global domain, EMEP for Europe and HERMES for Spain. The photolysis scheme is based on the Fast-J scheme, coupled with the physics of each model layer (e.g., aerosols, clouds, and absorbers such as ozone), and it considers grid-scale clouds from the atmospheric driver. The dry deposition scheme follows the deposition velocity analogy for gases, enabling the calculation of deposition fluxes from airborne concentrations. No cloud-chemistry processes are included in the system yet (no wet deposition, scavenging or aqueous chemistry). The modeling system developments will be presented, and first results of the gas-phase chemistry at global scale will be discussed. REFERENCES Janjic, Z.I., and Black, T.L., 2007. An ESMF unified model for a broad range of spatial and temporal scales, Geophysical Research Abstracts, 9, 05025. Pérez, C., Haustein, K., Janjic, Z.I., Jorba, O., Baldasano, J.M., Black, T.L., and Nickovic, S., 2008. An online dust model within the meso to global NMMB: current progress and plans. AGU Fall Meeting, San Francisco, A41K-03, 2008. Damian, V., Sandu, A., Damian, M., Potra, F., and Carmichael, G.R., 2002. The kinetic preprocessor KPP - A software environment for solving chemical kinetics. Comp. Chem. Eng., 26, 1567-1579. Sandu, A., and Sander, R., 2006.
Technical note: Simulating chemical systems in Fortran90 and Matlab with the Kinetic PreProcessor KPP-2.1. Atmos. Chem. and Phys., 6, 187-195.
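
    The on-line coupling strategy can be pictured, in very simplified form, as operator splitting on the driver's time step: tracers are first transported with the dynamics and the sub-grid chemistry is then integrated in each cell. The sketch below is a conceptual toy (1-D upwind advection plus a first-order loss reaction), not NMMB/BSC-CHEM code.

```python
# Conceptual toy only (not NMMB/BSC-CHEM code): on-line coupling by operator
# splitting. Tracers are transported on the driver's time step ("dynamics"),
# then a sub-grid chemical mechanism is integrated in every grid cell ("physics").
import numpy as np

def advect(c, wind, dt, dx):
    """1-D upwind advection of a tracer field on a periodic ring (wind > 0)."""
    return c - wind * dt / dx * (c - np.roll(c, 1))

def react(c, k, dt):
    """Toy first-order chemical loss, integrated analytically over dt."""
    return c * np.exp(-k * dt)

c = np.ones(100)                 # tracer mixing ratio in 100 grid cells
for step in range(240):          # 240 driver time steps of 60 s
    c = advect(c, wind=10.0, dt=60.0, dx=1000.0)   # transport with the dynamics
    c = react(c, k=1e-4, dt=60.0)                  # chemistry within each cell
```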

  3. "Test" is a Four Letter Word

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G M

    2005-05-03

    For a number of years I had the pleasure of teaching Testing Seminars all over the world and meeting and learning from others in our field. Over a twelve year period, I always asked the following questions to Software Developers, Test Engineers, and Managers who took my two or three day seminar on Software Testing: 'When was the first time you heard the word test'? 'Where were you when you first heard the word test'? 'Who said the word test'? 'How did the word test make you feel'? Most of the thousands of responses were similar to 'It was my third grade teacher at school, and I felt nervous and afraid'. Now there were a few exceptions like 'It was my third grade teacher, and I was happy and excited to show how smart I was'. But by and large, my informal survey found that 'testing' is a word to which most people attach negative meanings, based on its historical context. So why is this important to those of us in the software development business? Because I have found that a preponderance of software developers do not get real excited about hearing that the software they just wrote is going to be 'tested' by the Test Group. Typical reactions I have heard over the years run from: 'I'm sure there is nothing wrong with the software, so go ahead and test it, better you find defects than our customers' to these extremes: 'There is no need to test my software because there is nothing wrong with it'. 'You are not qualified to test my software because you don't know as much as I do about it'. 'If any Test Engineers come into our office again to test our software we will throw them through the third floor window'. So why is there such a strong negative reaction to testing? It is primitive. It goes back to grade school for many of us. It is a negative word that conjures up negative emotions. In other words, 'test' is a four letter word. How many of us associate 'Joy' with 'Test'? Not many. It is hard for most of us to reprogram associations learned at an early age. So what can we do about it (short of hypnotic therapy for software developers)? Well one concept I have used (and still use) is to not call testing 'testing'. Call it something else. Ever wonder why most of the Independent Software Testing groups are called Software Quality Assurance groups? Now you know. Software Quality Assurance is not such a negatively charged phrase, even though Software Quality Assurance is much more than simply testing. It was a real blessing when the concept of Validation and Verification came about for software. Now I define Validation to mean assuring that the product produced does the right thing (usually what the customer wants it to do), and Verification to mean that the product was built the right way (in accordance with some good design principles and practices). So I have deliberately called the System Test Group the Verification and Validation Group, or V&V Group, as a way of avoiding the negative image problem. I remember once having a conversation with a developer colleague who said, in the heat of battle, that it was fine to V&V his code, just don't test it! Once again V&V includes many things besides testing, but it just doesn't sound like an onerous thing to do to software. In my current job, working at a highly regarded national laboratory with world renowned physicists, I have again encountered the negativity about testing software. Except here they don't take kindly to Software Quality Assurance or Software Verification and Validation either.
After all, software is just a trivial tool to automate algorithms that implement physics models. Testing, SQA, and V&V take time and get in the way of completing groundbreaking science experiments. So I have again had to change the name of software testing to something less negative in the physics world. I found (the hard way) that if I requested more time to do software experimentation, the physicists' resistance melted. And so the conversation continues, 'We have time to run more software experiments. Just don't waste any time testing the software'! In case the concept of not calling testing 'testing' appeals to you, and there may be an opportunity for you to take the sting out of the name at your place of employment, I have compiled a table of things that testing could be called besides 'testing'. Of course we can embellish this by adding some good sounding prefixes and suffixes also. To come up with alternate names for testing, pick a word from columns A, B, and C in the table below. For instance, Unified Acceptance Trials (A2,B7,C3) or Tailored Observational Demonstration (A6,B5,C5) or Agile Criteria Scoring (A3,B8,C8) or Rapid Requirement Proof (A1,B9,C7) or Satisfaction Assurance (B10,C1). You can probably think of some additional combinations appropriate for your industry.

  4. Evaluating renewable natural resources flow and net primary productivity with a GIS-Emergy approach: A case study of Hokkaido, Japan.

    PubMed

    Wang, Chengdong; Zhang, Shenyan; Yan, Wanglin; Wang, Renqing; Liu, Jian; Wang, Yutao

    2016-11-18

    Renewable natural resources, such as solar radiation, rainfall, wind, and geothermal heat, together with ecosystem services, provide the elementary supports for the sustainable development of human society. To improve regional sustainability, we studied the spatial distributions and quantities of renewable natural resources and net primary productivity (NPP) in Hokkaido, which is the second largest island of Japan. With the help of Geographic Information System (GIS) software, distribution maps for each type of renewable natural resource were generated by kriging interpolation based on statistical records. A composite map of the flow of all types of renewable natural resources was also generated by map layer overlapping. Additionally, we utilized emergy analysis to convert each renewable flow with different attributes into a unified unit (i.e., solar equivalent joules [sej]). As a result, the spatial distributions of the flow of renewable natural resources of the Hokkaido region are presented in the form of thematic emergy maps. Thus, the areas with higher renewable emergy can be easily visualized and identified. The dominant renewable flow in certain areas can also be directly distinguished. The results can provide useful information for regional sustainable development, environmental conservation and ecological management.
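
    The emergy bookkeeping amounts to multiplying each renewable energy flow by its transformity (unit emergy value) to obtain solar equivalent joules. The sketch below uses hypothetical flows and illustrative transformities, not the coefficients from this study; the convention of counting only the largest renewable contribution, applied at the end, is a common emergy accounting rule to avoid double counting coupled flows.

```python
# Illustrative emergy bookkeeping with hypothetical numbers (not the study's data).
renewable_flows_joules = {        # annual energy flow per area (J/yr), hypothetical
    "solar_radiation": 4.2e15,
    "rainfall_chemical": 6.0e11,
    "wind_kinetic": 8.5e11,
    "geothermal_heat": 3.1e11,
}
transformity_sej_per_J = {        # illustrative unit emergy values (sej/J)
    "solar_radiation": 1.0,
    "rainfall_chemical": 3.1e4,
    "wind_kinetic": 2.5e3,
    "geothermal_heat": 1.2e4,
}

emergy_sej = {name: renewable_flows_joules[name] * transformity_sej_per_J[name]
              for name in renewable_flows_joules}

# Common emergy convention: count only the largest renewable contribution to avoid
# double counting flows that are co-products of the same solar source.
total_renewable_emergy = max(emergy_sej.values())
print(emergy_sej, total_renewable_emergy)
```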

  5. Evaluating renewable natural resources flow and net primary productivity with a GIS-Emergy approach: A case study of Hokkaido, Japan

    PubMed Central

    Wang, Chengdong; Zhang, Shenyan; Yan, Wanglin; Wang, Renqing; Liu, Jian; Wang, Yutao

    2016-01-01

    Renewable natural resources, such as solar radiation, rainfall, wind, and geothermal heat, together with ecosystem services, provide the elementary supports for the sustainable development of human society. To improve regional sustainability, we studied the spatial distributions and quantities of renewable natural resources and net primary productivity (NPP) in Hokkaido, which is the second largest island of Japan. With the help of Geographic Information System (GIS) software, distribution maps for each type of renewable natural resource were generated by kriging interpolation based on statistical records. A composite map of the flow of all types of renewable natural resources was also generated by map layer overlapping. Additionally, we utilized emergy analysis to convert each renewable flow with different attributes into a unified unit (i.e., solar equivalent joules [sej]). As a result, the spatial distributions of the flow of renewable natural resources of the Hokkaido region are presented in the form of thematic emergy maps. Thus, the areas with higher renewable emergy can be easily visualized and identified. The dominant renewable flow in certain areas can also be directly distinguished. The results can provide useful information for regional sustainable development, environmental conservation and ecological management. PMID:27857230

  6. The Unified Database for BM@N experiment data handling

    NASA Astrophysics Data System (ADS)

    Gertsenberger, Konstantin; Rogachevsky, Oleg

    2018-04-01

    The article describes the developed Unified Database, designed as a comprehensive relational data storage for the BM@N experiment at the Joint Institute for Nuclear Research in Dubna. The BM@N experiment, which is one of the main elements of the first stage of the NICA project, is a fixed-target experiment at extracted Nuclotron beams of the Laboratory of High Energy Physics (LHEP JINR). The structure and purposes of the BM@N setup are briefly presented. The article considers the scheme of the Unified Database, its attributes and implemented features in detail. The use of the developed BM@N database provides correct multi-user access to current information on the experiment for data processing. It stores information on the experiment runs, detectors and their geometries, and the different configuration, calibration and algorithm parameters used in offline data processing. User interfaces, an important part of any database, are also presented.
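
    To illustrate the kind of relational layout such a database implies (runs, detectors and per-run parameters), here is a deliberately simplified, hypothetical schema; it is not the actual BM@N Unified Database schema, and the table and column names are invented for illustration.

```python
# Hypothetical, simplified schema sketch; not the real BM@N Unified Database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE detector (
    detector_name TEXT PRIMARY KEY,
    description   TEXT
);
CREATE TABLE run (
    run_number    INTEGER PRIMARY KEY,
    start_time    TEXT,
    end_time      TEXT,
    beam_particle TEXT,
    energy_gev    REAL
);
CREATE TABLE detector_parameter (
    detector_name TEXT REFERENCES detector(detector_name),
    run_number    INTEGER REFERENCES run(run_number),
    parameter     TEXT,
    value         BLOB,
    PRIMARY KEY (detector_name, run_number, parameter)
);
""")

conn.execute("INSERT INTO detector VALUES ('GEM', 'tracking detector')")
conn.execute("INSERT INTO run VALUES (1234, '2018-03-01T10:00', '2018-03-01T12:00', 'Ar', 3.2)")
conn.execute("INSERT INTO detector_parameter VALUES ('GEM', 1234, 'alignment', x'00')")
print(conn.execute("SELECT * FROM detector_parameter").fetchall())
```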

  7. The OME Framework for genome-scale systems biology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palsson, Bernhard O.; Ebrahim, Ali; Federowicz, Steve

    The life sciences are undergoing continuous and accelerating integration with computational and engineering sciences. The biology that many in the field have been trained on may be hardly recognizable in ten to twenty years. One of the major drivers for this transformation is the blistering pace of advancements in DNA sequencing and synthesis. These advances have resulted in unprecedented amounts of new data, information, and knowledge. Many software tools have been developed to deal with aspects of this transformation, and each is sorely needed [1-3]. However, few of these tools have been forced to deal with the full complexity of genome-scale models along with high-throughput genome-scale data. This particular situation represents a unique challenge, as it is simultaneously necessary to deal with the vast breadth of genome-scale models and the dizzying depth of high-throughput datasets. It has been observed time and again that as the pace of data generation continues to accelerate, the pace of analysis significantly lags behind [4]. It is also evident that, given the plethora of databases and software efforts [5-12], it is still a significant challenge to work with genome-scale metabolic models, let alone next-generation whole cell models [13-15]. We work at the forefront of model creation and systems-scale data generation [16-18]. The OME Framework was born out of a practical need to enable genome-scale modeling and data analysis under a unified framework to drive the next generation of genome-scale biological models. Here we present the OME Framework. It exists as a set of Python classes. However, we want to emphasize the importance of the underlying design as an addition to the discussions on specifications of a digital cell. A great deal of work and valuable progress has been made by a number of communities [13, 19-24] towards interchange formats and implementations designed to achieve similar goals. While many software tools exist for handling genome-scale metabolic models or for genome-scale data analysis, no implementations exist that explicitly handle data and models concurrently. The OME Framework structures data in a connected loop with models and the components those models are composed of. This results in the first full, practical implementation of a framework that can enable genome-scale design-build-test. Over the coming years many more software packages will be developed and tools will necessarily change. However, we hope that the underlying designs shared here can help to inform the design of future software.
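
    Since the framework is described as a set of Python classes linking models to data, the following purely illustrative sketch shows one way such a link can be expressed; all class and attribute names here are hypothetical and do not reflect the OME Framework's actual API.

```python
# Illustrative sketch of linking model components to a dataset; hypothetical names,
# not the OME Framework API.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Gene:
    locus_tag: str

@dataclass
class Reaction:
    reaction_id: str
    genes: List[Gene] = field(default_factory=list)

@dataclass
class ExpressionDataset:
    name: str
    values: Dict[str, float]            # locus_tag -> normalized expression level

@dataclass
class ModelDataLink:
    """Connects model components to the dataset that informs them."""
    reactions: List[Reaction]
    dataset: ExpressionDataset

    def expression_for(self, reaction: Reaction) -> float:
        levels = [self.dataset.values.get(g.locus_tag, 0.0) for g in reaction.genes]
        return max(levels, default=0.0)

link = ModelDataLink(
    reactions=[Reaction("PGI", [Gene("b4025")])],
    dataset=ExpressionDataset("rnaseq_glucose", {"b4025": 8.7}),
)
print(link.expression_for(link.reactions[0]))
```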

  8. The layered sensing operations center: a modeling and simulation approach to developing complex ISR networks

    NASA Astrophysics Data System (ADS)

    Curtis, Christopher; Lenzo, Matthew; McClure, Matthew; Preiss, Bruce

    2010-04-01

    In order to anticipate the constantly changing landscape of global warfare, the United States Air Force must acquire new capabilities in the field of Intelligence, Surveillance, and Reconnaissance (ISR). To meet this challenge, the Air Force Research Laboratory (AFRL) is developing a unifying construct of "Layered Sensing" which will provide military decision-makers at all levels with the timely, actionable, and trusted information necessary for complete battlespace awareness. Layered Sensing is characterized by the appropriate combination of sensors and platforms (including those for persistent sensing), infrastructure, and exploitation capabilities to enable this synergistic awareness. To achieve the Layered Sensing vision, AFRL is pursuing a Modeling & Simulation (M&S) strategy through the Layered Sensing Operations Center (LSOC). An experimental ISR system-of-systems test-bed, the LSOC integrates DoD standard simulation tools with commercial, off-the-shelf video game technology for rapid scenario development and visualization. These tools will help facilitate sensor management performance characterization, system development, and operator behavioral analysis. Flexible and cost-effective, the LSOC will implement a non-proprietary, open-architecture framework with well-defined interfaces. This framework will incentivize the transition of current ISR performance models to service-oriented software design for maximum re-use and consistency. This paper will present the LSOC's development and implementation thus far as well as a summary of lessons learned and future plans for the LSOC.

  9. Summary of ADTT Website Functionality and Features

    NASA Technical Reports Server (NTRS)

    Hawke, Veronica; Duong, Trang; Liang, Lawrence; Gage, Peter; Lawrence, Scott (Technical Monitor)

    2001-01-01

    This report summarizes development of the ADTT web-based design environment by the ELORET team in 2000. The Advanced Design Technology Testbed had been in development for several years, with demonstration applications restricted to aerodynamic analyses of subsonic aircraft. The key changes achieved this year were improvements in Web-based accessibility, evaluation of collaborative visualization, remote invocation of geometry updates and performance analysis, and application to aerospace system analysis. Significant effort was also devoted to post-processing of data, chiefly through comparison of similar data for alternative vehicle concepts. Such comparison is an essential requirement for designers to make informed choices between alternatives. The next section of this report provides more discussion of the goals for ADTT development. Section 3 provides screen shots from a sample session in the ADTT environment, including Login and navigation to the project of interest, data inspection, analysis execution and output evaluation. The following section provides discussion of implementation details and recommendations for future development of the software and information technologies that provide the key functionality of the ADTT system. Section 5 discusses the integration architecture for the system, which links machines running different operating systems and provides unified access to data stored in distributed locations. Security is a significant issue for this system, especially for remote access to NAS machines, so Section 6 discusses several architectural considerations with respect to security. Additional details of some aspects of ADTT development are included in Appendices.

  10. 77 FR 7663 - Introduction to the Unified Agenda of Federal Regulatory and Deregulatory Actions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-13

    ...The Regulatory Flexibility Act requires that agencies publish semiannual regulatory agendas in the Federal Register describing regulatory actions they are developing that may have a significant economic impact on a substantial number of small entities (5 U.S.C. 602). Executive Order 12866 ``Regulatory Planning and Review,'' signed September 30, 1993 (58 FR 51735), and Office of Management and Budget memoranda implementing section 4 of that Order establish minimum standards for agencies' agendas, including specific types of information for each entry. The Unified Agenda of Federal Regulatory and Deregulatory Actions (Unified Agenda) helps agencies fulfill these requirements. All Federal regulatory agencies have chosen to publish their regulatory agendas as part of the Unified Agenda. Editions of the Unified Agenda prior to fall 2007 were printed in their entirety in the Federal Register. Beginning with the fall 2007 edition, the Internet is the basic means for conveying regulatory agenda information to the maximum extent legally permissible. The complete Unified Agenda for fall 2011, which contains the regulatory agendas for 59 Federal agencies, is available to the public at http://reginfo.gov. The fall 2011 Unified Agenda publication appearing in the Federal Register consists of agency regulatory flexibility agendas, in accordance with the publication requirements of the Regulatory Flexibility Act. Agency regulatory flexibility agendas contain only those Agenda entries for rules that are likely to have a significant economic impact on a substantial number of small entities and entries that have been selected for periodic review under section 610 of the Regulatory Flexibility Act.

  11. The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2012-03-20

    The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution, that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit, which intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from resource providers' and end-users' perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof of concept implementation has demonstrated the viability of this approach on high-end machines, grid systems and computing clouds.

  12. Mobile Food Ordering Application using Android OS Platform

    NASA Astrophysics Data System (ADS)

    Yosep Ricky, Michael

    2014-03-01

    The purpose of this research is to build a food ordering application based on Android with New Order, Order History, Restaurant Profile, Order Status, Tracking Order, and Setting Profile features. The research method is the waterfall model of the System Development Life Cycle (SDLC), with the following phases: requirement definition (analyzing and determining the features needed in the application and defining each feature in detail); system and software design (designing the application flow using storyboard design, user experience design, Unified Modeling Language (UML) design, and database structure design); implementation and unit testing (building the database, translating the designs into program code, and unit testing); integration and system testing (integrating the unit programs into one system and testing it as a whole); and operation and maintenance (operating the tested system and returning to earlier phases if changes or repairs are needed). The result of this research is a food ordering application based on Android for customer and courier users, and a website for restaurant and admin users. The application helps customers place orders easily, gives customers the detailed information they need, helps restaurants receive orders, and helps couriers carry out deliveries.

  13. Codes That Support Smart Growth Development

    EPA Pesticide Factsheets

    Provides examples of local zoning codes that support smart growth development, categorized by: unified development code, form-based code, transit-oriented development, design guidelines, street design standards, and zoning overlay.

  14. Web-GIS platform for monitoring and forecasting of regional climate and ecological changes

    NASA Astrophysics Data System (ADS)

    Gordov, E. P.; Krupchatnikov, V. N.; Lykosov, V. N.; Okladnikov, I.; Titov, A. G.; Shulgina, T. M.

    2012-12-01

    The growing volume of environmental data from sensors and model outputs makes the development of a software infrastructure, based on modern information and telecommunication technologies, for the support of integrated research in the Earth sciences an urgent and important task (Gordov et al, 2012; van der Wel, 2005). The inherent heterogeneity of datasets obtained from different sources and institutions not only hampers the interchange of data and analysis results but also complicates their intercomparison, reducing the reliability of the results. However, modern geophysical data processing techniques allow different technological solutions to be combined for organizing such information resources. It is now generally accepted that such an information-computational infrastructure should rely on the combined use of web and GIS technologies for creating applied information-computational web systems (Titov et al, 2009; Gordov et al, 2010; Gordov, Okladnikov and Titov, 2011). Using these approaches for the development of internet-accessible thematic information-computational systems, and arranging data and knowledge interchange between them, is a very promising way to create a distributed information-computational environment supporting multidisciplinary regional and global research in the Earth sciences, including analysis of climate changes and their impact on the spatial-temporal distribution and state of vegetation. An experimental software and hardware platform is presented that supports a web-oriented production and research center for regional climate change investigations, combining a modern Web 2.0 approach, GIS functionality, and capabilities for running climate and meteorological models, processing large geophysical datasets, visualization, joint software development by distributed research groups, scientific analysis, and the education of students and post-graduate students. The platform software developed (Shulgina et al, 2012; Okladnikov et al, 2012) includes dedicated modules for numerical processing of regional and global modeling results for subsequent analysis and visualization. Data preprocessing, execution and visualization of results for the WRF and «Planet Simulator» models integrated into the platform are also provided. All functions of the center are accessible through a web portal using a common graphical web browser, in the form of an interactive graphical user interface which provides, in particular, visualization of processing results, selection of a geographical region of interest (pan and zoom) and manipulation of data layers (order, enable/disable, feature extraction). The platform provides users with capabilities for heterogeneous geophysical data analysis, including high-resolution data, and for discovering tendencies in climatic and ecosystem changes in the framework of different multidisciplinary research efforts (Shulgina et al, 2011). Using it, even a user without specific expertise can perform computational processing and visualization of large meteorological, climatological and satellite monitoring datasets through the unified graphical web interface.

  15. A Consortium Approach to Exemplary Career Education Program Development Involving Two Unified School Districts and Two Teacher Education Institutions. Final Report.

    ERIC Educational Resources Information Center

    Borhani, Rahim

    This is the final report of a Kansas state project which had four purposes: (1) Involvement of teacher training institutions with the unified school districts' career education program in order to gather information needed to provide realistic experiences for inservice education of future career education teachers, (2) involve the community in…

  16. Unified sensor management in unknown dynamic clutter

    NASA Astrophysics Data System (ADS)

    Mahler, Ronald; El-Fallah, Adel

    2010-04-01

    In recent years the first author has developed a unified, computationally tractable approach to multisensor-multitarget sensor management. This approach consists of closed-loop recursion of a PHD or CPHD filter with maximization of a "natural" sensor management objective function called PENT (posterior expected number of targets). In this paper we extend this approach so that it can be used in unknown, dynamic clutter backgrounds.

  17. Calibration Issues and Operating System Requirements for Electron-Probe Microanalysis

    NASA Technical Reports Server (NTRS)

    Carpenter, P.

    2006-01-01

    Instrument purchase requirements and dialogue with manufacturers have established hardware parameters for alignment, stability, and reproducibility, which have helped improve the precision and accuracy of electron microprobe analysis (EPMA). The development of correction algorithms and the accurate solution to quantitative analysis problems requires the minimization of systematic errors and relies on internally consistent data sets. Improved hardware and computer systems have resulted in better automation of vacuum systems, stage and wavelength-dispersive spectrometer (WDS) mechanisms, and x-ray detector systems which have improved instrument stability and precision. Improved software now allows extended automated runs involving diverse setups and better integrates digital imaging and quantitative analysis. However, instrumental performance is not regularly maintained, as WDS are aligned and calibrated during installation but few laboratories appear to check and maintain this calibration. In particular, detector deadtime (DT) data is typically assumed rather than measured, due primarily to the difficulty and inconvenience of the measurement process. This is a source of fundamental systematic error in many microprobe laboratories and is unknown to the analyst, as the magnitude of DT correction is not listed in output by microprobe operating systems. The analyst must remain vigilant to deviations in instrumental alignment and calibration, and microprobe system software must conveniently verify the necessary parameters. Microanalysis of mission critical materials requires an ongoing demonstration of instrumental calibration. Possible approaches to improvements in instrument calibration, quality control, and accuracy will be discussed. Development of a set of core requirements based on discussions with users, researchers, and manufacturers can yield documents that improve and unify the methods by which instruments can be calibrated. These results can be used to continue improvements of EPMA.
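
    To see why an assumed rather than measured deadtime is a systematic error, consider the standard non-paralyzable deadtime correction (textbook detector physics, not a formula taken from this abstract); the hypothetical numbers below show that a modest error in the assumed deadtime already shifts the corrected rate by a few percent at high count rates.

```python
# Standard non-paralyzable deadtime correction, used here only to illustrate the
# systematic error from an assumed (rather than measured) deadtime. Numbers are
# hypothetical.
def true_count_rate(measured_rate_cps, deadtime_s):
    """Non-paralyzable model: N = n / (1 - n * tau)."""
    return measured_rate_cps / (1.0 - measured_rate_cps * deadtime_s)

measured = 50_000.0          # counts per second observed by the WDS detector
assumed_tau = 1.5e-6         # an assumed deadtime of 1.5 microseconds
actual_tau = 2.0e-6          # the (hypothetical) real deadtime of this detector

error = true_count_rate(measured, assumed_tau) - true_count_rate(measured, actual_tau)
print(f"systematic error at 50 kcps: {error:.0f} cps")
```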

  18. The capture and recreation of 3D auditory scenes

    NASA Astrophysics Data System (ADS)

    Li, Zhiyun

    The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered into any 3D directions digitally with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation and robustness constraint. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating them by exploring the reciprocity principle that is satisfied between the two processes. Our approach makes the system easy to build, and practical. Using this approach, we can capture the 3D sound field by a spherical microphone array and recreate it using a spherical loudspeaker array, and ensure that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted for other regular or semi-regular layouts of microphones. In addition, we extend this approach for headphone-based system. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.
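
    As a point of reference for readers new to array processing, the sketch below shows a basic delay-and-sum beamformer for a generic linear microphone array; the dissertation's spherical-harmonic beamformer is considerably more sophisticated (3D-steerable with an identical beampattern), so this is background only, not the proposed method. The array geometry and test signal are hypothetical.

```python
# Background sketch only: a basic delay-and-sum beamformer for a generic linear
# array, not the spherical-harmonic beamformer developed in the dissertation.
import numpy as np

c = 343.0        # speed of sound (m/s)
fs = 48_000      # sample rate (Hz)
mic_positions = np.array([[0.00, 0.0, 0.0],
                          [0.05, 0.0, 0.0],
                          [0.10, 0.0, 0.0],
                          [0.15, 0.0, 0.0]])   # four microphones along x (meters)

def delay_and_sum(signals, look_direction):
    """Delay each channel so a wavefront from look_direction adds coherently."""
    u = look_direction / np.linalg.norm(look_direction)
    delays = mic_positions @ u / c                 # arrival lead of each mic (s)
    shifts = np.round(delays * fs).astype(int)
    aligned = [np.roll(sig, shift) for sig, shift in zip(signals, shifts)]
    return np.mean(aligned, axis=0)

# Hypothetical test: a 1 kHz tone arriving from the +x direction.
t = np.arange(0, 0.01, 1 / fs)
source_dir = np.array([1.0, 0.0, 0.0])
signals = [np.sin(2 * np.pi * 1000 * (t + (p @ source_dir) / c)) for p in mic_positions]
output = delay_and_sum(signals, source_dir)        # coherent sum toward the source
```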

  19. Improving Collaboration by Standardization Efforts in Systems Biology

    PubMed Central

    Dräger, Andreas; Palsson, Bernhard Ø.

    2014-01-01

    Collaborative genome-scale reconstruction endeavors of metabolic networks would not be possible without a common, standardized formal representation of these systems. The ability to precisely define biological building blocks together with their dynamic behavior has even been considered a prerequisite for upcoming synthetic biology approaches. Driven by the requirements of such ambitious research goals, standardization itself has become an active field of research on nearly all levels of granularity in biology. In addition to the originally envisaged exchange of computational models and tool interoperability, new standards have been suggested for an unambiguous graphical display of biological phenomena, to annotate, archive, as well as to rank models, and to describe execution and the outcomes of simulation experiments. The spectrum now even covers the interaction of entire neurons in the brain, three-dimensional motions, and the description of pharmacometric studies. Thereby, the mathematical description of systems and approaches for their (repeated) simulation are clearly separated from each other and also from their graphical representation. Minimum information definitions constitute guidelines and common operation protocols in order to ensure reproducibility of findings and a unified knowledge representation. Central database infrastructures have been established that provide the scientific community with persistent links from model annotations to online resources. A rich variety of open-source software tools thrives for all data formats, often supporting a multitude of programing languages. Regular meetings and workshops of developers and users lead to continuous improvement and ongoing development of these standardization efforts. This article gives a brief overview about the current state of the growing number of operation protocols, mark-up languages, graphical descriptions, and fundamental software support with relevance to systems biology. PMID:25538939

  20. Master Middle Ware: A Tool to Integrate Water Resources and Fish Population Dynamics Models

    NASA Astrophysics Data System (ADS)

    Yi, S.; Sandoval Solis, S.; Thompson, L. C.; Kilduff, D. P.

    2017-12-01

    Linking models that investigate separate components of ecosystem processes has the potential to unify messages regarding management decisions by evaluating potential trade-offs in a cohesive framework. This project aimed to improve the ability of riparian resource managers to forecast future water availability conditions and resultant fish habitat suitability, in order to better inform their management decisions. To accomplish this goal, we developed a middleware tool that is capable of linking and overseeing the operations of two existing models: a water resource planning tool, the Water Evaluation and Planning (WEAP) model, and a habitat-based fish population dynamics model (WEAPhish). First, we designed the Master Middle Ware (MMW) software in Visual Basic for Applications® in one Excel® file that provided a familiar framework for both data input and output. Second, MMW was used to link and jointly operate WEAP and WEAPhish, using Visual Basic for Applications (VBA) macros to implement system-level calls to run the models. To demonstrate the utility of this approach, hydrological, biological, and middleware model components were developed for the Butte Creek basin. This tributary of the Sacramento River, California, is managed for both hydropower and the persistence of a threatened population of spring-run Chinook salmon (Oncorhynchus tshawytscha). While we have demonstrated the use of MMW for a particular watershed and fish population, MMW can be customized for use with different rivers and fish populations, assuming basic data requirements are met. This model integration improves on ad hoc linkages for managing data transfer between software programs by providing a consistent, user-friendly, and familiar interface across different model implementations. Furthermore, the data-viewing capabilities of MMW facilitate the rapid interpretation of model results by hydrologists, fisheries biologists, and resource managers, in order to accelerate learning and management decision making.
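
    The coupling pattern itself (middleware issues system-level calls to each model and shuttles data files between them) is language-agnostic. The Python sketch below illustrates that pattern only; the actual MMW is implemented as VBA macros in an Excel workbook, and the executable names and file paths here are hypothetical.

```python
# Illustrative orchestration pattern only; the real MMW is VBA in Excel, and the
# executables and paths below are hypothetical stand-ins.
import shutil
import subprocess

def run_coupled_year(year):
    # 1. Run the water resources model for the scenario year
    subprocess.run(["weap_cli", "--scenario", f"butte_creek_{year}"], check=True)
    # 2. Hand the simulated flows to the fish population model as its input
    shutil.copy("weap_output/flows.csv", "weaphish_input/flows.csv")
    # 3. Run the habitat-based fish population dynamics model
    subprocess.run(["weaphish_cli", "--flows", "weaphish_input/flows.csv"], check=True)
    # 4. Gather results for display back in the middleware workbook
    shutil.copy("weaphish_output/abundance.csv", f"results/abundance_{year}.csv")

for year in range(2020, 2025):
    run_coupled_year(year)
```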

  1. Technical Data Exchange Software Tools Adapted to Distributed Microsatellite Design

    NASA Astrophysics Data System (ADS)

    Pache, Charly

    2002-01-01

    One critical issue concerning distributed design of satellites is the collaborative work it requires. In particular, the exchange of data between each group responsible for each subsystem can be complex and very time-consuming. The goal of this paper is to present a collaborative design tool, the SSETI Design Model (SDM), specifically developed for enabling satellite distributed design. SDM is actually used in the ongoing Student Space Exploration & Technology (SSETI) initiative (www.sseti.net). SSETI is led by the European Space Agency (ESA) outreach office (http://www.estec.esa.nl/outreach), involving student groups from all over Europe for the design, construction and launch of a microsatellite. The first part of this paper presents the current version of the SDM tool, a collection of Microsoft Excel linked worksheets, one for each subsystem. An overview of the project framework/structure is given, explaining the different actors, the flows between them, as well as the different types of data and the links - formulas - between data sets. Unified Modeling Language (UML) diagrams give an overview of the different parts. Then the SDM's functionalities, developed in VBA scripts (Visual Basic for Applications), are introduced, as well as the interactive features, user interfaces and administration tools. The second part discusses the capabilities and limitations of the SDM's current version. Taking into account these capabilities and limitations, the third part outlines the next version of SDM, a web-oriented, database-driven evolution of the current version. This new approach will enable real-time data exchange and processing between the different actors of the mission. Comprehensive UML diagrams will guide the audience through the entire modeling process of such a system. Trade-off simulation capabilities, security, reliability, hardware and software issues will also be thoroughly discussed.

  2. Mars-Learning AN Open Access Educational Database

    NASA Astrophysics Data System (ADS)

    Kolankowski, S. M.; Fox, P. A.

    2016-12-01

    Schools across America have begun focusing more and more on science and technology, giving their students greater opportunities to learn about planetary science and engineering. With the development of rovers and advanced scientific instrumentation, we are learning about Mars' geologic history on a daily basis. These discoveries are crucial to our understanding of Earth and our solar system. By bringing these findings into the classroom, students can learn key concepts about Earth and Planetary sciences while focusing on a relevant current event. However, with an influx of readily accessible information, it is difficult for educators and students to find accurate and relevant material. Mars-Learning seeks to unify these discoveries and resources. This site will provide links to educational resources, software, and blogs with a focus on Mars. Activities will be grouped by grade for the middle and high school levels. Programs and software will be labeled open access, free, or paid to ensure users have the proper tools to get the information they need. For new educators or those new to the subject, relevant blogs and pre-made lesson plans will be available so instructors can ensure their success. The expectation of Mars-Learning is to provide stress-free access to learning materials that fall within a wide range of curricula. By providing a thorough and encompassing site, Mars-Learning hopes to further our understanding of the Red Planet and equip students with the knowledge and passion to continue this research.

  3. A service-oriented distributed semantic mediator: integrating multiscale biomedical information.

    PubMed

    Mora, Oscar; Engelbrecht, Gerhard; Bisbal, Jesus

    2012-11-01

    Biomedical research continuously generates large amounts of heterogeneous and multimodal data spread over multiple data sources. These data, if appropriately shared and exploited, could dramatically improve the research practice itself, and ultimately the quality of health care delivered. This paper presents DISMED (DIstributed Semantic MEDiator), an open source semantic mediator that provides a unified view of a federated environment of multiscale biomedical data sources. DISMED is a Web-based software application to query and retrieve information distributed over a set of registered data sources, using semantic technologies. It also offers a user-friendly interface specifically designed to simplify the usage of these technologies by non-expert users. Although the architecture of the software mediator is generic and domain independent, in the context of this paper, DISMED has been evaluated for managing biomedical environments and facilitating research with respect to the handling of scientific data distributed in multiple heterogeneous data sources. As part of this contribution, a quantitative evaluation framework has been developed. It consists of a benchmarking scenario and the definition of five realistic use-cases. This framework, created entirely with public datasets, has been used to compare the performance of DISMED against other available mediators. It is also available to the scientific community in order to evaluate progress in the domain of semantic mediation, in a systematic and comparable manner. The results show an average improvement in execution time by DISMED of 55% compared to the second-best alternative in four out of the five use-cases of the experimental evaluation.

  4. Medical data sheet in safe havens - A tri-layer cryptic solution.

    PubMed

    Praveenkumar, Padmapriya; Amirtharajan, Rengarajan; Thenmozhi, K; Balaguru Rayappan, John Bosco

    2015-07-01

    Secured sharing of the diagnostic reports and scan images of patients among doctors with complementary expertise for collaborative treatment will help to provide maximum care through faster and decisive decisions. In this context, a tri-layer cryptic solution has been proposed and implemented on Digital Imaging and Communications in Medicine (DICOM) images to establish a secured communication for effective referrals among peers without compromising the privacy of patients. In this approach, a blend of three cryptic schemes, namely Latin square image cipher (LSIC), discrete Gould transform (DGT) and Rubik's encryption, has been adopted. Among them, LSIC provides better substitution, confusion and shuffling of the image blocks; DGT incorporates tamper-proofing with authentication; and Rubik renders a permutation of DICOM image pixels. The developed algorithm has been successfully implemented and tested in both software (MATLAB 7) and hardware (Universal Software Radio Peripheral, USRP) environments. Specifically, the encrypted data were tested by transmitting them through an additive white Gaussian noise (AWGN) channel model. Furthermore, the robustness of the implemented algorithm was validated by employing standard metrics such as the unified average changing intensity (UACI), number of pixels change rate (NPCR), correlation values and histograms. The estimated metrics have also been compared with those of existing methods and show superiority in terms of a large key space (to defy brute-force attacks), resistance to cropping attacks, strong key sensitivity and a uniform pixel value distribution after encryption. Copyright © 2015 Elsevier Ltd. All rights reserved.
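
    The NPCR and UACI metrics cited above have standard definitions for 8-bit images; the short sketch below computes them for two hypothetical cipher images (for truly random cipher images the expected values are roughly 99.6% and 33.5%, respectively). It is independent of the proposed tri-layer scheme.

```python
# Standard NPCR/UACI definitions for 8-bit images (independent of the proposed
# tri-layer scheme); the two cipher images below are hypothetical random data.
import numpy as np

def npcr(c1, c2):
    """Number of Pixels Change Rate, in percent."""
    return 100.0 * np.mean(c1 != c2)

def uaci(c1, c2):
    """Unified Average Changing Intensity, in percent, for 8-bit images."""
    return 100.0 * np.mean(np.abs(c1.astype(float) - c2.astype(float)) / 255.0)

rng = np.random.default_rng(1)
cipher_a = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
cipher_b = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
print(npcr(cipher_a, cipher_b), uaci(cipher_a, cipher_b))  # ~99.6 and ~33.5 expected
```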

  5. Reduction of parameters in Finite Unified Theories and the MSSM

    NASA Astrophysics Data System (ADS)

    Heinemeyer, Sven; Mondragón, Myriam; Tracas, Nicholas; Zoupanos, George

    2018-02-01

    The method of reduction of couplings developed by W. Zimmermann, combined with supersymmetry, can lead to realistic quantum field theories, where the gauge and Yukawa sectors are related. It is the basis to find all-loop Finite Unified Theories, where the β-function vanishes to all-loops in perturbation theory. It can also be applied to the Minimal Supersymmetric Standard Model, leading to a drastic reduction in the number of parameters. Both Finite Unified Theories and the reduced MSSM lead to successful predictions for the masses of the third generation of quarks and the Higgs boson, and also predict a heavy supersymmetric spectrum, consistent with the non-observation of supersymmetry so far.

  6. Research and development activities in unified control-structure modeling and design

    NASA Technical Reports Server (NTRS)

    Nayak, A. P.

    1985-01-01

    Results of work to develop a unified control/structures modeling and design capability for large space structures modeling are presented. Recent analytical results are presented to demonstrate the significant interdependence between structural and control properties. A new design methodology is suggested in which the structure, material properties, dynamic model and control design are all optimized simultaneously. Parallel research done by other researchers is reviewed. The development of a methodology for global design optimization is recommended as a long-term goal. It is suggested that this methodology should be incorporated into computer aided engineering programs, which eventually will be supplemented by an expert system to aid design optimization.

  7. Regularization and computational methods for precise solution of perturbed orbit transfer problems

    NASA Astrophysics Data System (ADS)

    Woollands, Robyn Michele

    The author has developed a suite of algorithms for solving the perturbed Lambert's problem in celestial mechanics. These algorithms have been implemented as a parallel computation tool that has broad applicability. This tool is composed of four component algorithms and each provides unique benefits for solving a particular type of orbit transfer problem. The first one utilizes a Keplerian solver (a-iteration) for solving the unperturbed Lambert's problem. This algorithm not only provides a "warm start" for solving the perturbed problem but is also used to identify which of several perturbed solvers is best suited for the job. The second algorithm solves the perturbed Lambert's problem using a variant of the modified Chebyshev-Picard iteration initial value solver that solves two-point boundary value problems. This method converges over about one third of an orbit and does not require a Newton-type shooting method and thus no state transition matrix needs to be computed. The third algorithm makes use of regularization of the differential equations through the Kustaanheimo-Stiefel transformation and extends the domain of convergence over which the modified Chebyshev-Picard iteration two-point boundary value solver will converge, from about one third of an orbit to almost a full orbit. This algorithm also does not require a Newton-type shooting method. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver to solve the perturbed two-impulse Lambert problem over multiple revolutions. The method of particular solutions is a shooting method but differs from the Newton-type shooting methods in that it does not require integration of the state transition matrix. The mathematical developments that underlie these four algorithms are derived in the chapters of this dissertation. For each of the algorithms, some orbit transfer test cases are included to provide insight on accuracy and efficiency of these individual algorithms. Following this discussion, the combined parallel algorithm, known as the unified Lambert tool, is presented and an explanation is given as to how it automatically selects which of the three perturbed solvers to compute the perturbed solution for a particular orbit transfer. The unified Lambert tool may be used to determine a single orbit transfer or for generating of an extremal field map. A case study is presented for a mission that is required to rendezvous with two pieces of orbit debris (spent rocket boosters). The unified Lambert tool software developed in this dissertation is already being utilized by several industrial partners and we are confident that it will play a significant role in practical applications, including solution of Lambert problems that arise in the current applications focused on enhanced space situational awareness.
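
    As background for readers unfamiliar with Picard-type solvers, the sketch below shows a plain Picard (fixed-point) iteration for an initial value problem on a fixed grid; the dissertation's modified Chebyshev-Picard iteration is far more capable (Chebyshev polynomial approximation, two-point boundary value problems, regularization), so this conveys only the basic idea and is not the author's algorithm.

```python
# Plain Picard (fixed-point) iteration for an IVP, illustrating only the basic idea
# behind Chebyshev-Picard methods; not the MCPI algorithm from the dissertation.
import numpy as np

def picard_ivp(f, x0, t, iterations=30):
    """Iterate x_{k+1}(t) = x0 + integral_{t0}^{t} f(x_k(s), s) ds on a fixed grid."""
    x = np.full_like(t, x0, dtype=float)
    for _ in range(iterations):
        integrand = f(x, t)
        # cumulative trapezoidal integral from t[0] to each node
        integral = np.concatenate(([0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
        x = x0 + integral
    return x

t = np.linspace(0.0, 1.0, 101)
x = picard_ivp(lambda x, t: x, x0=1.0, t=t)      # dx/dt = x, exact solution e^t
print(np.max(np.abs(x - np.exp(t))))             # small quadrature-limited residual
```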

  8. DockoMatic 2.0: high throughput inverse virtual screening and homology modeling.

    PubMed

    Bullock, Casey; Cornia, Nic; Jacob, Reed; Remm, Andrew; Peavey, Thomas; Weekes, Ken; Mallory, Chris; Oxford, Julia T; McDougal, Owen M; Andersen, Timothy L

    2013-08-26

    DockoMatic is a free and open source application that unifies a suite of software programs within a user-friendly graphical user interface (GUI) to facilitate molecular docking experiments. Here we describe the release of DockoMatic 2.0; significant software advances include the ability to (1) conduct high throughput inverse virtual screening (IVS); (2) construct 3D homology models; and (3) customize the user interface. Users can now efficiently set up, start, and manage IVS experiments through the DockoMatic GUI by specifying receptor(s), ligand(s), grid parameter file(s), and the docking engine (either AutoDock or AutoDock Vina). DockoMatic automatically generates the needed experiment input files and output directories and allows the user to manage and monitor job progress. Upon job completion, a summary of results is generated by DockoMatic to facilitate interpretation by the user. DockoMatic functionality has also been expanded to facilitate the construction of 3D protein homology models using the Timely Integrated Modeler (TIM) wizard. The TIM wizard provides an interface that accesses the basic local alignment search tool (BLAST) and MODELER programs and guides the user through the steps needed to easily and efficiently create 3D homology models for biomacromolecular structures. The DockoMatic GUI can be customized by the user, and the software design makes it relatively easy to integrate additional docking engines, scoring functions, or third-party programs. DockoMatic is a free, comprehensive molecular docking software program for all levels of scientists in both research and education.
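
    A hedged sketch of the kind of inverse virtual screening loop the GUI automates, assuming an AutoDock Vina-style command line and illustrative file names (not DockoMatic's internals):

    ```python
    # Hypothetical IVS loop: dock one ligand against a list of receptors by
    # calling the AutoDock Vina command line. Paths and the per-receptor config
    # layout are assumptions made for this sketch.
    import subprocess
    from pathlib import Path

    ligand = Path("ligand.pdbqt")
    receptors = sorted(Path("receptors").glob("*.pdbqt"))
    out_dir = Path("ivs_results")
    out_dir.mkdir(exist_ok=True)

    for receptor in receptors:
        out_pose = out_dir / f"{receptor.stem}_docked.pdbqt"
        cmd = [
            "vina",
            "--receptor", str(receptor),
            "--ligand", str(ligand),
            "--config", f"grids/{receptor.stem}.txt",  # assumed per-receptor grid box
            "--out", str(out_pose),
        ]
        subprocess.run(cmd, check=True)   # one docking job per receptor
    ```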

  9. [Recombinant granulocyte-colony stimulating factor (filgrastim): optimization of conditions of isolation and purification from inclusion body].

    PubMed

    Kononova, N V; Iakovlev, A V; Zhuravko, A M; Pankeev, N N; Minaev, S V; Bobruskin, A I; Mart'ianov, V A

    2014-01-01

    We developed a unified process platform for two recombinant human GCSF medicines, one with non-prolonged and the other with prolonged action. The unified technology simplified production and reduced its cost, while the introduction of an additional pegylation stage into the technological line made it easier to obtain medicines with different durations of action and allowed the process documentation to be standardized according to GMP requirements.

  10. Left Handed Materials Based on Magnetic Nanocomposites

    DTIC Science & Technology

    2006-10-18

    ...developed a theory that unifies DNMs and SNMs as a function of two fundamental material parameters: the quality factors for permittivity (Qe = ε'/ε'') and permeability (Qμ = μ'/μ'') ... simultaneously negative effective permeability μeff and permittivity εeff to form an LHM, or only a single negative parameter (SNM) to form a negative-index ...
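
    Written out cleanly, the two quality factors referenced in the snippet are:

    ```latex
    \[
      Q_\varepsilon = \frac{\varepsilon'}{\varepsilon''}, \qquad
      Q_\mu = \frac{\mu'}{\mu''}
    \]
    ```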

  11. Checkout systems: Summary report for the universal control and display console

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The development of a unified test equipment checkout concept based on a universal control and display console system is discussed. The checkout requirements are analyzed for the shuttle and space station. Capability, size, and utilization requirements and specifications of the ground checkout system are established on the basis of engineering trade-off studies. Recommendations related to the attainment of the overall unified test equipment conceptual goals and objectives are submitted.

  12. Towards a unification of evolutionary dynamics. Comment on "Answering Schrödinger's question: A free-energy formulation" by Maxwell James Désormeau Ramstead et al.

    NASA Astrophysics Data System (ADS)

    Campbell, John O.

    2018-03-01

    In 2006 Karl Friston introduced the free energy principle (FEP) to neuroscience as a unifying concept [1]. This proposal, along with its use in developing the 'Bayesian Brain' formulation, quickly gained traction, and a 2008 feature article in New Scientist heralded it as providing a promising unified theory of the brain [2]:

  13. A Unified Air-Sea Visualization System: Survey on Gridding Structures

    NASA Technical Reports Server (NTRS)

    Anand, Harsh; Moorhead, Robert

    1995-01-01

    The goal is to develop a Unified Air-Sea Visualization System (UASVS) to enable the rapid fusion of observational, archival, and model data for verification and analysis. To design and develop UASVS, modelers were polled to determine the gridding structures and visualization systems used, and their needs with respect to visual analysis. A basic UASVS requirement is to allow a modeler to explore multiple data sets within a single environment, or to interpolate multiple datasets onto one unified grid. From this survey, the UASVS should be able to visualize 3D scalar/vector fields; render isosurfaces; visualize arbitrary slices of the 3D data; visualize data defined on spectral element grids with the minimum number of interpolation stages; render contours; produce 3D vector plots and streamlines; provide unified visualization of satellite images, observations, and model output overlays; display the visualization on a projection of the user's choice; implement functions so the user can derive diagnostic values; animate the data to see the time evolution; animate ocean and atmosphere at different rates; store the record of cursor movement, smooth the path, and animate a window around the moving path; repeatedly start and stop the visual time-stepping; generate VHS tape animations; work on a variety of workstations; and allow visualization across clusters of workstations and scalable high-performance computer systems.
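
    As a small illustration of the "one unified grid" requirement (not UASVS code), two differently sampled fields can be interpolated onto a common grid, e.g. with SciPy; the synthetic fields below are assumptions for the sketch:

    ```python
    # Interpolate two differently gridded fields onto one unified grid so they
    # can be compared or differenced in a single visualization environment.
    import numpy as np
    from scipy.interpolate import griddata

    # Pretend "model" and "observation" fields sampled on different point sets.
    rng = np.random.default_rng(0)
    model_pts = rng.uniform(0, 1, size=(500, 2))
    model_vals = np.sin(4 * model_pts[:, 0]) * np.cos(3 * model_pts[:, 1])
    obs_pts = rng.uniform(0, 1, size=(200, 2))
    obs_vals = np.sin(4 * obs_pts[:, 0]) * np.cos(3 * obs_pts[:, 1]) + 0.05

    # One unified target grid for side-by-side comparison.
    xi, yi = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    model_on_grid = griddata(model_pts, model_vals, (xi, yi), method="linear")
    obs_on_grid = griddata(obs_pts, obs_vals, (xi, yi), method="linear")
    difference = model_on_grid - obs_on_grid   # ready for unified visualization
    ```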

  14. Exploring Environmental Factors in Nursing Workplaces That Promote Psychological Resilience: Constructing a Unified Theoretical Model.

    PubMed

    Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S; Breen, Lauren J; Witt, Regina R; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin

    2016-01-01

    Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and to ensure safe patient care. A unified theoretical framework, titled the Health Services Workplace Environmental Resilience Model (HSWERM), is presented to explain the environmental factors in the workplace that promote nurses' resilience. The framework builds on a previously published theoretical model of individual resilience, which identified the key constructs of psychological resilience as self-efficacy, coping and mindfulness, but did not examine environmental factors in the workplace that promote nurses' resilience. This unified theoretical framework was developed through a synthesis of the literature, drawing on data from international studies and literature reviews on the nursing workforce in hospitals. The most frequent workplace environmental factors were identified, extracted and clustered in alignment with the key constructs of psychological resilience. Six major organizational concepts emerged that related to a positive resilience-building workplace and formed the foundation of the theoretical model. Three concepts related to nursing staff support (professional, practice, personal) and three related to nursing staff development (professional, practice, personal) within the workplace environment. The unified theoretical model incorporates these concepts within the workplace context, linking them to the nurse and then to impacts on personal resilience and workplace outcomes; its use has the potential to increase staff retention and the quality of patient care.

  15. Trichotomous processes in early memory development, aging, and neurocognitive impairment: a unified theory.

    PubMed

    Brainerd, C J; Reyna, V F; Howe, M L

    2009-10-01

    One of the most extensively investigated topics in the adult memory literature, dual memory processes, has had virtually no impact on the study of early memory development. The authors remove the key obstacles to such research by formulating a trichotomous theory of recall that combines the traditional dual processes of recollection and familiarity with a reconstruction process. The theory is then embedded in a hidden Markov model that measures all 3 processes with low-burden tasks that are appropriate for even young children. These techniques are applied to a large corpus of developmental studies of recall, yielding stable findings about the emergence of dual memory processes between childhood and young adulthood and generating tests of many theoretical predictions. The techniques are extended to the study of healthy aging and to the memory sequelae of common forms of neurocognitive impairment, resulting in a theoretical framework that is unified over 4 major domains of memory research: early development, mainstream adult research, aging, and neurocognitive impairment. The techniques are also extended to recognition, creating a unified dual process framework for recall and recognition.

  16. A unified architecture for biomedical search engines based on semantic web technologies.

    PubMed

    Jalali, Vahid; Matash Borujerdi, Mohammad Reza

    2011-04-01

    There has been huge growth in the volume of published biomedical research in recent years. Many medical search engines have been designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the ontologies used and for the overall retrieval process hampers the evaluation of different search engines, and interoperability between them, under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are further parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections, and results are reported in terms of precision vs. recall and mean average precision for the different approaches used by this search engine.
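
    As an illustrative sketch only, a document annotation expressed against a semantic-web style schema might look like the following with rdflib; the namespace and predicate names are assumptions, not the schema defined in the paper:

    ```python
    # Minimal RDF annotation of one document with a controlled-vocabulary
    # concept, of the kind a unified annotation model standardizes.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/medsearch/")  # hypothetical schema namespace
    g = Graph()

    doc = EX["doc/PMID123456"]
    g.add((doc, RDF.type, EX.Document))
    g.add((doc, EX.title, Literal("Aspirin and platelet aggregation")))
    # Link the document to a controlled-vocabulary concept (e.g. a MeSH heading).
    g.add((doc, EX.annotatedWith, URIRef("http://id.nlm.nih.gov/mesh/D001241")))

    print(g.serialize(format="turtle"))
    ```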

  17. Design, Development and Analysis of Centrifugal Blower

    NASA Astrophysics Data System (ADS)

    Baloni, Beena Devendra; Channiwala, Salim Abbasbhai; Harsha, Sugnanam Naga Ramannath

    2018-06-01

    Centrifugal blowers are widely used turbomachines in all kinds of modern industrial and domestic applications. Blower manufacturing seldom follows an optimum design solution for each individual blower. Although centrifugal blowers have been developed into highly efficient machines, design is still based on various empirical and semi-empirical rules proposed by fan designers. Different methodologies are used to design the impeller and the other components of a blower. The objective of the present study is to examine explicit design methodologies and to trace a unified design that achieves better design-point performance. This unified design methodology is based on fundamental concepts and a minimum of assumptions. A parametric study is also carried out on the effect of design parameters on pressure ratio and on their interdependence in the design. A code based on the unified design is developed in C. Numerical analysis is carried out to check the flow parameters inside the blower. Two blowers, one based on the present design and the other on an industrial design, are manufactured with a standard OEM blower manufacturing unit. The two designs are compared on the basis of experimental performance analysis as per the IS standard. The results show better efficiency and a higher flow rate for the same pressure head for the present design compared with the industrial one.
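
    For orientation, a back-of-the-envelope impeller calculation based on the Euler turbomachine relation is sketched below; the numerical values and the hydraulic efficiency are assumptions, and the paper's unified methodology involves many more parameters:

    ```python
    # Rough pressure-rise estimate for a backward-curved centrifugal impeller
    # from the Euler relation (no inlet whirl): dp_ideal = rho * u2 * cu2.
    import math

    rho = 1.2          # air density, kg/m^3
    n_rpm = 2900       # shaft speed
    d2 = 0.40          # impeller outer diameter, m
    beta2_deg = 60.0   # blade exit angle, measured from the tangential direction
    q = 0.5            # volume flow rate, m^3/s
    b2 = 0.04          # blade width at exit, m
    eta_h = 0.80       # assumed hydraulic efficiency

    u2 = math.pi * d2 * n_rpm / 60.0                     # blade tip speed
    cm2 = q / (math.pi * d2 * b2)                        # meridional velocity at exit
    cu2 = u2 - cm2 / math.tan(math.radians(beta2_deg))   # tangential (whirl) velocity
    delta_p_ideal = rho * u2 * cu2                       # Euler pressure rise, Pa
    delta_p = eta_h * delta_p_ideal                      # estimate including losses

    print(f"tip speed {u2:.1f} m/s, pressure rise ~{delta_p:.0f} Pa")
    ```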

  18. Pulseq-Graphical Programming Interface: Open source visual environment for prototyping pulse sequences and integrated magnetic resonance imaging algorithm development.

    PubMed

    Ravi, Keerthi Sravan; Potdar, Sneha; Poojar, Pavan; Reddy, Ashok Kumar; Kroboth, Stefan; Nielsen, Jon-Fredrik; Zaitsev, Maxim; Venkatesan, Ramesh; Geethanath, Sairam

    2018-03-11

    To provide a single open-source platform for comprehensive MR algorithm development inclusive of simulations, pulse sequence design and deployment, reconstruction, and image analysis. We integrated the "Pulseq" platform for vendor-independent pulse programming with the Graphical Programming Interface (GPI), a scientific development environment based on Python. Our integrated platform, Pulseq-GPI, permits sequences to be defined visually and exported to the Pulseq file format for execution on an MR scanner. For comparison, Pulseq files using either MATLAB only ("MATLAB-Pulseq") or Python only ("Python-Pulseq") were generated. We demonstrated three fundamental sequences on a 1.5 T scanner. Execution times of the three implementation variants were compared on two operating systems. In vitro phantom images indicate equivalence with the vendor-supplied implementations and MATLAB-Pulseq. The examples demonstrated in this work illustrate the unifying capability of Pulseq-GPI. The execution times of all three implementations were fast (a few seconds). The software is capable of user-interface-based development and/or command-line programming. The tool demonstrated here, Pulseq-GPI, integrates the open-source simulation, reconstruction, and analysis capabilities of GPI Lab with the pulse sequence design and deployment features of Pulseq. Current and future work includes providing an ISMRMRD interface and incorporating Specific Absorption Rate and Peripheral Nerve Stimulation computations. Copyright © 2018 Elsevier Inc. All rights reserved.
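
    As a generic illustration of the timing arithmetic such sequence-prototyping tools automate (not Pulseq-GPI code), the shortest trapezoidal gradient for a required zeroth moment under assumed amplitude and slew-rate limits can be computed as follows:

    ```python
    # Shortest trapezoidal gradient achieving a required area (zeroth moment)
    # given amplitude and slew-rate limits; hardware limits are assumptions.
    GAMMA = 42.577e6       # 1H gyromagnetic ratio, Hz/T

    def trapezoid_timing(area, g_max=30e-3, slew=120.0):
        """Return (ramp_time, flat_time, amplitude) for a gradient of `area`
        in s*T/m, with amplitude in T/m and slew rate in T/m/s."""
        # Try a triangle first: area = amp * ramp, with amp = slew * ramp.
        ramp = (area / slew) ** 0.5
        amp = slew * ramp
        if amp <= g_max:
            return ramp, 0.0, amp              # triangle is enough
        ramp = g_max / slew                    # otherwise clip at g_max
        flat = area / g_max - ramp             # remaining area on the plateau
        return ramp, flat, g_max

    # Example: prephaser area reaching k_max for 1 mm in-plane resolution.
    k_max = 1.0 / (2 * 1e-3)                   # cycles/m
    area = k_max / GAMMA                       # s*T/m
    print(trapezoid_timing(area))
    ```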

  19. CaseMIDAS - A reactive planning architecture for the man-machine integration design and analysis system

    NASA Technical Reports Server (NTRS)

    Pease, R. Adam

    1995-01-01

    MIDAS is a set of tools that allow a designer to specify the physical and functional characteristics of a complex system, such as an aircraft cockpit, and analyze the system with regard to human performance. MIDAS allows for a number of static analyses, such as military-standard reach and fit analysis, display legibility analysis, and vision polars. It also supports dynamic simulation of mission segments with 3D visualization. MIDAS development has incorporated several models of human planning behavior. The CaseMIDAS effort has been to provide a simplified and unified approach to modeling task selection behavior. Except for highly practiced, routine procedures, a human operator expends cognitive effort while determining what step to take next in the accomplishment of mission tasks. Current versions of MIDAS do not model this effort in a consistent and inclusive manner; CaseMIDAS also attempts to address this issue. The CaseMIDAS project has yielded an easy-to-use software module for case creation and execution that is integrated with existing MIDAS simulation components.
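
    A toy sketch of case-based task selection, nearest-neighbour retrieval over stored context/action cases, is shown below; the features and cases are hypothetical, and this is not the CaseMIDAS module:

    ```python
    # Retrieve the stored case whose context features are closest to the current
    # situation and return its recommended next task.
    import numpy as np

    # Each case: (context feature vector, recommended next task)
    # Hypothetical features: [altitude_norm, workload_norm, threat_level]
    CASES = [
        (np.array([0.9, 0.2, 0.0]), "monitor_instruments"),
        (np.array([0.3, 0.8, 0.1]), "delegate_radio_calls"),
        (np.array([0.2, 0.6, 0.9]), "initiate_evasive_procedure"),
    ]

    def select_task(context):
        """Nearest-neighbour retrieval over the stored cases."""
        distances = [np.linalg.norm(context - feats) for feats, _ in CASES]
        return CASES[int(np.argmin(distances))][1]

    print(select_task(np.array([0.25, 0.7, 0.85])))  # -> initiate_evasive_procedure
    ```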

  20. Position measurement of the direct drive motor of Large Aperture Telescope

    NASA Astrophysics Data System (ADS)

    Li, Ying; Wang, Daxing

    2010-07-01

    With the development of space science and astronomy, the production of large aperture and very large aperture telescopes will become the trend. One method of achieving precise drive for a large aperture telescope is direct drive technology with a unified design of the electromagnetic and mechanical structure. The direct drive precision rotary table with a diameter of 2.5 meters that we have researched and produced is a typical example of mechanical and electrical integration design. This paper mainly introduces the position measurement control system of the direct drive motor. In the design of this motor, the position measurement control system must have high resolution, precisely align with and measure the position of the rotor shaft, and convert the position information into commutation information corresponding to the required number of motor poles. The system uses a high-precision metal band encoder and an absolute encoder; the encoder information is processed in software on a 32-bit RISC CPU to obtain a high-resolution composite encoder. Relevant laboratory test results are given at the end, indicating that the position measurement can be applied to a large aperture telescope control system. This project is supported by the Chinese National Natural Science Funds (10833004).
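
    An illustrative sketch (with assumed resolutions and pole count) of referencing a fine incremental count to a coarse absolute reading and converting the result to an electrical commutation angle:

    ```python
    # Combine a coarse absolute reading with a fine incremental count, then map
    # the mechanical angle to the electrical angle used for motor commutation.
    # Resolutions and pole count are assumptions for this sketch.
    import math

    POLE_PAIRS = 40                 # assumed pole pairs of the direct drive motor
    ABS_BITS = 16                   # coarse absolute encoder resolution
    INC_COUNTS_PER_REV = 2**24      # fine incremental (composite) resolution

    def mechanical_angle(abs_count_at_startup, inc_count):
        """Reference the fine incremental count to the coarse absolute reading
        latched at startup, then wrap to one mechanical revolution."""
        theta0 = 2 * math.pi * abs_count_at_startup / 2**ABS_BITS
        dtheta = 2 * math.pi * inc_count / INC_COUNTS_PER_REV
        return (theta0 + dtheta) % (2 * math.pi)

    def electrical_angle(theta_mech):
        """Map mechanical angle to the electrical angle for commutation."""
        return (theta_mech * POLE_PAIRS) % (2 * math.pi)

    theta = mechanical_angle(abs_count_at_startup=12345, inc_count=3_210_987)
    print(electrical_angle(theta))
    ```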
