Sample records for reusing architectural knowledge

  1. Software design by reusing architectures

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay; Nii, H. Penny

    1992-01-01

    Abstraction fosters reuse by providing a class of artifacts that can be instantiated or customized to produce a set of artifacts meeting different specific requirements. It is proposed that significant leverage can be obtained by abstracting software system designs and the design process. The result of such an abstraction is a generic architecture and a set of knowledge-based customization tools that can be used to instantiate the generic architecture. An approach for designing software systems based on the above idea is described. The approach is illustrated through an implemented example, and the advantages and limitations of the approach are discussed.

  2. Participative Knowledge Production of Learning Objects for E-Books.

    ERIC Educational Resources Information Center

    Dodero, Juan Manuel; Aedo, Ignacio; Diaz, Paloma

    2002-01-01

    Defines a learning object as any digital resource that can be reused to support learning and thus considers electronic books as learning objects. Highlights include knowledge management; participative knowledge production, i.e. authoring electronic books by a distributed group of authors; participative knowledge production architecture; and…

  3. A task-based support architecture for developing point-of-care clinical decision support systems for the emergency department.

    PubMed

    Wilk, S; Michalowski, W; O'Sullivan, D; Farion, K; Sayyad-Shirabad, J; Kuziemsky, C; Kukawka, B

    2013-01-01

    The purpose of this study was to create a task-based support architecture for developing clinical decision support systems (CDSSs) that assist physicians in making decisions at the point of care in the emergency department (ED). The backbone of the proposed architecture was established by a task-based emergency workflow model for a patient-physician encounter. The architecture was designed according to an agent-oriented paradigm. Specifically, we used the O-MaSE (Organization-based Multi-agent System Engineering) method, which allows for iterative translation of functional requirements into architectural components (e.g., agents). The agent-oriented paradigm was extended with ontology-driven design to implement ontological models representing the knowledge required by specific agents to operate. The task-based architecture allows for the creation of a CDSS that is aligned with the task-based emergency workflow model. It facilitates decoupling of executable components (agents) from embedded domain knowledge (ontological models), thus supporting their interoperability, sharing, and reuse. The generic architecture was implemented as a pilot system, MET3-AE, a CDSS to help with the management of pediatric asthma exacerbation in the ED. The system was evaluated in a hospital ED. The architecture allows for the creation of a CDSS that integrates support for all tasks from the task-based emergency workflow model and interacts with hospital information systems. The proposed architecture also allows for reusing and sharing system components and knowledge across disease-specific CDSSs.
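
    The decoupling of executable agents from embedded domain knowledge described above can be sketched schematically. The class and knowledge content below are hypothetical illustrations of the design idea, not the MET3-AE implementation or its ontologies.

```python
class TriageAgent:
    """Executable component whose behavior is generic; the domain knowledge
    it operates on is supplied as a separate, swappable model (here a plain
    mapping standing in for an ontological model)."""

    def __init__(self, ontology):
        self.ontology = ontology  # injected, not embedded

    def recommend(self, finding):
        # The agent's logic never changes; only the knowledge model does.
        return self.ontology.get(finding, "no guidance for this finding")

# The same agent class is reused across disease-specific CDSSs by swapping
# the knowledge model (content below is invented for illustration).
asthma_model = {"wheezing": "assess severity; consider bronchodilator"}
agent = TriageAgent(asthma_model)
```

    Because the agent holds only a reference to its knowledge model, a different disease-specific model can be substituted without touching the agent code, which is the interoperability-and-reuse property the abstract attributes to the architecture.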

  4. The Study on Collaborative Manufacturing Platform Based on Agent

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-yan; Qu, Zheng-geng

    To address the trend toward knowledge-intensive collaborative manufacturing development, we describe a multi-agent architecture supporting a knowledge-based collaborative manufacturing development platform. By virtue of the wrapper services and communication capabilities that agents provide, the proposed architecture facilitates the organization and collaboration of multi-disciplinary individuals and tools. By effectively supporting the formal representation, capture, retrieval, and reuse of manufacturing knowledge, the generalized knowledge repository, built on an ontology library, enables engineers to meaningfully exchange information and pass knowledge across boundaries. Intelligent agent technology increases the efficiency and interoperability of traditional KBE systems and provides comprehensive design environments for engineers.

  5. Applying Service-Oriented Architecture on The Development of Groundwater Modeling Support System

    NASA Astrophysics Data System (ADS)

    Li, C. Y.; WANG, Y.; Chang, L. C.; Tsai, J. P.; Hsiao, C. T.

    2016-12-01

    Groundwater simulation has become an essential step in groundwater resources management and assessment. Many stand-alone pre- and post-processing software packages exist to reduce the model simulation workload, but stand-alone software neither supports centralized management of data and simulation results nor provides network sharing functions. Hence, it is difficult to share and reuse data and knowledge (simulation cases) systematically within or across companies. Therefore, this study develops a centralized, network-based groundwater modeling support system to assist model construction. The system is based on a service-oriented architecture and allows remote users to develop their modeling cases over the internet. The data and cases (knowledge) are thus easy to manage centrally. MODFLOW, the most widely used groundwater model in the world, is the modeling engine of the system. The system provides a data warehouse to store groundwater observations, a MODFLOW Support Service, a MODFLOW Input File & Shapefile Convert Service, a MODFLOW Service, and an Expert System Service to assist researchers in building models. Since the system architecture is service-oriented, it is scalable and flexible. The system can be easily extended to include scenario analysis and knowledge management to facilitate the reuse of groundwater modeling knowledge.

  6. A Bibliography of Externally Published Works by the SEI Engineering Techniques Program

    DTIC Science & Technology

    1992-08-01

    media, and virtual reality * model- based engineering * programming languages * reuse * software architectures * software engineering as a discipline...Knowledge- Based Engineering Environments." IEEE Expert 3, 2 (May 1988): 18-23, 26-32. Audience: Practitioner [Klein89b] Klein, D.V. "Comparison of...Terms with Software Reuse Terminology: A Model- Based Approach." ACM SIGSOFT Software Engineering Notes 16, 2 (April 1991): 45-51. Audience: Practitioner

  7. Ground support system methodology and architecture

    NASA Technical Reports Server (NTRS)

    Schoen, P. D.

    1991-01-01

    A synergistic approach to systems test and support is explored. A building-block architecture provides transportability of data, procedures, and knowledge. The synergistic approach also lowers cost and risk over the life cycle of a program. Detecting design errors at the earliest phase reduces the cost of vehicle ownership. The distributed, scalable architecture is based on industry standards, maximizing transparency and maintainability. An autonomous control structure provides for distributed and segmented systems. Control of interfaces maximizes compatibility and reuse, reducing long-term program cost. The intelligent data management architecture also reduces analysis time and cost (automation).

  8. SEL Ada reuse analysis and representations

    NASA Technical Reports Server (NTRS)

    Kester, Rush

    1990-01-01

    Overall, it was revealed that the pattern of Ada reuse has evolved from initial reuse of utility components into reuse of generalized application architectures. Utility components were both domain-independent utilities, such as queues and stacks, and domain-specific utilities, such as those that implement spacecraft orbit and attitude mathematical functions and physics or astronomical models. The level of reuse was significantly increased with the development of a generalized telemetry simulator architecture. The use of Ada generics significantly increased the level of verbatim reuse, because generics allow the configurable aspects of a design to be parameterized during reuse. A key factor in implementing generalized architectures was the ability to use generic subprogram parameters to tailor parts of the algorithm embedded within the architecture. The use of object-oriented design (in which objects model real-world entities) significantly improved modularity for reuse. Encapsulating into packages the data and operations associated with common real-world entities creates natural building blocks for reuse.
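
    The role of generic subprogram parameters described above can be sketched in Python, using function parameters in place of Ada generics. The telemetry-related names are illustrative inventions, not identifiers from the SEL code.

```python
def make_telemetry_simulator(attitude_model, format_frame):
    """Generalized architecture: the simulation loop is fixed and reused
    verbatim; the configurable aspects (the attitude model and the frame
    layout) are passed in, playing the role that Ada generic subprogram
    parameters play in the SEL architecture."""
    def simulate(times):
        return [format_frame(t, attitude_model(t)) for t in times]
    return simulate

# Two instantiations of the same generic architecture, tailored only
# through their parameters:
spin_sim = make_telemetry_simulator(
    attitude_model=lambda t: ("spin", t * 5.0),
    format_frame=lambda t, att: {"t": t, "attitude": att},
)
fixed_sim = make_telemetry_simulator(
    attitude_model=lambda t: ("inertial", 0.0),
    format_frame=lambda t, att: (t, att),
)
```

    The loop body is never copied or edited per mission; only the supplied parameters change, which is what makes verbatim reuse of the architecture possible.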

  9. Design reuse experience of space and hazardous operations robots

    NASA Technical Reports Server (NTRS)

    Oneil, P. Graham

    1994-01-01

    A comparison of design drivers for space and hazardous nuclear waste operating robots details similarities and differences in operations, performance and environmental parameters for these critical environments. The similarities are exploited to provide low risk system components based on reuse principles and design knowledge. Risk reduction techniques are used for bridging areas of significant differences. As an example, risk reduction of a new sensor design for nuclear environment operations is employed to provide upgradeable replacement units in a reusable architecture for significantly higher levels of radiation.

  10. 10th Annual CMMI Technology Conference and User Group Tutorial Session

    DTIC Science & Technology

    2010-11-15

    Reuse That Pays Off: Software Product Lines BUSINESS GOALS/ APPLICATION DOMAIN ARCHITECTURE COMPONENTS and SERVICES pertain to share an are built... services PRODUCT LINES = STRATEGIC REUSE CMMI V1.3 and Architecture Oct 2010 © 2010 Carnegie Mellon University 46 91 CMMI V1.3 and Architecture © 2010... product component, the performance mustquality attribute can sometimes be partitioned for unique allocation to each product component as a derived

  11. Software reuse example and challenges at NSIDC

    NASA Astrophysics Data System (ADS)

    Billingsley, B. W.; Brodzik, M.; Collins, J. A.

    2009-12-01

    NSIDC has created a new data discovery and access system, Searchlight, to provide users with the data they want in the format they want. NSIDC Searchlight supports discovery and access to disparate data types with on-the-fly reprojection, regridding, and reformatting. Architected both to reuse open-source systems and to be reused itself, Searchlight reuses GDAL and Proj4 for data manipulation and format conversions, the netCDF Java library for creating netCDF output, MapServer and OpenLayers for defining spatial criteria, and the JTS Topology Suite (JTS) in conjunction with Hibernate Spatial for database interaction and rich OGC-compliant spatial objects. The application reuses popular Java and JavaScript libraries including Struts 2, Spring, JPA (Hibernate), Sitemesh, JFreeChart, jQuery, and Dojo, along with a PostGIS/PostgreSQL database. Future reuse of Searchlight components is supported at varying architectural levels, ranging from the database and model components to web services. We present the tools, libraries, and programs that Searchlight has reused. We describe the architecture of Searchlight and explain the strategies deployed for reusing existing software and how Searchlight is built for reuse. We also discuss NSIDC's reuse of Searchlight components to support rapid development of new data delivery systems.

  12. A General Architecture for Intelligent Tutoring of Diagnostic Classification Problem Solving

    PubMed Central

    Crowley, Rebecca S.; Medvedeva, Olga

    2003-01-01

    We report on a general architecture for creating knowledge-based medical training systems to teach diagnostic classification problem solving. The approach is informed by our previous work describing the development of expertise in classification problem solving in Pathology. The architecture envelops the traditional Intelligent Tutoring System design within the Unified Problem-solving Method description Language (UPML) architecture, supporting component modularity and reuse. Based on the domain ontology, domain task ontology, and case data, the abstract problem-solving methods of the expert model create a dynamic solution graph. Student interaction with the solution graph is filtered through an instructional layer, which is created by a second set of abstract problem-solving methods and pedagogic ontologies, in response to the current state of the student model. We outline the advantages and limitations of this general approach, and describe its implementation in SlideTutor, a developing Intelligent Tutoring System in Dermatopathology. PMID:14728159

  13. NASA Integrated Model Centric Architecture (NIMA) Model Use and Re-Use

    NASA Technical Reports Server (NTRS)

    Conroy, Mike; Mazzone, Rebecca; Lin, Wei

    2012-01-01

    This whitepaper accepts the goals, needs, and objectives of NASA's Integrated Model-centric Architecture (NIMA); adds experience and expertise from the Constellation program as well as NASA's architecture development efforts; and provides suggested concepts, practices, and norms that nurture and enable model use and re-use across programs, projects, and other complex endeavors. Key components include the ability to effectively move relevant information through a large community, process patterns that support model reuse, and the identification of the necessary meta-information (e.g., history, credibility, and provenance) to safely use and re-use that information. In order to successfully use and re-use models and simulations we must define and meet key organizational and structural needs: 1. We must understand and acknowledge all the roles and players involved from the initial need identification through to the final product, as well as how they change across the lifecycle. 2. We must create the necessary structural elements to store and share NIMA-enabled information throughout the Program or Project lifecycle. 3. We must create the necessary organizational processes to stand up and execute a NIMA-enabled Program or Project throughout its lifecycle. NASA must meet all three of these needs to successfully use and re-use models. The ability to reuse models is a key component of NIMA, and the capabilities inherent in NIMA are key to accomplishing NASA's space exploration goals.

  14. Reusable Autonomy

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Obenschain, Arthur F. (Technical Monitor)

    2002-01-01

    Currently, spacecraft ground systems have a well-defined and somewhat standard architecture and operations concept. Based on domain analysis studies of various control centers conducted over the years, it is clear that ground systems have core capabilities and functionality that are common across all ground systems. This observation alone supports the realization of reuse. Additionally, spacecraft ground systems are increasing in their ability to do things autonomously. They are being engineered using advanced expert systems technology to provide automated support for operators. A clearer understanding of the possible roles of agent technology is advancing the prospects of greater autonomy for these systems. Many of their functional and management tasks are or could be supported by applied agent technology, the dynamics of the ground system's infrastructure could be monitored by agents, there are intelligent agent-based approaches to user interfaces, etc. The premise of this paper is that the concepts associated with software reuse, applicable in consideration of classically engineered ground systems, can be updated to address their application in highly agent-based realizations of future ground systems. As a somewhat simplified example, consider the following situation involving human agents in a ground system context. Let Group A of controllers be working on Mission X. They are responsible for the command, control, and health and safety of the Mission X spacecraft. Let us suppose that Mission X successfully completes its mission and is turned off. Group A could be dispersed or perhaps move to another Mission Y. In this case there would be reuse of the human agents from Mission X to Mission Y. The Group A agents perform their well-understood functions in a somewhat different but related context. There will be a learning or familiarization process that the Group A agents go through to make the new context, determined by the new Mission Y, understood.
This simplified scenario highlights some of the major issues that need to be addressed when considering the situation where Group A is composed of software-based agents (not their human counterparts) and they migrate from one mission support system to another. This paper will address: definition of an agent architecture appropriate to support reuse; identification of the non-mission-specific agent capabilities required; appropriate knowledge representation schemes for mission-specific knowledge; the agent interface with mission-specific knowledge (a type of learning); development of a fully operational group of cooperative software agents for ground system support; and the architecture and operation of a repository of reusable agents that could serve as the source of intelligent components for realizing an autonomous (or nearly autonomous) agent-based ground system, together with an agent-based approach to repository management and operation (an intelligent interface for human use of the repository in a ground-system development activity).

  15. Facilitating the Specification Capture and Transformation Process in the Development of Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Filho, Aluzio Haendehen; Caminada, Numo; Haeusler, Edward Hermann; vonStaa, Arndt

    2004-01-01

    To support the development of flexible and reusable MAS, we have built a framework designated MAS-CF. MAS-CF is a component framework that implements a layered architecture based on contextual composition. Interaction rules, controlled by architecture mechanisms, ensure very low coupling, making possible the sharing of distributed services in a transparent, dynamic, and independent way. These properties promote large-scale reuse, since organizational abstractions can be reused and propagated to all instances created from the framework. The objective is to reduce the complexity and development time of multi-agent systems through the reuse of generic organizational abstractions.

  16. A unified approach to the design of clinical reporting systems.

    PubMed

    Gouveia-Oliveira, A; Salgado, N C; Azevedo, A P; Lopes, L; Raposo, V D; Almeida, I; de Melo, F G

    1994-12-01

    Computer-based Clinical Reporting Systems (CRS) for diagnostic departments that use structured data entry have a number of functional and structural affinities suggesting that a common software architecture for CRS may be defined. Such an architecture should allow easy expandability and reusability of a CRS. We report the development methodology and the architecture of SISCOPE, a CRS originally designed for gastrointestinal endoscopy that is expandable and reusable. Its main components are a patient database, a knowledge base, a reports base, and screen and reporting engines. The knowledge base contains the description of the controlled vocabulary and all the information necessary to control the menu system, and is easily accessed and modified with a conventional text editor. The structure of the controlled vocabulary is formally presented as an entity-relationship diagram. The screen engine drives a dynamic user interface and the reporting engine automatically creates a medical report; both engines operate by following a set of rules and the information contained in the knowledge base. Clinical experience has shown this architecture to be highly flexible and to allow frequent modifications of both the vocabulary and the menu system. This structure provided increased collaboration among development teams, insulating the domain expert from the details of the database, and enabling him to modify the system as necessary and to test the changes immediately. The system has also been reused in several different domains.
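
    The separation the authors describe, in which generic engines follow rules plus a knowledge base that the domain expert can edit as plain text, can be sketched as follows. The vocabulary, template format, and function names are hypothetical illustrations, not SISCOPE's actual file format or API.

```python
# Knowledge base: editable data kept separate from the engines that use it,
# so the domain expert can change vocabulary without touching engine code.
knowledge_base = {
    "findings": {
        "erosion": "An erosion was observed in the {site}.",
        "polyp": "A polyp was found in the {site}.",
    },
    "sites": ["esophagus", "stomach", "duodenum"],
}

def reporting_engine(kb, selections):
    """Turns structured menu selections (finding, site) into report
    sentences by following the templates in the knowledge base; adding a
    new finding or site requires no change to this engine."""
    sentences = []
    for finding, site in selections:
        template = kb["findings"][finding]
        sentences.append(template.format(site=site))
    return " ".join(sentences)
```

    In this arrangement the engine is the reusable part: pointing it at a knowledge base for a different diagnostic domain is what lets the same architecture be reused, as the abstract reports for SISCOPE.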

  17. Architecture-driven reuse of code in KASE

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay

    1993-01-01

    In order to support the synthesis of large, complex software systems, we need to focus on issues pertaining to the architectural design of a system in addition to algorithm and data structure design. An approach that is based on abstracting the architectural design of a set of problems in the form of a generic architecture, and providing tools that can be used to instantiate the generic architecture for specific problem instances is presented. Such an approach also facilitates reuse of code between different systems belonging to the same problem class. An application of our approach on a realistic problem is described; the results of the exercise are presented; and how our approach compares to other work in this area is discussed.

  18. Analysis of Defenses Against Code Reuse Attacks on Modern and New Architectures

    DTIC Science & Technology

    2015-09-01

    soundness or completeness. An incomplete analysis will produce extra edges in the CFG that might allow an attacker to slip through. An unsound analysis...Analysis of Defenses Against Code Reuse Attacks on Modern and New Architectures by Isaac Noah Evans Submitted to the Department of Electrical...Engineering and Computer Science in partial fulfillment of the requirements for the degree of Master of Engineering in Electrical Engineering and Computer

  19. Generating target system specifications from a domain model using CLIPS

    NASA Technical Reports Server (NTRS)

    Sugumaran, Vijayan; Gomaa, Hassan; Kerschberg, Larry

    1991-01-01

    The quest for reuse in software engineering is still being pursued and researchers are actively investigating the domain modeling approach to software construction. There are several domain modeling efforts reported in the literature and they all agree that the components that are generated from domain modeling are more conducive to reuse. Once a domain model is created, several target systems can be generated by tailoring the domain model or by evolving the domain model and then tailoring it according to the specified requirements. This paper presents the Evolutionary Domain Life Cycle (EDLC) paradigm in which a domain model is created using multiple views, namely, aggregation hierarchy, generalization/specialization hierarchies, object communication diagrams and state transition diagrams. The architecture of the Knowledge Based Requirements Elicitation Tool (KBRET) which is used to generate target system specifications is also presented. The preliminary version of KBRET is implemented in the C Language Integrated Production System (CLIPS).

  20. Case Study: Using The OMG SWRADIO Profile and SDR Forum Input for NASA's Space Telecommunications Radio System

    NASA Technical Reports Server (NTRS)

    Briones, Janette C.; Handler, Louis M.; Hall, Steve C.; Reinhart, Richard C.; Kacpura, Thomas J.

    2009-01-01

    The Space Telecommunications Radio System (STRS) standard is a Software Defined Radio (SDR) architecture standard developed by NASA. The goal of STRS is to reduce NASA's dependence on custom, proprietary architectures with unique and varying interfaces and hardware, and to support reuse of waveforms across platforms. The STRS project worked with members of the Object Management Group (OMG), the SDR Forum, and industry partners to leverage existing standards and knowledge. This collaboration included investigating the use of the OMG's Platform-Independent Model (PIM) SWRadio as the basis for an STRS PIM. This paper details the influence of the OMG technologies on the STRS update effort and the findings of the STRS/SWRadio mapping, and provides a summary of the SDR Forum recommendations.

  1. Effective domain-dependent reuse in medical knowledge bases.

    PubMed

    Dojat, M; Pachet, F

    1995-12-01

    Knowledge reuse is now a critical issue for most developers of medical knowledge-based systems. As a rule, reuse is addressed from an ambitious, knowledge-engineering perspective that focuses on reusable general-purpose knowledge modules, concepts, and methods. However, such a general goal fails to take into account the specific aspects of medical practice. From the point of view of the knowledge engineer, whose goal is to capture the specific features and intricacies of a given domain, this approach addresses the wrong level of generality. In this paper, we adopt a more pragmatic viewpoint, introducing the less ambitious goal of "domain-dependent limited reuse" and suggesting effective means of achieving it in practice. In a knowledge representation framework combining objects and production rules, we propose three mechanisms emerging from the combination of object-oriented programming and rule-based programming. We show that these mechanisms contribute to achieving limited reuse and to introducing useful limited variations in medical expertise.

  2. Sensor Open System Architecture (SOSA) evolution for collaborative standards development

    NASA Astrophysics Data System (ADS)

    Collier, Charles Patrick; Lipkin, Ilya; Davidson, Steven A.; Baldwin, Rusty; Orlovsky, Michael C.; Ibrahim, Tim

    2017-04-01

    The Sensor Open System Architecture (SOSA) is a C4ISR-focused technical and economic collaborative effort between the Air Force, Navy, Army, the Department of Defense (DoD), industry, and other government agencies to develop (and incorporate) a technical Open Systems Architecture standard in order to maximize C4ISR sub-system, system, and platform affordability, re-configurability, and hardware/software/firmware re-use. The SOSA effort will effectively create an operational and technical framework for the integration of disparate payloads into C4ISR systems, with a focus on the development of a modular decomposition (defining functions and behaviors) and associated key interfaces (physical and logical) for a common multi-purpose architecture for radar, EO/IR, SIGINT, EW, and communications. SOSA addresses hardware, software, and mechanical/electrical interfaces. The modular decomposition will produce a set of re-usable components, interfaces, and sub-systems that engender reusable capabilities. This, in effect, creates a realistic and affordable ecosystem enabling mission effectiveness through systematic re-use of all available re-composed hardware, software, and electrical/mechanical base components and interfaces. To this end, SOSA will leverage existing standards as much as possible and evolve the SOSA architecture through modification, reuse, and enhancement to achieve C4ISR goals. This paper will present accomplishments over the first year of the SOSA initiative.

  3. Calculating Reuse Distance from Source Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayanan, Sri Hari Krishna; Hovland, Paul

    The efficient use of a system is of paramount importance in high-performance computing. Applications need to be engineered for future systems even before the architecture of such a system is clearly known. Static performance analysis that generates performance bounds is one way to approach the task of understanding application behavior. Performance bounds provide an upper limit on the performance of an application on a given architecture. Predicting cache hierarchy behavior and accesses to main memory is a requirement for accurate performance bounds. This work presents our static reuse distance algorithm to generate reuse distance histograms. We then use these histograms to predict cache miss rates. Experimental results for the kernels studied show that the approach is accurate.
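
    As a point of reference for the reuse-distance metric this abstract relies on, a dynamic (trace-based) computation of LRU stack distances and the resulting miss-rate estimate can be sketched as below. This illustrates the metric itself, not the authors' static, source-level analysis.

```python
from collections import Counter

def reuse_distances(trace):
    """LRU stack distance for each access: the number of distinct addresses
    touched since the previous access to the same address (None = cold)."""
    stack = []       # addresses ordered least- to most-recently used
    distances = []
    for addr in trace:
        if addr in stack:
            pos = stack.index(addr)
            distances.append(len(stack) - 1 - pos)
            stack.pop(pos)
        else:
            distances.append(None)   # first touch: cold miss
        stack.append(addr)
    return distances

def predicted_miss_rate(trace, cache_lines):
    """For a fully associative LRU cache, an access hits iff its reuse
    distance is less than the cache size, so the histogram of distances
    determines the miss rate."""
    hist = Counter(reuse_distances(trace))
    misses = sum(n for d, n in hist.items()
                 if d is None or d >= cache_lines)
    return misses / len(trace)
```

    For example, the trace a, b, a, c, b, a has distances (cold, cold, 1, cold, 2, 2), so a 3-line cache misses only on the three cold accesses. A static analysis such as the one in this record derives these histograms from source-level loop structure instead of an executed trace.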

  4. On the formalization and reuse of scientific research.

    PubMed

    King, Ross D; Liakata, Maria; Lu, Chuan; Oliver, Stephen G; Soldatova, Larisa N

    2011-10-07

    The reuse of scientific knowledge obtained from one investigation in another investigation is basic to the advance of science. Scientific investigations should therefore be recorded in ways that promote the reuse of the knowledge they generate. The use of logical formalisms to describe scientific knowledge has potential advantages in facilitating such reuse. Here, we propose a formal framework for using logical formalisms to promote reuse. We demonstrate the utility of this framework by using it in a worked example from biology: demonstrating cycles of investigation formalization [F] and reuse [R] to generate new knowledge. We first used logic to formally describe a Robot scientist investigation into yeast (Saccharomyces cerevisiae) functional genomics [f(1)]. With Robot scientists, unlike human scientists, the production of comprehensive metadata about their investigations is a natural by-product of the way they work. We then demonstrated how this formalism enabled the reuse of the research in investigating yeast phenotypes [r(1) = R(f(1))]. This investigation found that the removal of non-essential enzymes generally resulted in enhanced growth. The phenotype investigation was then formally described using the same logical formalism as the functional genomics investigation [f(2) = F(r(1))]. We then demonstrated how this formalism enabled the reuse of the phenotype investigation to investigate yeast systems-biology modelling [r(2) = R(f(2))]. This investigation found that yeast flux-balance analysis models fail to predict the observed changes in growth. Finally, the systems biology investigation was formalized for reuse in future investigations [f(3) = F(r(2))]. These cycles of reuse are a model for the general reuse of scientific knowledge.

  5. Open Architecture SDR for Space

    NASA Technical Reports Server (NTRS)

    Smith, Carl; Long, Chris; Liebetreu, John; Reinhart, Richard C.

    2005-01-01

    This paper describes an open-architecture SDR (software defined radio) infrastructure that is suitable for space-based operations (Space-SDR). SDR technologies will endow space and planetary exploration systems with dramatically increased capability, reduced power consumption, and significantly less mass than conventional systems, at costs reduced by vigorous competition, hardware commonality, dense integration, reduced obsolescence, interoperability, and software re-use. Significant progress has been recorded on developments like the Joint Tactical Radio System (JTRS) Software Communications Architecture (SCA), which is oriented toward reconfigurable radios for defense forces operating in multiple theaters of engagement. The JTRS-SCA presents a consistent software interface for waveform development, and facilitates interoperability, waveform portability, software re-use, and technology evolution.

  6. Architectural Blueprint for Plate Boundary Observatories based on interoperable Data Management Platforms

    NASA Astrophysics Data System (ADS)

    Kerschke, D. I.; Häner, R.; Schurr, B.; Oncken, O.; Wächter, J.

    2014-12-01

    Interoperable data management platforms play an increasing role in the advancement of knowledge and technology in many scientific disciplines. Through high quality services they support the establishment of efficient and innovative research environments. Well-designed research environments can facilitate the sustainable utilization, exchange, and re-use of scientific data and functionality by using standardized community models. Together with innovative 3D/4D visualization, these concepts provide added value in improving scientific knowledge-gain, even across the boundaries of disciplines. A project benefiting from the added value is the Integrated Plate boundary Observatory in Chile (IPOC). IPOC is a European-South American network to study earthquakes and deformation at the Chilean continental margin and to monitor the plate boundary system for capturing an anticipated great earthquake in a seismic gap. In contrast to conventional observatories that monitor individual signals only, IPOC captures a large range of different processes through various observation methods (e.g., seismographs, GPS, magneto-telluric sensors, creep-meter, accelerometer, InSAR). For IPOC a conceptual design has been devised that comprises an architectural blueprint for a data management platform based on common and standardized data models, protocols, and encodings as well as on an exclusive use of Free and Open Source Software (FOSS) including visualization components. Following the principles of event-driven service-oriented architectures, the design enables novel processes by sharing and re-using functionality and information on the basis of innovative data mining and data fusion technologies. This platform can help to improve the understanding of the physical processes underlying plate deformations as well as the natural hazards induced by them. 
Through the use of standards, this blueprint can be adapted not only for other plate observing systems (e.g., the European Plate Observing System, EPOS); it also supports integrated approaches that include sensor networks providing complementary observations for dynamic monitoring. Moreover, it will enable the integration of such observatories into superordinate research infrastructures (federations of virtual observatories).

  7. Viability of a Reusable In-Space Transportation System

    NASA Technical Reports Server (NTRS)

    Jefferies, Sharon A.; McCleskey, Carey M.; Nufer, Brian M.; Lepsch, Roger A.; Merrill, Raymond G.; North, David D.; Martin, John G.; Komar, David R.

    2015-01-01

    The National Aeronautics and Space Administration (NASA) is currently developing options for an Evolvable Mars Campaign (EMC) that expands human presence from Low Earth Orbit (LEO) into the solar system and to the surface of Mars. The Hybrid in-space transportation architecture is one option being investigated within the EMC. The architecture enables return of the entire in-space propulsion stage and habitat to cis-lunar space after a round trip to Mars. This concept of operations opens the door for a fully reusable Mars transportation system from cis-lunar space to a Mars parking orbit and back. This paper explores the reuse of in-space transportation systems, with a focus on the propulsion systems. It begins by examining why reusability should be pursued and defines reusability in a space-flight context. A range of functions and enablers associated with preparing a system for reuse are identified, and a vision for reusability is proposed that can be advanced and implemented as new capabilities are developed. Following this, past reusable spacecraft and servicing capabilities, as well as those currently in development, are discussed. Using the Hybrid transportation architecture as an example, an assessment of the degree of reusability that can be incorporated into the architecture with current capabilities is provided, and areas for development are identified that will enable greater levels of reuse in the future. Implications and implementation challenges specific to the architecture are also presented.

  8. Robotic Form-Finding and Construction Based on the Architectural Projection Logic

    NASA Astrophysics Data System (ADS)

    Zexin, Sun; Mei, Hongyuan

    2017-06-01

    In this article we analyze the relationship between architectural drawings and form-finding, and argue that architects should reuse and redefine traditional architectural drawings as a form-finding tool. We explain the projection systems and analyze how these systems have affected architectural design. We then use a robotic arm to carry out the experiment and establish a cylindrical-projection form-finding system.

  9. Reusing Design Knowledge Based on Design Cases and Knowledge Map

    ERIC Educational Resources Information Center

    Yang, Cheng; Liu, Zheng; Wang, Haobai; Shen, Jiaoqi

    2013-01-01

    Design knowledge was reused for innovative design work to support designers with product design knowledge and help designers who lack rich experiences to improve their design capacity and efficiency. First, based on the ontological model of product design knowledge constructed by taxonomy, implicit and explicit knowledge was extracted from some…

  10. AMPHION: Specification-based programming for scientific subroutine libraries

    NASA Technical Reports Server (NTRS)

    Lowry, Michael; Philpot, Andrew; Pressburger, Thomas; Underwood, Ian; Waldinger, Richard; Stickel, Mark

    1994-01-01

    AMPHION is a knowledge-based software engineering (KBSE) system that guides a user in developing a diagram representing a formal problem specification. It then automatically implements a solution to this specification as a program consisting of calls to subroutines from a library. The diagram provides an intuitive domain-oriented notation for creating a specification that also facilitates reuse and modification. AMPHION's architecture is domain independent. AMPHION is specialized to an application domain by developing a declarative domain theory. Creating a domain theory is an iterative process that currently requires the joint expertise of domain experts and experts in automated formal methods for software development.

  11. STRS Compliant FPGA Waveform Development

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer; Downey, Joseph; Mortensen, Dale

    2008-01-01

    The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. The extension of STRS to the SSP hardware will promote easier waveform reconfiguration and reuse. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside an FPGA. An FPGA-based transmit waveform implementation of the proposed standard interfaces on a laboratory breadboard SDR will be discussed.

  12. Generic Software Architecture for Prognostics (GSAP) User Guide

    NASA Technical Reports Server (NTRS)

    Teubert, Christopher Allen; Daigle, Matthew John; Watkins, Jason; Sankararaman, Shankar; Goebel, Kai

    2016-01-01

    The Generic Software Architecture for Prognostics (GSAP) is a framework for applying prognostics. It makes applying prognostics easier by implementing many of the common elements across prognostic applications. The standard interface enables reuse of prognostic algorithms and models across systems using the GSAP framework.
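    The kind of reuse the abstract describes hinges on a shared interface that every prognostic model implements. A minimal sketch of that idea follows; the class and method names here are illustrative assumptions, not GSAP's actual API.

```python
# Sketch of a shared prognoser interface: any model implementing the same
# methods can be swapped into the framework without changing framework code.
# Names (Prognoser, remaining_useful_life) are hypothetical, not GSAP's.
from abc import ABC, abstractmethod

class Prognoser(ABC):
    @abstractmethod
    def remaining_useful_life(self, state: float) -> float: ...

class LinearDegradation(Prognoser):
    """Toy model: health decays at a constant rate per hour."""
    def __init__(self, rate_per_hour: float):
        self.rate = rate_per_hour

    def remaining_useful_life(self, state: float) -> float:
        return state / self.rate

def report(prognoser: Prognoser, state: float) -> str:
    # The framework touches only the shared interface, so models are reusable.
    return f"RUL: {prognoser.remaining_useful_life(state):.1f} h"

print(report(LinearDegradation(rate_per_hour=0.05), state=0.8))  # → RUL: 16.0 h
```

    Swapping in a different `Prognoser` subclass leaves `report` (standing in for the framework) untouched, which is the reuse property the abstract claims.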

  13. Termontography and DOGMA for Knowledge Engineering within PROLIX

    NASA Astrophysics Data System (ADS)

    de Baer, Peter; Meersman, Robert; Temmerman, Rita

    In this article, we describe our ongoing research to combine two approaches, i.e. Termontography and DOGMA, for knowledge engineering. Both approaches have in common that they mainly rely on natural language to describe meaning. Termontography is a special form of terminography that results in an ontologically structured terminological resource. DOGMA is an abbreviation of Developing Ontology Guided Mediation for Agents. The DOGMA approach results in a scalable and modular ontology that can easily be (re)used for different domains and applications. Both Termontography and DOGMA have already been used separately during several research projects. In this article we explain how both approaches are being combined within the PROLIX project, and what the advantages of this combination are. The goal of PROLIX is to develop an open, integrated reference architecture for process-oriented learning and information exchange.

  14. A New FPGA Architecture of FAST and BRIEF Algorithm for On-Board Corner Detection and Matching.

    PubMed

    Huang, Jingjin; Zhou, Guoqing; Zhou, Xiang; Zhang, Rongting

    2018-03-28

    Although some researchers have proposed Field Programmable Gate Array (FPGA) architectures for the Features from Accelerated Segment Test (FAST) and Binary Robust Independent Elementary Features (BRIEF) algorithms, these traditional architectures do not consider image data storage, so no image data can be reused by follow-up algorithms. This paper proposes a new FPGA architecture that considers the reuse of sub-image data. In the proposed architecture, a remainder-based method is first designed for reading the sub-image, and a FAST detector and a BRIEF descriptor are combined for corner detection and matching. Six pairs of satellite images with different textures, located in the Mentougou district, Beijing, China, are used to evaluate the performance of the proposed architecture. The Modelsim simulation results show that: (i) the proposed architecture is effective for reading sub-images from DDR3 at minimum cost; and (ii) the FPGA implementation is correct and efficient for corner detection and matching, with average matching rates for natural and artificial areas of approximately 67% and 83%, respectively, close to the PC results, while the FPGA processing is approximately 31 and 2.5 times faster than PC and GPU processing, respectively.
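    The matching step the abstract refers to reduces to nearest-neighbour search over binary descriptors under the Hamming distance; the FPGA pipelines an XOR/popcount per clock. A minimal software sketch of that computation, with toy 8-bit descriptors (real BRIEF descriptors are typically 256 bits):

```python
# BRIEF-style binary descriptor matching: brute-force nearest neighbour
# under the Hamming distance. Descriptor values below are toy examples.

def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_descriptors(query, train, max_dist=64):
    """For each query descriptor, find the closest train descriptor and
    keep the pair if it is within max_dist bits."""
    matches = []
    for qi, q in enumerate(query):
        best = min(range(len(train)), key=lambda ti: hamming(q, train[ti]))
        if hamming(q, train[best]) <= max_dist:
            matches.append((qi, best))
    return matches

query = [0b10110010, 0b01101100]
train = [0b10110011, 0b11110000, 0b01101110]
print(match_descriptors(query, train, max_dist=2))  # → [(0, 0), (1, 2)]
```

    The XOR-then-popcount inner loop is exactly what maps well onto FPGA logic, which is why BRIEF is a common choice for on-board matching.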

  15. An Ontology-Based Conceptual Model For Accumulating And Reusing Knowledge In A DMAIC Process

    NASA Astrophysics Data System (ADS)

    Nguyen, ThanhDat; Kifor, Claudiu Vasile

    2015-09-01

    DMAIC (Define, Measure, Analyze, Improve, and Control) is an important knowledge-based process used to enhance the quality of processes. However, DMAIC knowledge is difficult to access: conventional approaches face a problem in structuring and reusing it, mainly because DMAIC knowledge is not represented and organized systematically. In this article, we overcome this problem with a conceptual model that combines the DMAIC process, knowledge management, and ontology engineering. The main idea of our model is to use ontologies to represent the knowledge generated by each DMAIC phase. We build five knowledge bases for storing the knowledge of the DMAIC phases, with the support of appropriate information-technology tools and techniques. These knowledge bases make knowledge available to experts, managers, and web users during or after DMAIC execution, so that existing knowledge can be shared and reused.
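    The core structure described above is one knowledge base per DMAIC phase, written to during execution and read back for reuse. A plain-Python sketch of that organization (an illustration of the idea, not the paper's ontology model):

```python
# One knowledge store per DMAIC phase; facts recorded during a project
# can be retrieved later for sharing and reuse. Purely illustrative.
DMAIC_PHASES = ("Define", "Measure", "Analyze", "Improve", "Control")

class DMAICKnowledge:
    def __init__(self):
        self.bases = {phase: [] for phase in DMAIC_PHASES}

    def record(self, phase: str, fact: str) -> None:
        """Store a piece of knowledge under its originating phase."""
        if phase not in self.bases:
            raise ValueError(f"unknown DMAIC phase: {phase}")
        self.bases[phase].append(fact)

    def reuse(self, phase: str) -> list:
        """Return stored knowledge for a phase, e.g. for a later project."""
        return list(self.bases[phase])

kb = DMAICKnowledge()
kb.record("Measure", "cycle-time baseline: 4.2 days")
print(kb.reuse("Measure"))  # → ['cycle-time baseline: 4.2 days']
```

    The paper replaces the flat lists here with ontologies, which additionally give the stored facts typed relationships and make them queryable across phases.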

  16. An Architecture Based on Linked Data Technologies for the Integration and Reuse of OER in MOOCs Context

    ERIC Educational Resources Information Center

    Piedra, Nelson; Chicaiza, Janneth Alexandra; López, Jorge; Tovar, Edmundo

    2014-01-01

    The Linked Data initiative is considered as one of the most effective alternatives for creating global shared information spaces, it has become an interesting approach for discovering and enriching open educational resources data, as well as achieving semantic interoperability and re-use between multiple OER repositories. The notion of Linked Data…

  17. NASA JPL Distributed Systems Technology (DST) Object-Oriented Component Approach for Software Inter-Operability and Reuse

    NASA Technical Reports Server (NTRS)

    Hall, Laverne; Hung, Chaw-Kwei; Lin, Imin

    2000-01-01

    The purpose of this paper is to provide a description of NASA JPL Distributed Systems Technology (DST) Section's object-oriented component approach to open inter-operable systems software development and software reuse. It will address what is meant by the terminology object component software, give an overview of the component-based development approach and how it relates to infrastructure support of software architectures and promotes reuse, enumerate the benefits of this approach, and give examples of application prototypes demonstrating its usage and advantages. Utilization of the object-oriented component technology approach for system development and software reuse will apply to several areas within JPL, and possibly across other NASA Centers.

  18. Design Knowledge Management System (DKMS) Beta Test Report

    DTIC Science & Technology

    1992-11-01

    design process. These problems, which include knowledge representation, constraint propagation, model design, and information integration, are...effective delivery of life-cycle engineering knowledge assistance and information to the design/engineering activities. It does not matter whether these...platform. 4. Reuse -- existing data, information, and knowledge can be reused. 5. Remote Execution -- automatically handles remote execution without

  19. “Docs 'n Drugs” - A System for Case-Oriented and Web-based Training in Medicine

    PubMed Central

    Martens, A.; Bernauer, J.

    1999-01-01

    The tutoring process of conventional case-oriented medical training systems can be characterised as either guided or unguided. In contrast to that, the aim of the system “Docs'n Drugs” is to distinguish between different levels of guidance. The author can realise the tutoring case as a guided, a half-guided, or an unguided tutoring process. The system architecture distinguishes between an authoring system and a tutoring system, the foundations of which are the tutoring process model and the case-based knowledge model. This structure allows the reuse of elements of existing tutoring cases. The tutoring cases can be realised in German and English.

  20. Guidelines and Metrics for Assessing Space System Cost Estimates

    DTIC Science & Technology

    2008-01-01

    analysis time, reuse tooling, models, mechanical ground-support equipment [MGSE]) High mass margin (simplifying assumptions used to bound solution...engineering environment changes High reuse of architecture, design, tools, code, test scripts, and commercial real-time operating systems Simplified life...Coronal Explorer TWTA traveling wave tube amplifier USAF U.S. Air Force USCM Unmanned Space Vehicle Cost Model USN U.S. Navy UV ultraviolet UVOT UV

  1. CardioOp: an integrated approach to teleteaching in cardiac surgery.

    PubMed

    Friedl, R; Preisack, M; Schefer, M; Klas, W; Tremper, J; Rose, T; Bay, J; Albers, J; Engels, P; Guilliard, P; Vahl, C F; Hannekum, A

    2000-01-01

    The complexity of cardiac surgery requires continuous training, education and information addressing different individuals: physicians (cardiac surgeons, residents, anaesthesiologists, cardiologists), medical students, perfusionists and patients. Efficacy and efficiency of education and training will likely be improved by the use of multimedia information systems. Nevertheless, computer-based education faces some serious disadvantages: 1) multimedia productions require tremendous financial and time resources; 2) the obtained multimedia data are only usable for one specific target user group in one specific instructional context; 3) computer-based learning programs often show deficiencies in the support of individual learning styles and in providing information adjusted to the learner's individual needs. In this paper we describe a computer system providing multiple re-use of multimedia data in different instructional settings and flexible composition of content for different target user groups. The ZYX document model has been developed, allowing the modelling and flexible on-the-fly composition of multimedia fragments. It has been implemented as a DataBlade module in the object-relational database system Informix Dynamic Server and allows for presentation-neutral storage of multimedia content from the application domain, delivery and presentation of multimedia material, content-based retrieval, and re-use and composition of multimedia material for different instructional settings. Multimedia data stored in the repository, which can be processed and authored according to our identified needs, is created using a next-generation authoring environment called CardioOP-Wizard. High-quality intra-operative video is recorded using a video-robot. Difficult surgical procedures are visualized with generic and CT-based 3D-animations. An on-line architecture for multiple re-use and flexible composition of media data has been established.
The system contains the following instructional applications (prototypically implemented): a multimedia textbook on operative techniques, an interactive module for problem-based training, a module for the creation and presentation of lectures, and a module for patient information. Principles of cognitive psychology and knowledge management have been employed in the program. These instructional applications provide information ranging from basic knowledge at the beginner level, through procedural knowledge at the advanced level, to implicit knowledge at the professional level. For media annotation with metadata, a metainformation system, CardioOP-Clas, has been developed. The prototype focuses on aortocoronary bypass grafting and heart transplantation. The demonstrated system reflects an integrated approach, in terms of information technology and teaching, based on the multiple re-use and composition of stored media items for the individual user and the chosen educational setting on different instructional levels.

  2. Design, implementation, use, and preliminary evaluation of SEBASTIAN, a standards-based Web service for clinical decision support.

    PubMed

    Kawamoto, Kensaku; Lobach, David F

    2005-01-01

    Despite their demonstrated ability to improve care quality, clinical decision support systems are not widely used. In part, this limited use is due to the difficulty of sharing medical knowledge in a machine-executable format. To address this problem, we developed a decision support Web service known as SEBASTIAN. In SEBASTIAN, individual knowledge modules define the data requirements for assessing a patient, the conclusions that can be drawn using that data, and instructions on how to generate those conclusions. Using standards-based XML messages transmitted over HTTP, client decision support applications provide patient data to SEBASTIAN and receive patient-specific assessments and recommendations. SEBASTIAN has been used to implement four distinct decision support systems; an architectural overview is provided for one of these systems. Preliminary assessments indicate that SEBASTIAN fulfills all original design objectives, including the re-use of executable medical knowledge across diverse applications and care settings, the straightforward authoring of knowledge modules, and use of the framework to implement decision support applications with significant clinical utility.
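    The exchange pattern described above (patient data in as XML over HTTP, patient-specific assessment back out) can be sketched in a few lines. The element names and the sample rule below are hypothetical illustrations, not SEBASTIAN's actual message schema.

```python
# Sketch of a SEBASTIAN-style decision-support exchange: a knowledge module
# declares the data it needs, evaluates it, and returns an assessment as XML.
# Element names (patientData, monthsSinceHbA1c, assessment) are hypothetical.
import xml.etree.ElementTree as ET

def evaluate_module(patient_xml: str) -> str:
    """Toy knowledge module: recommend an HbA1c test if none in 6 months."""
    root = ET.fromstring(patient_xml)
    months = int(root.findtext("monthsSinceHbA1c"))
    rec = ET.Element("assessment")
    ET.SubElement(rec, "recommendation").text = (
        "Order HbA1c test" if months >= 6 else "No action needed")
    return ET.tostring(rec, encoding="unicode")

request = "<patientData><monthsSinceHbA1c>8</monthsSinceHbA1c></patientData>"
print(evaluate_module(request))
# → <assessment><recommendation>Order HbA1c test</recommendation></assessment>
```

    Because the contract is just "XML request in, XML assessment out," any client application in any care setting can call the same module, which is the reuse property the abstract highlights.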

  3. Using Best Practices to Extract, Organize, and Reuse Embedded Decision Support Content Knowledge Rules from Mature Clinical Systems.

    PubMed

    DesAutels, Spencer J; Fox, Zachary E; Giuse, Dario A; Williams, Annette M; Kou, Qing-Hua; Weitkamp, Asli; Patel, Neal R; Bettinsoli Giuse, Nunzia

    2016-01-01

    Clinical decision support (CDS) knowledge, embedded over time in mature medical systems, presents an interesting and complex opportunity for information organization, maintenance, and reuse. To have a holistic view of all decision support requires an in-depth understanding of each clinical system as well as expert knowledge of the latest evidence. This approach to clinical decision support presents an opportunity to unify and externalize the knowledge within rules-based decision support. Driven by an institutional need to prioritize decision support content for migration to new clinical systems, the Center for Knowledge Management and Health Information Technology teams applied their unique expertise to extract content from individual systems, organize it through a single extensible schema, and present it for discovery and reuse through a newly created Clinical Support Knowledge Acquisition and Archival Tool (CS-KAAT). CS-KAAT can build and maintain the underlying knowledge infrastructure needed by clinical systems.

  4. Reuse: A knowledge-based approach

    NASA Technical Reports Server (NTRS)

    Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui

    1992-01-01

    This paper describes our research in automating the reuse process through the use of application domain models. Application domain models are explicit formal representations of the application knowledge necessary to understand, specify, and generate application programs. Furthermore, they provide a unified repository for the operational structure, rules, policies, and constraints of a specific application area. In our approach, domain models are expressed in terms of a transaction-based meta-modeling language. This paper has described in detail the creation and maintenance of hierarchical structures. These structures are created through a process that includes reverse engineering of data models with supplementary enhancement from application experts. Source code is also reverse engineered but is not a major source of domain model instantiation at this time. In the second phase of the software synthesis process, program specifications are interactively synthesized from an instantiated domain model. These specifications are currently integrated into a manual programming process but will eventually be used to derive executable code with mechanically assisted transformations. This research is performed within the context of programming-in-the-large types of systems. Although our goals are ambitious, we are implementing the synthesis system in an incremental manner through which we can realize tangible results. The client/server architecture is capable of supporting 16 simultaneous X/Motif users and tens of thousands of attributes and classes. Domain models have been partially synthesized from five different application areas. As additional domain models are synthesized and additional knowledge is gathered, we will inevitably add to and modify our representation. However, our current experience indicates that it will scale and expand to meet our modeling needs.

  5. Progress in the Development of a Prototype Reuse Enablement System

    NASA Astrophysics Data System (ADS)

    Marshall, J. J.; Downs, R. R.; Gilliam, L. J.; Wolfe, R. E.

    2008-12-01

    An important part of promoting software reuse is to ensure that reusable software assets are readily available to the software developers who want to use them. Through dialogs with the community, the NASA Earth Science Data Systems Software Reuse Working Group has learned that the lack of a centralized, domain-specific software repository or catalog system addressing the needs of the Earth science community is a major barrier to software reuse within the community. The Working Group has proposed the creation of such a reuse enablement system, which would provide capabilities for contributing and obtaining reusable software, to remove this barrier. The Working Group has recommended the development of a Reuse Enablement System to NASA and has performed a trade study to review systems with similar capabilities and to identify potential platforms for the proposed system. This was followed by an architecture study to determine an expeditious and cost-effective solution for this system. A number of software packages and systems were examined, both by creating prototypes and by examining existing systems that use the same software packages and systems. Based on the results of the architecture study, the Working Group developed a prototype of the proposed system using the recommended software package, through an iterative process of identifying needed capabilities and improving the system to provide those capabilities. Policies for the operation and maintenance of the system are being established, and the identification of system policies has also contributed to the development process. Additionally, a test plan is being developed for formal testing of the prototype, to ensure that it meets all of the requirements previously developed by the Working Group. This poster summarizes the results of our work to date, focusing on the most recent activities.

  6. Integration into Big Data: First Steps to Support Reuse of Comprehensive Toxicity Model Modules (SOT)

    EPA Science Inventory

    Data surrounding the needs of human disease and toxicity modeling are largely siloed, limiting the ability to extend and reuse modules across knowledge domains. Using an infrastructure that supports integration across knowledge domains (animal toxicology, high-throughput screening...

  7. A SCORM Thin Client Architecture for E-Learning Systems Based on Web Services

    ERIC Educational Resources Information Center

    Casella, Giovanni; Costagliola, Gennaro; Ferrucci, Filomena; Polese, Giuseppe; Scanniello, Giuseppe

    2007-01-01

    In this paper we propose an architecture of e-learning systems characterized by the use of Web services and a suitable middleware component. These technical infrastructures allow us to extend the system with new services as well as to integrate and reuse heterogeneous software e-learning components. Moreover, they let us better support the…

  8. ASIC-based architecture for the real-time computation of 2D convolution with large kernel size

    NASA Astrophysics Data System (ADS)

    Shao, Rui; Zhong, Sheng; Yan, Luxin

    2015-12-01

    Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-to-large kernels. To improve the efficiency of on-chip storage resources and to reduce off-chip bandwidth, a data cache for reuse is constructed: multi-block SPRAM caches image blocks, and an on-chip ping-pong operation takes full advantage of data reuse in the convolution calculation, under a newly designed ASIC data-scheduling scheme and overall architecture. Experimental results show that the structure achieves real-time convolution operations with kernels up to 40×32, improves the utilization of on-chip memory bandwidth and on-chip memory resources, satisfies the conditions to maximize output data throughput, and reduces the need for off-chip memory bandwidth.
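    The data-reuse idea the abstract describes can be illustrated in software: keep only the last K image rows in a line buffer, so each pixel is fetched from (slow, off-chip) memory once instead of K×K times. A minimal sketch, with illustrative sizes:

```python
# 2-D convolution with a K-row line buffer, mimicking on-chip row caching:
# every image row is fetched from "off-chip" memory exactly once.
# (Valid-mode correlation-style windowing; a toy stand-in for the ASIC design.)

def conv2d_linebuffer(image, kernel):
    H, W = len(image), len(image[0])
    K = len(kernel)                # assumes a square K x K kernel
    out = []
    rows = []                      # line buffer holding the last K rows
    for y in range(H):
        rows.append(image[y])      # one off-chip fetch per row
        if len(rows) > K:
            rows.pop(0)            # oldest row leaves the buffer
        if len(rows) == K:
            out.append([
                sum(rows[i][x + j] * kernel[i][j]
                    for i in range(K) for j in range(K))
                for x in range(W - K + 1)])
    return out

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
box = [[1, 1], [1, 1]]             # 2x2 box filter: sums of 2x2 windows
print(conv2d_linebuffer(img, box))  # → [[12, 16], [24, 28]]
```

    In hardware the same buffering is what turns a bandwidth-bound problem into a compute-bound one; the paper's ping-pong SPRAM blocks extend this so that loading the next block overlaps with computing on the current one.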

  9. HANDBOOK ON THE BENEFITS, COSTS, AND IMPACTS OF LAND CLEANUP AND REUSE

    EPA Science Inventory

    Summarizes the theoretical and empirical literature addressing benefit-cost and impact assessment of the land cleanup and reuse scenario. When possible, recommendations are provided for conducting economic analysis of land cleanup and reuse sites and programs. The knowledge base ...

  10. Knowledge base methodology: Methodology for first Engineering Script Language (ESL) knowledge base

    NASA Technical Reports Server (NTRS)

    Peeris, Kumar; Izygon, Michel E.

    1992-01-01

    The primary goal of reusing software components is that software can be developed faster, cheaper, and with higher quality. However, reuse is not automatic and cannot just happen; it has to be carefully engineered. For example, a component needs to be easily understandable in order to be reused, and it also has to be malleable enough to fit into different applications. In fact, the software development process is deeply affected when reuse is applied. During component development, a serious effort has to be directed toward making components reusable. This implies defining reuse coding-style guidelines and applying them to any new component being created as well as to any old component being modified. These guidelines should point out favorable reuse features and may apply to naming conventions, module size and cohesion, internal documentation, etc. During application development, effort shifts from writing new code toward finding, and eventually modifying, existing pieces of code, then assembling them together. We see here that reuse is not free, and therefore has to be carefully managed.

  11. Using Best Practices to Extract, Organize, and Reuse Embedded Decision Support Content Knowledge Rules from Mature Clinical Systems

    PubMed Central

    DesAutels, Spencer J.; Fox, Zachary E.; Giuse, Dario A.; Williams, Annette M.; Kou, Qing-hua; Weitkamp, Asli; Patel, Neal R.; Bettinsoli Giuse, Nunzia

    2016-01-01

    Clinical decision support (CDS) knowledge, embedded over time in mature medical systems, presents an interesting and complex opportunity for information organization, maintenance, and reuse. To have a holistic view of all decision support requires an in-depth understanding of each clinical system as well as expert knowledge of the latest evidence. This approach to clinical decision support presents an opportunity to unify and externalize the knowledge within rules-based decision support. Driven by an institutional need to prioritize decision support content for migration to new clinical systems, the Center for Knowledge Management and Health Information Technology teams applied their unique expertise to extract content from individual systems, organize it through a single extensible schema, and present it for discovery and reuse through a newly created Clinical Support Knowledge Acquisition and Archival Tool (CS-KAAT). CS-KAAT can build and maintain the underlying knowledge infrastructure needed by clinical systems. PMID:28269846

  12. Evolution of the Standard Simulation Architecture

    DTIC Science & Technology

    2004-06-01

    Interoperability Workshop, Paper 02S-SIW-016. 5. Deitel & Deitel, 2001. “C++ How to Program, Third Edition.” Prentice Hall, Inc., Upper Saddle...carefully followed for software to be successfully reused in other programs. The Standard Simulation Architecture (SSA) promotes these principles...capabilities described in this paper have been developed and successfully used on various government programs. This paper attempts to bring these

  13. "Banca del Fare" Summer School in Alta Langa: «The Ruins to Be Rebuilt Will Be Our Classrooms». Knowledge from Artisans to New Generations, from Ancient Skills to New Building Techniques

    NASA Astrophysics Data System (ADS)

    Villata, M.

    2017-05-01

    "Banca del fare" is an ambitious project proposed by the "Cultural Park Alta Langa". It was created to hand ancient knowledge down to young people, as a meeting place for exchanging newly developed construction techniques alongside traditional ones. A program of educational workshops, which constitute the summer school, was organized to increase communication among different generations. Indeed, the last local craftsmen and artisans are retiring, and there is no training process to ensure the migration of their knowledge to young architects. The activities of the school took place for the first time during the summer of 2016 in Alta Langa, the southern part of the Langhe in Piedmont. The landscape of this area is marked by small rural buildings called "ciabòts" scattered all over the countryside. Every year, artisans and students work together to recover these buildings. The aim of this valorization of the landscape heritage is to link the restored ciabòts into a network, in order to create a widespread hotel system. The essay therefore presents the results of "Banca del fare" and suggests a GIS project that can gather information about the numerous ciabòts spread across this territory. The interaction between land development and the networking process can ensure the optimal reuse of these rural buildings.

  14. A Core Plug and Play Architecture for Reusable Flight Software Systems

    NASA Technical Reports Server (NTRS)

    Wilmot, Jonathan

    2006-01-01

    The Flight Software Branch, at Goddard Space Flight Center (GSFC), has been working on a run-time approach to facilitate a formal software reuse process. The reuse process is designed to enable rapid development and integration of high-quality software systems and to more accurately predict development costs and schedule. Previous reuse practices have been somewhat successful when the same teams are moved from project to project. But this typically requires taking the software system in an all-or-nothing approach where useful components cannot be easily extracted from the whole. As a result, the system is less flexible and scalable with limited applicability to new projects. This paper will focus on the rationale behind, and the implementation of, the run-time executive. This executive is the core for the component-based flight software commonality and reuse process adopted at Goddard.

  15. CARDS: A blueprint and environment for domain-specific software reuse

    NASA Technical Reports Server (NTRS)

    Wallnau, Kurt C.; Solderitsch, Anne Costa; Smotherman, Catherine

    1992-01-01

    CARDS (Central Archive for Reusable Defense Software) exploits advances in domain analysis and domain modeling to identify, specify, develop, archive, retrieve, understand, and reuse domain-specific software components. An important element of CARDS is to provide visibility into the domain model artifacts produced by, and services provided by, commercial computer-aided software engineering (CASE) technology. The use of commercial CASE technology is important to provide rich, robust support for the varied roles involved in a reuse process. We refer to this kind of use of knowledge representation systems as supporting 'knowledge-based integration.'

  16. User Interface Technology for Formal Specification Development

    NASA Technical Reports Server (NTRS)

    Lowry, Michael; Philpot, Andrew; Pressburger, Thomas; Underwood, Ian; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    Formal specification development and modification are an essential component of the knowledge-based software life cycle. User interface technology is needed to empower end-users to create their own formal specifications. This paper describes the advanced user interface for AMPHION, a knowledge-based software engineering system that targets scientific subroutine libraries. AMPHION is a generic, domain-independent architecture that is specialized to an application domain through a declarative domain theory. Formal specification development and reuse are made accessible to end-users through an intuitive graphical interface that provides semantic guidance in creating diagrams denoting formal specifications in an application domain. The diagrams also serve to document the specifications. Automatic deductive program synthesis ensures that end-user specifications are correctly implemented. The tables that drive AMPHION's user interface are automatically compiled from a domain theory; portions of the interface can be customized by the end-user. The user interface facilitates formal specification development by hiding syntactic details, such as logical notation. It also turns some of the barriers to end-user specification development associated with strongly typed formal languages into active sources of guidance, without restricting advanced users. The interface is especially suited to specification modification. AMPHION has been applied to the domain of solar system kinematics through the development of a declarative domain theory. Six months of testing with planetary scientists indicates that AMPHION's interactive specification acquisition paradigm enables users to develop, modify, and reuse specifications at least an order of magnitude more rapidly than manual program development.

  17. System-Level Reuse of Space Systems Simulations

    NASA Technical Reports Server (NTRS)

    Hazen, Michael R.; Williams, Joseph C.

    2004-01-01

    One of the best ways to enhance space systems simulation fidelity is to leverage (reuse) existing high-fidelity simulations. But what happens when the model you would like to reuse is in a different coding language, or other barriers arise that make one want to just start over with a clean sheet of paper? Three diverse system-level simulation reuse case studies are described, based on experience to date in the development of NASA's Space Station Training Facility (SSTF) at the Johnson Space Center in Houston, Texas. Case studies include (a) the Boeing/Rocketdyne-provided Electrical Power Simulation (EPSIM), (b) the NASA Automation and Robotics Division-provided TRICK robotics systems model, and (c) the Russian Space Agency-provided Russian Segment Trainer. In each case, there was an initial tendency to dismiss simulation reuse candidates based on an apparent lack of suitability. A more careful examination, based on a more structured assessment of architectural and requirements-oriented representations of the reuse candidates, revealed significant reuse potential. Specific steps used to conduct the detailed assessments are discussed: 1) identifying reuse candidates; 2) requirements compatibility assessment; 3) maturity assessment; 4) life-cycle cost determination; and 5) risk assessment. Observations and conclusions are presented related to the real cost of system-level simulation component reuse. Finally, lessons learned that relate to maximizing the benefits of space systems simulation reuse are shared. These concepts should be directly applicable to the development of space systems simulations in the future.

  18. Knowledge-based reusable software synthesis system

    NASA Technical Reports Server (NTRS)

    Donaldson, Cammie

    1989-01-01

    The Eli system, a knowledge-based reusable software synthesis system, is being developed for NASA Langley under a Phase 2 SBIR contract. Named after Eli Whitney, the inventor of interchangeable parts, Eli assists engineers of large-scale software systems in reusing components while they are composing their software specifications or designs. Eli will identify reuse potential, search for components, select component variants, and synthesize components into the developer's specifications. The Eli project began as a Phase 1 SBIR to define a reusable software synthesis methodology that integrates reusability into the top-down development process and to develop an approach for an expert system to promote and accomplish reuse. The objectives of the Eli Phase 2 work are to integrate advanced technologies to automate the development of reusable components within the context of large system developments, to integrate with user development methodologies without significant changes in method or learning of special languages, and to make reuse the easiest operation to perform. Eli will try to address a number of reuse problems including developing software with reusable components, managing reusable components, identifying reusable components, and transitioning reuse technology. Eli is both a library facility for classifying, storing, and retrieving reusable components and a design environment that emphasizes, encourages, and supports reuse.

  19. The NASA Navigator Program Ground Based Archives at the Michelson Science Center: Supporting the Search for Habitable Planets

    NASA Astrophysics Data System (ADS)

    Berriman, G. B.; Ciardi, D. R.; Good, J. C.; Laity, A. C.; Zhang, A.

    2006-07-01

    At ADASS XIV, we described how the W. M. Keck Observatory Archive (KOA) re-uses and extends the component-based architecture of the NASA/IPAC Infrared Science Archive (IRSA) to ingest and serve level 0 observations made with HIRES, the High Resolution Echelle Spectrometer. Since August 18, the KOA has ingested 325 GB of data from 135 nights of observations. The architecture exploits a service layer between the mass storage layer and the user interface. This service layer consists of standalone utilities, called through a simple executive, that perform generic query and retrieval functions such as query generation, database table sub-setting, and return page generation. It has been extended to implement proprietary access to data through deployment of query management middleware developed for the National Virtual Observatory. The MSC archives have recently extended this design to query and retrieve complex data sets describing the properties of potential target stars for the Terrestrial Planet Finder (TPF) missions. The archives can now support knowledge-based retrieval as well as data retrieval. This paper describes how extensions to the IRSA architecture, which is applicable across all wavelengths and astronomical datatypes, support the design and development of the MSC NP archives at modest cost.

  20. The Generalized Support Software (GSS) Domain Engineering Process: An Object-Oriented Implementation and Reuse Success at Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Condon, Steven; Hendrick, Robert; Stark, Michael E.; Steger, Warren

    1997-01-01

    The Flight Dynamics Division (FDD) of NASA's Goddard Space Flight Center (GSFC) recently embarked on a far-reaching revision of its process for developing and maintaining satellite support software. The new process relies on an object-oriented software development method supported by a domain-specific library of generalized components. This Generalized Support Software (GSS) Domain Engineering Process is currently in use at the NASA GSFC Software Engineering Laboratory (SEL). The key facets of the GSS process are (1) an architecture for rapid deployment of FDD applications, (2) a reuse asset library for FDD classes, and (3) a paradigm shift from developing software to configuring software for mission support. This paper describes the GSS architecture and process, results of fielding the first applications, lessons learned, and future directions.

  1. An efficient interpolation filter VLSI architecture for HEVC standard

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard, High-Efficiency Video Coding (HEVC), is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40% of the total encoding time and thus results in high computational complexity. Aiming to support 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. First, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed; it saves 19.7% of processing time on average with acceptable coding-quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware implementation area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture supports real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
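    For context, the half-pixel positions that such hardware interpolates are produced by HEVC's standard 8-tap luma filter. The sketch below shows that filter in plain Python for a single one-stage interpolation (bit-depth clipping and the two-stage 2-D case are omitted for brevity):

    ```python
    # HEVC 8-tap luma half-sample interpolation filter coefficients,
    # normalized by 64 (i.e. a right shift by 6 after rounding).
    HALF_PEL_TAPS = [-1, 4, -11, 40, 40, -11, 4, -1]

    def half_pel(samples, i):
        """Interpolate the half-pixel value between samples[i] and samples[i+1]."""
        acc = sum(c * samples[i - 3 + k] for k, c in enumerate(HALF_PEL_TAPS))
        return (acc + 32) >> 6          # round and normalize by 64

    row = [10, 10, 10, 10, 20, 20, 20, 20]
    print(half_pel(row, 3))             # → 15, between the 10s and the 20s
    ```

    An 8-pixel interpolation unit in hardware evaluates several such windows in parallel from one block of loaded samples, which is where the data-path reuse and memory-organization savings described in the abstract come from.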

  2. International developments in openEHR archetypes and templates.

    PubMed

    Leslie, Heather

    Electronic Health Records (EHRs) are a complex knowledge domain. The ability to design EHRs to cope with the changing nature of health knowledge, and to be shareable, has been elusive. A recent pilot study tested the applicability of CEN 13606 as an electronic health record standard. Using openEHR archetypes and tools, 650 clinical content specifications (archetypes) were created (e.g. for blood pressure) and re-used across all clinical specialties and contexts. Groups of archetypes were aggregated in templates to support clinical information gathering or viewing (e.g. 80 separate archetypes make up the routine antenatal visit record). Over 60 templates were created for use in the emergency department, antenatal care and delivery of an infant, and paediatric hearing loss assessment. The primary goal is to define a logical clinical record architecture for the NHS, but potentially, with archetypes as the keystone, shareable EHRs will also be attainable. Archetype and template development work is ongoing, with associated evaluation occurring in parallel.

  3. A reference architecture for the component factory

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Caldiera, Gianluigi; Cantone, Giovanni

    1992-01-01

    Software reuse can be achieved through an organization that focuses on utilization of life cycle products from previous developments. The component factory is both an example of the more general concepts of experience and domain factory and an organizational unit worth being considered independently. The critical features of such an organization are flexibility and continuous improvement. In order to achieve these features we can represent the architecture of the factory at different levels of abstraction and define a reference architecture from which specific architectures can be derived by instantiation. A reference architecture is an implementation and organization independent representation of the component factory and its environment. The paper outlines this reference architecture, discusses the instantiation process, and presents some examples of specific architectures by comparing them in the framework of the reference model.

  4. A Design Rationale Capture Tool to Support Design Verification and Re-use

    NASA Technical Reports Server (NTRS)

    Hooey, Becky Lee; Da Silva, Jonny C.; Foyle, David C.

    2012-01-01

    A design rationale tool (DR tool) was developed to capture design knowledge to support design verification and design knowledge re-use. The design rationale tool captures design drivers and requirements, and documents the design solution including: intent (why it is included in the overall design); features (why it is designed the way it is); information about how the design components support design drivers and requirements; and, design alternatives considered but rejected. For design verification purposes, the tool identifies how specific design requirements were met and instantiated within the final design, and which requirements have not been met. To support design re-use, the tool identifies which design decisions are affected when design drivers and requirements are modified. To validate the design tool, the design knowledge from the Taxiway Navigation and Situation Awareness (T-NASA; Foyle et al., 1996) system was captured and the DR tool was exercised to demonstrate its utility for validation and re-use.
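    The kind of traceability such a tool maintains can be sketched as follows (a minimal illustration with invented requirement IDs and decision names, not the DR tool's actual schema): design decisions are linked to the requirements they satisfy, so one can ask which decisions are affected when a requirement changes, and which requirements remain unmet.

    ```python
    # Hypothetical design-rationale store: each decision records which
    # requirements it satisfies and which alternatives were rejected.
    decisions = {
        "moving-map display": {"satisfies": {"R1", "R2"},
                               "rejected_alternatives": ["paper chart"]},
        "audio alerts":       {"satisfies": {"R3"},
                               "rejected_alternatives": []},
    }
    requirements = {"R1", "R2", "R3", "R4"}

    def affected_by(req):
        """Decisions to revisit if requirement `req` is modified (re-use support)."""
        return [d for d, info in decisions.items() if req in info["satisfies"]]

    def unmet():
        """Requirements not instantiated by any decision (verification support)."""
        met = set().union(*(info["satisfies"] for info in decisions.values()))
        return requirements - met

    print(affected_by("R2"))   # ['moving-map display']
    print(unmet())             # {'R4'}
    ```

    The two queries correspond directly to the tool's two stated purposes: design verification (find unmet requirements) and design re-use (find decisions impacted by a changed driver).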

  5. A web service for service composition to aid geospatial modelers

    NASA Astrophysics Data System (ADS)

    Bigagli, L.; Santoro, M.; Roncella, R.; Mazzetti, P.

    2012-04-01

    The identification of appropriate mechanisms for process reuse, chaining and composition is considered a key enabler for the effective uptake of a global Earth Observation infrastructure, currently pursued by the international geospatial research community. In the Earth and Space Sciences, such a facility could primarily enable integrated and interoperable modeling, for which several approaches have been proposed and developed over recent years. In fact, GEOSS is specifically tasked with the development of the so-called "Model Web". At increasing levels of abstraction and generalization, the initial stove-pipe software tools have evolved into community-wide modeling frameworks and Component-Based Architecture solutions, and have more recently started to embrace Service-Oriented Architecture technologies, such as the OGC WPS specification and the WS-* stack of W3C standards for service composition. However, so far the level of abstraction seems too low for implementing the Model Web vision, and far too complex technological aspects must still be addressed by both providers and users, resulting in limited usability and, eventually, difficult uptake. In line with the recent ICT trend of resource virtualization, it has been suggested that users in need of a particular processing capability, required by a given modeling workflow, may benefit from outsourcing the composition activities to an external first-class service, according to the Composition as a Service (CaaS) approach. A CaaS system provides the necessary interoperability service framework for adaptation, reuse and complementation of existing processing resources (including models and geospatial services in general) in the form of executable workflows. This work introduces the architecture of a CaaS system: a distributed information system for creating, validating, editing, storing, publishing, and executing geospatial workflows. In this way, users are freed from the need for a composition infrastructure, alleviated from the technicalities of workflow definition (type matching, identification of external service endpoints, binding issues, etc.), and can focus on their intended application. Moreover, a user may submit an incomplete workflow definition and leverage CaaS recommendations (which may derive from an aggregated knowledge base of user feedback, underpinned by Web 2.0 technologies) to execute it. This is of particular interest for multidisciplinary scientific contexts, where different communities may benefit from each other's knowledge through model chaining. Indeed, the CaaS approach is presented as an attempt to combine the recent advances in service-oriented computing with collaborative research principles, and with social network information in general. Arguably, it may be considered a fundamental capability of the Model Web. The CaaS concept is being investigated in several application scenarios identified in the FP7 UncertWeb and EuroGEOSS projects. Key aspects of the described CaaS solution are: it provides a standard WPS interface for invoking Business Processes and allows on-the-fly recursive composition of Business Processes into other Composite Processes; and it is designed according to the extended SOA (broker-based) and System-of-Systems approaches, to support the reuse and integration of existing resources, in compliance with the GEOSS Model Web architecture. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 248488.
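    The recursive composition of processes into Composite Processes described in this abstract is essentially the Composite pattern: a workflow exposes the same execution interface as an atomic process, so it can itself be nested into larger workflows. A minimal sketch, with invented process names (not the CaaS or WPS API):

    ```python
    # Atomic process: wraps a single processing step behind a uniform interface.
    class Process:
        def __init__(self, name, fn):
            self.name, self.fn = name, fn

        def execute(self, data):
            return self.fn(data)

    # Composite process: chains sub-processes, yet is itself a Process,
    # so composites can be nested recursively.
    class CompositeProcess(Process):
        def __init__(self, name, steps):
            self.name, self.steps = name, steps

        def execute(self, data):
            for step in self.steps:          # each step's output feeds the next
                data = step.execute(data)
            return data

    reproject = Process("reproject", lambda d: d + ["reprojected"])
    interpolate = Process("interpolate", lambda d: d + ["interpolated"])
    preprocess = CompositeProcess("preprocess", [reproject, interpolate])
    model_run = CompositeProcess("model run",
                                 [preprocess,
                                  Process("simulate", lambda d: d + ["simulated"])])
    print(model_run.execute([]))   # ['reprojected', 'interpolated', 'simulated']
    ```

    Because `model_run` and `preprocess` satisfy the same interface as an atomic process, a CaaS-style broker can publish either one as a new invocable process without clients knowing whether it is atomic or composite.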

  6. Design of a lattice-based faceted classification system

    NASA Technical Reports Server (NTRS)

    Eichmann, David A.; Atkins, John

    1992-01-01

    We describe a software reuse architecture supporting component retrieval by facet classes. The facets are organized into a lattice of facet sets and facet n-tuples. The query mechanism supports precise retrieval and flexible browsing.
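    Faceted retrieval of components can be sketched as a toy (invented facet names and components, not the authors' lattice implementation): each component is classified by an n-tuple of facet values, and a precise query returns the components matching every facet it specifies, while omitting facets relaxes the query for browsing.

    ```python
    # Hypothetical faceted classification of reusable components:
    # each component is described by a tuple of facet values.
    components = {
        "quicksort.c":  {"function": "sort",   "medium": "array", "language": "C"},
        "mergesort.py": {"function": "sort",   "medium": "list",  "language": "Python"},
        "btree.c":      {"function": "search", "medium": "tree",  "language": "C"},
    }

    def retrieve(**facets):
        """Return components matching every given facet; fewer facets = browsing."""
        return sorted(name for name, cls in components.items()
                      if all(cls.get(f) == v for f, v in facets.items()))

    print(retrieve(function="sort"))                 # ['mergesort.py', 'quicksort.c']
    print(retrieve(function="sort", language="C"))   # ['quicksort.c']
    ```

    Organizing the facet sets into a lattice, as the abstract describes, additionally lets a browser generalize or specialize a query along the lattice's ordering rather than only by dropping facets.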

  7. From scenarios to domain models: processes and representations

    NASA Astrophysics Data System (ADS)

    Haddock, Gail; Harbison, Karan

    1994-03-01

    The domain specific software architectures (DSSA) community has defined a philosophy for the development of complex systems. This philosophy improves productivity and efficiency by increasing the user's role in the definition of requirements, increasing the systems engineer's role in the reuse of components, and decreasing the software engineer's role to the development of new components and component modifications only. The scenario-based engineering process (SEP), the first instantiation of the DSSA philosophy, has been adopted by the next generation controller project. It is also the chosen methodology of the trauma care information management system project, and the surrogate semi-autonomous vehicle project. SEP uses scenarios from the user to create domain models and define the system's requirements. Domain knowledge is obtained from a variety of sources including experts, documents, and videos. This knowledge is analyzed using three techniques: scenario analysis, task analysis, and object-oriented analysis. Scenario analysis results in formal representations of selected scenarios. Task analysis of the scenario representations results in descriptions of tasks necessary for object-oriented analysis and also subtasks necessary for functional system analysis. Object-oriented analysis of task descriptions produces domain models and system requirements. This paper examines the representations that support the DSSA philosophy, including reference requirements, reference architectures, and domain models. The processes used to create and use the representations are explained through use of the scenario-based engineering process. Selected examples are taken from the next generation controller project.

  8. Application of Various NDT Methods for the Evaluation of Building Steel Structures for Reuse

    PubMed Central

    Fujita, Masanori; Masuda, Tomoya

    2014-01-01

    The reuse system proposed by the authors is an overall business system for realizing a cyclic reuse flow through the processes of design, fabrication, construction, maintenance, demolition and storage. The reuse system is one of the methods to reduce the environmental burden in the field of building steel structures. These buildings are assumed to be demolished within approximately 30 years or more for physical, architectural, economic and social reasons in Japan. In this paper, focusing on building steel structures used for plants, warehouses and offices without fire protection, the performance of steel structural members for reuse is evaluated by a non-destructive test. First, performance evaluation procedures for a non-destructive test, such as mechanical properties, chemical compositions, dimension and degradation, are shown. Tensile strengths are estimated using Vickers hardness measured by a portable ultrasonic hardness tester, and chemical compositions are measured by a portable optical emission spectrometer. The weldability of steel structural members is estimated by carbon equivalent and weld crack sensitivity composition using chemical compositions. Finally, the material grade of structural members of the building steel structure for reuse is estimated based on the proposed procedures. PMID:28788237
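    As an illustration of the weldability estimate mentioned above, the IIW carbon equivalent is one commonly used formula computed from chemical composition (shown here as an illustration; the paper may use a different variant):

    ```python
    # IIW carbon equivalent (all inputs in mass %):
    #   CEq = C + Mn/6 + (Cr + Mo + V)/5 + (Ni + Cu)/15
    # Lower CEq generally indicates better weldability (less preheat needed).
    def carbon_equivalent(C, Mn=0.0, Cr=0.0, Mo=0.0, V=0.0, Ni=0.0, Cu=0.0):
        return C + Mn / 6 + (Cr + Mo + V) / 5 + (Ni + Cu) / 15

    # A mild structural steel around 0.16% C and 1.4% Mn:
    print(round(carbon_equivalent(C=0.16, Mn=1.4), 3))   # 0.393
    ```

    In a reuse workflow like the one proposed, the inputs would come from the portable optical emission spectrometer readings, and the resulting CEq would feed the material-grade estimate for the recovered member.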

  9. Application of Various NDT Methods for the Evaluation of Building Steel Structures for Reuse.

    PubMed

    Fujita, Masanori; Masuda, Tomoya

    2014-10-22

    The reuse system proposed by the authors is an overall business system for realizing a cyclic reuse flow through the processes of design, fabrication, construction, maintenance, demolition and storage. The reuse system is one of the methods to reduce the environmental burden in the field of building steel structures. These buildings are assumed to be demolished within approximately 30 years or more for physical, architectural, economic and social reasons in Japan. In this paper, focusing on building steel structures used for plants, warehouses and offices without fire protection, the performance of steel structural members for reuse is evaluated by a non-destructive test. First, performance evaluation procedures for a non-destructive test, such as mechanical properties, chemical compositions, dimension and degradation, are shown. Tensile strengths are estimated using Vickers hardness measured by a portable ultrasonic hardness tester, and chemical compositions are measured by a portable optical emission spectrometer. The weldability of steel structural members is estimated by carbon equivalent and weld crack sensitivity composition using chemical compositions. Finally, the material grade of structural members of the building steel structure for reuse is estimated based on the proposed procedures.

  10. 40 CFR 35.2005 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sludge; aquifer recharge; aquaculture; direct reuse (non-potable); horticulture; revegetation of... design-type projects within the scope of the practice of architecture or professional engineering as... as designed. (10) Collector sewer. The common lateral sewers, within a publicly owned treatment...

  11. 40 CFR 35.2005 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... sludge; aquifer recharge; aquaculture; direct reuse (non-potable); horticulture; revegetation of... design-type projects within the scope of the practice of architecture or professional engineering as... as designed. (10) Collector sewer. The common lateral sewers, within a publicly owned treatment...

  12. 40 CFR 35.2005 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... sludge; aquifer recharge; aquaculture; direct reuse (non-potable); horticulture; revegetation of... design-type projects within the scope of the practice of architecture or professional engineering as... as designed. (10) Collector sewer. The common lateral sewers, within a publicly owned treatment...

  13. Support for comprehensive reuse

    NASA Technical Reports Server (NTRS)

    Basili, V. R.; Rombach, H. D.

    1991-01-01

    Reuse of products, processes, and other knowledge will be the key to enable the software industry to achieve the dramatic improvement in productivity and quality required to satisfy the anticipated growing demand. Although experience shows that certain kinds of reuse can be successful, general success has been elusive. A software life-cycle technology which allows comprehensive reuse of all kinds of software-related experience could provide the means to achieving the desired order-of-magnitude improvements. A comprehensive framework of models, model-based characterization schemes, and support mechanisms for better understanding, evaluating, planning, and supporting all aspects of reuse is introduced.

  14. Towards a comprehensive framework for reuse: A reuse-enabling software evolution environment

    NASA Technical Reports Server (NTRS)

    Basili, V. R.; Rombach, H. D.

    1988-01-01

    Reuse of products, processes and knowledge will be the key to enable the software industry to achieve the dramatic improvement in productivity and quality required to satisfy the anticipated growing demand. Although experience shows that certain kinds of reuse can be successful, general success has been elusive. A software life-cycle technology which allows broad and extensive reuse could provide the means to achieving the desired order-of-magnitude improvements. The scope of a comprehensive framework for understanding, planning, evaluating and motivating reuse practices and the necessary research activities is outlined. As a first step towards such a framework, a reuse-enabling software evolution environment model is introduced which provides a basis for the effective recording of experience, the generalization and tailoring of experience, the formalization of experience, and the (re-)use of experience.

  15. Facilitating Decision Making, Re-Use and Collaboration: A Knowledge Management Approach for System Self-Awareness

    DTIC Science & Technology

    2009-10-01

    FACILITATING DECISION MAKING, RE-USE AND COLLABORATION: A KNOWLEDGE MANAGEMENT APPROACH FOR SYSTEM SELF-AWARENESS Shelley P. Gallup, Douglas J... Information Systems Experimentation (DISE) Group Naval Postgraduate School, Monterey, CA 93943 Keywords: Program self-awareness, decision making...decision makers express in obtaining constant awareness of what is going on in their domains of decision making because information that is needed

  16. EMMA: a new paradigm in configurable software

    DOE PAGES

    Nogiec, J. M.; Trombly-Freytag, K.

    2017-11-23

    EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and composing them together in many different ways. As a result, it provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.
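    The loosely coupled, event-driven composition described here can be sketched with a minimal publish/subscribe bus (invented names, not the EMMA API): components never call each other directly, so any subset of them can be reused or recomposed.

    ```python
    from collections import defaultdict

    # Hypothetical event bus: components publish and subscribe to named events
    # instead of holding references to one another (loose coupling).
    class EventBus:
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, event, handler):
            self._subscribers[event].append(handler)

        def publish(self, event, payload):
            for handler in self._subscribers[event]:
                handler(payload)

    bus = EventBus()
    readings = []
    # A "logger" component and an "analysis" component, composed only via events:
    bus.subscribe("measurement.done", readings.append)
    bus.subscribe("measurement.done",
                  lambda p: bus.publish("analysis.start", p))
    bus.subscribe("analysis.start", lambda p: print("analyzing", p))
    bus.publish("measurement.done", {"magnet": "Q1", "field_T": 1.2})
    ```

    Dropping or replacing the analysis component requires no change to the measurement side, which is the composability and encapsulation advantage the abstract attributes to this architecture.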

  17. EMMA: A New Paradigm in Configurable Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nogiec, J. M.; Trombly-Freytag, K.

    EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and composing them together in many different ways. It provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.

  19. Dshell++: A Component Based, Reusable Space System Simulation Framework

    NASA Technical Reports Server (NTRS)

    Lim, Christopher S.; Jain, Abhinandan

    2009-01-01

    This paper describes the multi-mission Dshell++ simulation framework for high-fidelity, physics-based simulation of spacecraft, robotic manipulation and mobility systems. Dshell++ is a C++/Python library which uses modern script-driven object-oriented techniques to allow component reuse and a dynamic run-time interface for complex, high-fidelity simulation of spacecraft and robotic systems. The goal of the Dshell++ architecture is to manage the inherent complexity of physics-based simulations while supporting component model reuse across missions. The framework provides several features that support a large degree of simulation configurability and usability.

  20. EMMA: a new paradigm in configurable software

    NASA Astrophysics Data System (ADS)

    Nogiec, J. M.; Trombly-Freytag, K.

    2017-10-01

    EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and composing them together in many different ways. It provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.

  1. Mars Hybrid Propulsion System Trajectory Analysis. Part II; Cargo Missions

    NASA Technical Reports Server (NTRS)

    Chai, Patrick R.; Merrill, Raymond G.; Qu, Min

    2015-01-01

    NASA's Human Spaceflight Architecture Team is developing a reusable hybrid transportation architecture in which both chemical and electric propulsion systems are used to send crew and cargo to Mars destinations such as Phobos, Deimos, the surface of Mars, and other orbits around Mars. By combining chemical and electric propulsion into a single spaceship and applying each where it is more effective, the hybrid architecture enables a series of Mars trajectories that are more fuel-efficient than an all-chemical architecture without significant increases in flight times. This paper shows the feasibility of the hybrid transportation architecture to pre-deploy cargo to Mars and Phobos in support of the Evolvable Mars Campaign crew missions. The analysis shows that the hybrid propulsion stage is able to deliver all of the currently manifested payload to Phobos and Mars through the first three crew missions. The conjunction-class trajectory also allows the hybrid propulsion stage to return to Earth in a timely fashion, so it can be reused for additional cargo deployment. The 1,100-day total trip time allows the hybrid propulsion stage to deliver cargo to Mars every other Earth-Mars transit opportunity. For the first two Mars surface missions in the Evolvable Mars Campaign, the short trip time allows the hybrid propulsion stage to be reused for three round-trip journeys to Mars, which matches the hybrid propulsion stage's designed lifetime of three round-trip crew missions to the Martian sphere of influence.

  2. Personalized health care and health information technology policy: an exploratory analysis.

    PubMed

    Wald, Jonathan S; Shapiro, Michael

    2013-01-01

    Personalized healthcare (PHC) is envisioned to enhance clinical practice decision-making using new genome-driven knowledge that tailors diagnosis, treatment, and prevention to the individual patient. In 2012, we conducted a focused environmental scan and informal interviews with fifteen experts to anticipate how PHC might impact health information technology (IT) policy in the United States. Findings indicated that PHC has a variable impact on current clinical practice, creates complex questions for providers, patients, and policy-makers, and will require a robust health IT infrastructure with advanced data architecture, clinical decision support, provider workflow tools, and re-use of clinical data for research. A number of health IT challenge areas were identified, along with five policy areas: interoperable clinical decision support, standards for patient values and preferences, patient engagement, data transparency, and robust privacy and security.

  3. Health care professional workstation: software system construction using DSSA scenario-based engineering process.

    PubMed

    Hufnagel, S; Harbison, K; Silva, J; Mettala, E

    1994-01-01

    This paper describes a new method for the evolutionary determination of user requirements and system specifications called the scenario-based engineering process (SEP). Health care professional workstations are critical components of large-scale health care system architectures. We suggest that domain-specific software architectures (DSSAs) be used to specify standard interfaces and protocols for reusable software components throughout those architectures, including workstations. We encourage the use of engineering principles and abstraction mechanisms. Engineering principles are flexible guidelines, adaptable to particular situations. Abstraction mechanisms are simplifications for managing complexity. We recommend object-oriented design principles, graphical structural specifications, and formal behavioral specifications of components. We give an ambulatory care scenario and associated models to demonstrate SEP. The scenario uses health care terminology and gives patients' and health care providers' views of the system. Our goal is a threefold benefit: (i) scenario view abstractions provide consistent interdisciplinary communication; (ii) hierarchical object-oriented structures provide useful abstractions for reuse, understandability, and long-term evolution; and (iii) SEP and health care DSSAs are integrated into computer-aided software engineering (CASE) environments. These environments should support rapid construction and certification of individualized systems from reuse libraries.

  4. Enhancing the Reuse of Digital Resources for Integrated Systems to Represent, Understand and Dynamize Complex Interactions in Architectural Cultural Heritage Environments

    NASA Astrophysics Data System (ADS)

    Delgado, F. J.; Martinez, R.; Finat, J.; Martinez, J.; Puche, J. C.; Finat, F. J.

    2013-07-01

    In this work we develop a multiply interconnected system involving objects, agents and the interactions between them, built from ICT applied to open repositories, user communities and web services. Our approach is applied to Architectural Cultural Heritage Environments (ACHE). It includes components for digital accessibility (to augmented ACHE repositories), content management (ontologies for the semantic web), semiautomatic recognition (to ease the reuse of materials) and serious video games (for interaction in urban environments). Their combination supports local real and remote virtual tourism (including tools for low-level real-time display of renderings on portable devices), mobile-based smart interactions (with special regard to monitored environments) and CH-related games (as extended web services). Our main contributions to AR models on conventional GIS applied to architectural environments concern interactive support performed directly on digital files, which gives access to CH contents referred to GIS of urban districts (involving facades and historical or preindustrial buildings) and/or CH repositories in a ludic and transversal way, helping users acquire cognitive, medial and social abilities in collaborative environments.

  5. Fashion Your New Library from Old.

    ERIC Educational Resources Information Center

    Burgin, William R.

    1997-01-01

    Renovation, addition, and adaptive reuse of existing facilities offer many advantages over new construction: savings, preservation of historical or architecturally significant buildings, preservation of traditional location, and faster relocation to a more desirable location. Discusses building criteria: structure, hazardous materials, siting,…

  6. The 1993 AIA/ALA Building Award Recipients.

    ERIC Educational Resources Information Center

    Muller, Karen

    1993-01-01

    Describes the eight library buildings that won the 1993 Awards of Excellence for Library Architecture from the American Institute of Architects (AIA) and the American Library Association (ALA) including one adaptive reuse, four expansions, two new buildings, and one temporary building. (EAM)

  7. Packaging Software Assets for Reuse

    NASA Astrophysics Data System (ADS)

    Mattmann, C. A.; Marshall, J. J.; Downs, R. R.

    2010-12-01

    The reuse of existing software assets such as code, architecture, libraries, and modules in current software and systems development projects can provide many benefits, including reduced costs, in time and effort, and increased reliability. Many reusable assets are currently available in various online catalogs and repositories, usually broken down by disciplines such as programming language (Ibiblio for Maven/Java developers, PyPI for Python developers, CPAN for Perl developers, etc.). The way these assets are packaged for distribution can play a role in their reuse - an asset that is packaged simply and logically is typically easier to understand, install, and use, thereby increasing its reusability. A well-packaged asset has advantages in being more reusable and thus more likely to provide benefits through its reuse. This presentation will discuss various aspects of software asset packaging and how they can affect the reusability of the assets. The characteristics of well-packaged software will be described. A software packaging domain model will be introduced, and some existing packaging approaches examined. An example case study of a Reuse Enablement System (RES), currently being created by near-term Earth science decadal survey missions, will provide information about the use of the domain model. Awareness of these factors will help software developers package their reusable assets so that they can provide the most benefits for software reuse.

  8. EASY-SIM: A Visual Simulation System Software Architecture with an ADA 9X Application Framework

    DTIC Science & Technology

    1994-12-01

    development of software systems within a domain. Because an architecture promotes reuse at the design level, systems developers do not have to devote... physically separated actors into a battlefield situation. The interaction between the various simulators is accomplished by means of network connec... realized that it would be more productive to make reusable components from scratch [Sny93, 31-32]. Of notable exception were the network communications

  9. Using a Foundational Ontology for Reengineering a Software Enterprise Ontology

    NASA Astrophysics Data System (ADS)

    Perini Barcellos, Monalessa; de Almeida Falbo, Ricardo

    The knowledge about software organizations is considerably relevant to software engineers. The use of a common vocabulary for representing the useful knowledge about software organizations involved in software projects is important for several reasons, such as to support knowledge reuse and to allow communication and interoperability between tools. Domain ontologies can be used to define a common vocabulary for sharing and reuse of knowledge about some domain. Foundational ontologies can be used for evaluating and re-designing domain ontologies, giving them real-world semantics. This paper presents an evaluation of a Software Enterprise Ontology that was reengineered using the Unified Foundation Ontology (UFO) as a basis.

  10. ATOS-1: Designing the infrastructure for an advanced spacecraft operations system

    NASA Technical Reports Server (NTRS)

    Poulter, K. J.; Smith, H. N.

    1993-01-01

    The space industry has identified the need to use artificial intelligence and knowledge based system techniques as integrated, central, symbolic processing components of future mission design, support and operations systems. Various practical and commercial constraints require that off-the-shelf applications, and their knowledge bases, be reused where appropriate and that different mission contractors, potentially using different KBS technologies, can provide application and knowledge sub-modules of an overall integrated system. In order to achieve this integration, which we call knowledge sharing and distributed reasoning, there needs to be agreement on knowledge representations, knowledge interchange formats, knowledge level communications protocols, and ontology. Research indicates that the latter is most important, providing the applications with a common conceptualization of the domain, in our case spacecraft operations, mission design, and planning. Agreement on ontology permits applications that employ different knowledge representations to interwork through mediators which we refer to as knowledge agents. This creates the illusion of a shared model without the constraints, both technical and commercial, that occur in centralized or uniform architectures. This paper explains how these matters are being addressed within the ATOS program at ESOC, using techniques which draw upon ideas and standards emerging from the DARPA Knowledge Sharing Effort. In particular, we explain how the project is developing an electronic Ontology of Spacecraft Operations and how this can be used as an enabling component within space support systems that employ advanced software engineering. We indicate our hope and expectation that the core ontology developed in ATOS will permit the full development of standards for such systems throughout the space industry.

  11. STGT program: Ada coding and architecture lessons learned

    NASA Technical Reports Server (NTRS)

    Usavage, Paul; Nagurney, Don

    1992-01-01

    STGT (Second TDRSS Ground Terminal) is currently halfway through the System Integration Test phase (Level 4 Testing). To date, many software architecture and Ada language issues have been encountered and solved. This paper, which is the transcript of a presentation at the 3 Dec. meeting, attempts to define these lessons plus others learned regarding software project management and risk management issues, training, performance, reuse, and reliability. Observations are included regarding the use of particular Ada coding constructs, software architecture trade-offs during the prototyping, development and testing stages of the project, and dangers inherent in parallel or concurrent systems, software, hardware, and operations engineering.

  12. Multichannel Baseband Processor for Wideband CDMA

    NASA Astrophysics Data System (ADS)

    Jalloul, Louay M. A.; Lin, Jim

    2005-12-01

    The system architecture of the cellular base station modem engine (CBME) is described. The CBME is a single-chip multichannel transceiver capable of processing and demodulating signals from multiple users simultaneously. It is optimized to process different classes of code-division multiple-access (CDMA) signals. The paper will show that through key functional system partitioning, tightly coupled small digital signal processing cores, and time-sliced reuse architecture, CBME is able to achieve a high degree of algorithmic flexibility while maintaining efficiency. The paper will also highlight the implementation and verification aspects of the CBME chip design. In this paper, wideband CDMA is used as an example to demonstrate the architecture concept.

  13. Object linking in repositories

    NASA Technical Reports Server (NTRS)

    Eichmann, David (Editor); Beck, Jon; Atkins, John; Bailey, Bill

    1992-01-01

    This topic is covered in three sections. The first section explores some of the architectural ramifications of extending the Eichmann/Atkins lattice-based classification scheme to encompass the assets of the full life cycle of software development. A model is considered that provides explicit links between objects in addition to the edges connecting classification vertices in the standard lattice. The second section gives a description of the efforts to implement the repository architecture using a commercially available object-oriented database management system. Some of the features of this implementation are described, and some of the next steps to be taken to produce a working prototype of the repository are pointed out. In the final section, it is argued that design and instantiation of reusable components have competing criteria (design-for-reuse strives for generality, design-with-reuse strives for specificity) and that providing mechanisms for each can be complementary rather than antagonistic. In particular, it is demonstrated how program slicing techniques can be applied to customization of reusable components.

  14. Study on the E-commerce platform based on the agent

    NASA Astrophysics Data System (ADS)

    Fu, Ruixue; Qin, Lishuan; Gao, Yinmin

    2011-10-01

    To solve the problem of dynamic integration in e-commerce, a multi-agent architecture for an electronic commerce platform system based on agents and ontologies is introduced, comprising three major types of agent, an ontology and a rule collection. In this architecture, service agents and rules are used to realize business process reengineering, the reuse of software components, and the agility of the electronic commerce platform. To illustrate the architecture, a simulation was performed; the results imply that the architecture provides an efficient method to design and implement a flexible, distributed, open and intelligent electronic commerce platform system that addresses the problem of dynamic integration in e-commerce. The objective of this paper is to present the architecture of the electronic commerce platform system and to show how agents and ontologies support it.

  15. The Roles of Knowledge Professionals for Knowledge Management.

    ERIC Educational Resources Information Center

    Kim, Seonghee

    This paper starts by exploring the definition of knowledge and knowledge management; examples of acquisition, creation, packaging, application, and reuse of knowledge are provided. It then considers the partnership for knowledge management and especially how librarians as knowledge professionals, users, and technology experts can contribute to…

  16. EPA Scientific Knowledge Management Assessment and Needs

    EPA Science Inventory

    A series of activities have been conducted by a core group of EPA scientists from across the Agency. The activities were initiated in 2012 and the focus was to increase the reuse and interoperability of science software at EPA. The need for increased reuse and interoperability ...

  17. How Do We Measure Value in Data Reuse? Ethical Data Sharing for the Social Sciences and Indigenous Knowledge

    NASA Astrophysics Data System (ADS)

    Strawhacker, C.

    2017-12-01

    As a result of the 'open data' movement, how data should be attributed and cited has become increasingly important. As data are reused in analyses not performed by the original data creator, efforts have turned to crediting the creator through data citation and metrics of reuse, to ensure appropriate attribution to the original data author. This increased focus on metrics and citation, however, needs to be carefully considered when it comes to social science data, local observations, and Indigenous Knowledge held by Indigenous communities. These diverse and sometimes sensitive data/information/knowledge sets often require deep nuance, thought, and compromise within the 'open data' framework, in order to consider the confidentiality of research subjects and the ownership of data and information, often in a colonial context. Furthermore, these datasets are often highly valuable to one or two villages, saving lives and retaining culture within them. In such cases quantitative metrics of 'data reuse' and citation do not adequately measure a dataset's 'value.' On this panel, I will provide examples of datasets from my research in the Arctic and US Southwest that are highly valuable to small communities. These datasets are not highly cited and do not have impressive quantitative metrics (e.g., number of downloads), but they have been incredibly valuable to the communities where the data/information/Knowledge are held. These cases include atlases of placenames held by elders in small Arctic communities, as well as databases of local observations of wildlife and sea ice in Alaska that are essential for sharing knowledge across multiple villages. These examples suggest that a more nuanced approach to understanding how data should be credited would be useful when working with social science data and Indigenous Knowledge.

  18. Engineering design knowledge recycling in near-real-time

    NASA Technical Reports Server (NTRS)

    Leifer, Larry; Baya, Vinod; Toye, George; Baudin, Catherine; Underwood, Jody Gevins

    1994-01-01

    It is hypothesized that the capture and reuse of machine-readable design records is cost beneficial. This informal engineering notebook design knowledge can be used to model the artifact and the design process. Design rationale is, in part, preserved and available for examination. Redesign cycle time is significantly reduced (Baya et al., 1992). These factors contribute to making it less costly to capture and reuse knowledge than to recreate comparable knowledge (current practice). To test the hypothesis, we have focused on validation of the concept and tools in two 'real design' projects this past year: (1) a short (8-month) turnaround project for NASA life science bioreactor researchers was done by a team of three mechanical engineering graduate students at Stanford University (in a class, ME210abc 'Mechatronic Systems Design and Methodology' taught by one of the authors, Leifer); and (2) a long-range (8- to 20-year) international consortium project for NASA's Space Science program (STEP: satellite test of the equivalence principle). Design knowledge capture was supported this year by assigning the use of a Team-Design PowerBook. Design records were cataloged in near-real time. These records were used to qualitatively model the artifact design as it evolved. Dedal, an 'intelligent librarian' developed at NASA-ARC, was used to navigate and retrieve captured knowledge for reuse.

  19. Essential Use Cases for Pedagogical Patterns

    ERIC Educational Resources Information Center

    Derntl, Michael; Botturi, Luca

    2006-01-01

    Coming from architecture, through computer science, pattern-based design spread into other disciplines and is nowadays recognized as a powerful way of capturing and reusing effective design practice. However, current pedagogical pattern approaches lack widespread adoption, both by users and authors, and are still limited to individual initiatives.…

  20. Distributed Learning Metadata Standards

    ERIC Educational Resources Information Center

    McClelland, Marilyn

    2004-01-01

    Significant economies can be achieved in distributed learning systems architected with a focus on interoperability and reuse. The key building blocks of an efficient distributed learning architecture are the use of standards and XML technologies. The goal of plug and play capability among various components of a distributed learning system…

  1. Does the architectural layout of a NICU affect alarm pressure? A comparative clinical audit of a single-family room and an open bay area NICU using a retrospective study design.

    PubMed

    Joshi, Rohan; Straaten, Henrica van; Mortel, Heidi van de; Long, Xi; Andriessen, Peter; Pul, Carola van

    2018-06-30

    To determine differences in alarm pressure between two otherwise comparable neonatal intensive care units (NICUs) differing in architectural layout: one of a single-family room (SFR) design and the other of an open bay area (OBA) design. Retrospective audit of more than 2000 patient days from each NICU, cataloguing the differences in the number and duration of critical and alerting alarms, as well as the interaction of clinicians with the patient monitor. Two level 3 NICUs. A total of more than 150 000 critical and 1.2 million alerting alarms were acquired from the two NICUs. The number of audible alarms and the associated noise pollution varied considerably, with the OBA NICU generating 44% more alarms per infant per day even though the SFR NICU generated 2.5 times as many critical desaturation alarms per infant per day. Differences in the architectural layout of NICUs and the consequent differences in delays, thresholds and distribution systems for alarms are associated with differences in alarm pressure. © Author(s) (or their employer(s)) 2018. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ.

  2. Logistics Reduction and Repurposing Beyond Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Ewert, Michael K.; Broyan, James L., Jr.

    2012-01-01

    All human space missions, regardless of destination, require significant logistical mass and volume that is strongly proportional to mission duration. Anything that can be done to reduce the initial mass and volume of supplies, or to reuse items that have been launched, will be very valuable. Often, the logistical items require disposal and represent a trash burden. Logistics contributions to total mission architecture mass can be minimized by considering potential reuse using systems engineering analysis. In NASA's Advanced Exploration Systems "Logistics Reduction and Repurposing Project," various tasks will reduce the intrinsic mass of logistical packaging, enable reuse and repurposing of logistical packaging and carriers for other habitation, life support, crew health, and propulsion functions, and reduce or eliminate the nuisance aspects of trash at the same time. Repurposing reduces the trash burden and eliminates the need for hardware whose function can be provided by use of spent logistical items. However, these reuse functions need to be identified and built into future logistics systems to enable them to serve a secondary function effectively. These technologies and innovations will help future logistics systems support multiple exploration missions much more efficiently.

  3. 50 MHz-10 GHz low-power resistive feedback current-reuse mixer with inductive peaking for cognitive radio receiver.

    PubMed

    Vitee, Nandini; Ramiah, Harikrishnan; Chong, Wei-Keat; Tan, Gim-Heng; Kanesan, Jeevan; Reza, Ahmed Wasif

    2014-01-01

    A low-power wideband mixer is designed and implemented in 0.13 µm standard CMOS technology based on resistive feedback current-reuse (RFCR) configuration for the application of cognitive radio receiver. The proposed RFCR architecture incorporates an inductive peaking technique to compensate for gain roll-off at high frequency while enhancing the bandwidth. A complementary current-reuse technique is used between transconductance and IF stages to boost the conversion gain without additional power consumption by reusing the DC bias current of the LO stage. This downconversion double-balanced mixer exhibits a high and flat conversion gain (CG) of 14.9 ± 1.4 dB and a noise figure (NF) better than 12.8 dB. The maximum input 1-dB compression point (P1dB) and maximum input third-order intercept point (IIP3) are -13.6 dBm and -4.5 dBm, respectively, over the desired frequency ranging from 50 MHz to 10 GHz. The proposed circuit operates down to a supply headroom of 1 V with a low-power consumption of 3.5 mW.

  4. Framework for Architecture Trade Study Using MBSE and Performance Simulation

    NASA Technical Reports Server (NTRS)

    Ryan, Jessica; Sarkani, Shahram; Mazzuchi, Thomas

    2012-01-01

    Increasing complexity in modern systems, as well as cost and schedule constraints, requires a new paradigm of systems engineering to fulfill stakeholder needs. Challenges facing efficient trade studies include poor tool interoperability, lack of simulation coordination (design parameters) and requirements flowdown. A recent trend toward Model Based System Engineering (MBSE) includes flexible architecture definition, program documentation, requirements traceability and systems engineering reuse. As a new domain, MBSE still lacks governing standards and commonly accepted frameworks. This paper proposes a framework for efficient architecture definition using MBSE in conjunction with domain-specific simulation to evaluate trade studies. A general framework is provided, followed by a specific example that includes a method for designing a trade study, defining candidate architectures, planning simulations to fulfill requirements, and finally applying a weighted decision analysis to optimize system objectives.
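The abstract above closes with a weighted decision analysis over candidate architectures. A minimal sketch of that final step, using invented criteria, weights and scores purely for illustration (the cited paper's actual data is not reproduced here):

```python
# Weighted decision analysis for an architecture trade study.
# All criteria, weights and scores below are hypothetical.
criteria = {"cost": 0.4, "performance": 0.35, "risk": 0.25}  # weights sum to 1
candidates = {
    "Architecture A": {"cost": 7, "performance": 9, "risk": 6},
    "Architecture B": {"cost": 9, "performance": 6, "risk": 8},
}

def weighted_score(scores):
    # Weighted sum of per-criterion scores (higher is better).
    return sum(criteria[c] * scores[c] for c in criteria)

# Rank the candidate architectures by total weighted score.
ranked = sorted(candidates, key=lambda a: weighted_score(candidates[a]),
                reverse=True)
```

With these sample numbers, Architecture B scores 0.4*9 + 0.35*6 + 0.25*8 = 7.7 versus 7.45 for Architecture A, so it ranks first; in practice the weights would come from stakeholder objectives and the scores from the domain-specific simulations.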

  5. Enhanced Flexibility and Reusability through State Machine-Based Architectures for Multisensor Intelligent Robotics

    PubMed Central

    Herrero, Héctor; Outón, Jose Luis; Puerto, Mildred; Sallé, Damien; López de Ipiña, Karmele

    2017-01-01

    This paper presents a state machine-based architecture, which enhances the flexibility and reusability of industrial robots, more concretely dual-arm multisensor robots. The proposed architecture, in addition to allowing absolute control of the execution, eases the programming of new applications by increasing the reusability of the developed modules. Through an easy-to-use graphical user interface, operators are able to create, modify, reuse and maintain industrial processes, increasing the flexibility of the cell. Moreover, the proposed approach is applied in a real use case in order to demonstrate its capabilities and feasibility in industrial environments. A comparative analysis is presented for evaluating the presented approach versus traditional robot programming techniques. PMID:28561750
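The record above argues that a state machine-based architecture lets reusable modules be recomposed into new industrial processes. As a generic illustration (the states, events and transition table below are invented, not taken from the paper), the pattern reduces to a transition table that an operator could edit without touching the modules themselves:

```python
# Minimal state-machine sketch: behaviour is defined entirely by a
# transition table, so the same engine is reused across applications.
# States and events here are hypothetical examples.
class StateMachine:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # {(state, event): next_state}
        self.history = [initial]

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"no transition defined for {key}")
        self.state = self.transitions[key]
        self.history.append(self.state)

# A pick-and-place cycle expressed as data; a different process is just
# a different table handed to the same reusable engine.
sm = StateMachine("idle", {
    ("idle", "start"): "pick",
    ("pick", "grasped"): "place",
    ("place", "released"): "idle",
})
sm.fire("start")
sm.fire("grasped")
sm.fire("released")
```

Keeping the process definition as data is what allows the graphical user interface described in the abstract to create and modify processes while the underlying modules stay unchanged.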

  6. Enhanced Flexibility and Reusability through State Machine-Based Architectures for Multisensor Intelligent Robotics.

    PubMed

    Herrero, Héctor; Outón, Jose Luis; Puerto, Mildred; Sallé, Damien; López de Ipiña, Karmele

    2017-05-31

    This paper presents a state machine-based architecture, which enhances the flexibility and reusability of industrial robots, more concretely dual-arm multisensor robots. The proposed architecture, in addition to allowing absolute control of the execution, eases the programming of new applications by increasing the reusability of the developed modules. Through an easy-to-use graphical user interface, operators are able to create, modify, reuse and maintain industrial processes, increasing the flexibility of the cell. Moreover, the proposed approach is applied in a real use case in order to demonstrate its capabilities and feasibility in industrial environments. A comparative analysis is presented for evaluating the presented approach versus traditional robot programming techniques.

  7. Software reuse in spacecraft planning and scheduling systems

    NASA Technical Reports Server (NTRS)

    Mclean, David; Tuchman, Alan; Broseghini, Todd; Yen, Wen; Page, Brenda; Johnson, Jay; Bogovich, Lynn; Burkhardt, Chris; Mcintyre, James; Klein, Scott

    1993-01-01

    The use of a software toolkit and development methodology that supports software reuse is described. The toolkit includes source-code-level library modules and stand-alone tools which support such tasks as data reformatting and report generation, simple relational database applications, user interfaces, tactical planning, strategic planning and documentation. The current toolkit is written in C and supports applications that run on IBM PCs under DOS and UNIX-based workstations under OpenLook and Motif. The toolkit is fully integrated for building scheduling systems that reuse AI knowledge base technology. A typical scheduling scenario and three examples of applications that utilize the reuse toolkit are briefly described. In addition to the tools themselves, a description of the software evolution and reuse methodology that was used is presented.

  8. Nation-wide primary healthcare research network: a privacy protection assessment.

    PubMed

    De Clercq, Etienne; Van Casteren, Viviane; Bossuyt, Nathalie; Moreels, Sarah; Goderis, Geert; Bartholomeeusen, Stefaan; Bonte, Pierre; Bangels, Marc

    2012-01-01

    Efficiency and privacy protection are essential when setting up nationwide research networks. This paper investigates the extent to which basic services developed to support the provision of care can be re-used, whilst preserving an acceptable privacy protection level, within a large Belgian primary care research network. The generic sustainable confidentiality management model used to assess the privacy protection level of the selected network architecture is described. A short analysis of the current architecture is provided. Our generic model could also be used in other countries.

  9. Different micromanipulation applications based on common modular control architecture

    NASA Astrophysics Data System (ADS)

    Sipola, Risto; Vallius, Tero; Pudas, Marko; Röning, Juha

    2010-01-01

    This paper validates a previously introduced scalable modular control architecture and shows how it can be used to implement research equipment. The validation is conducted by presenting different kinds of micromanipulation applications that use the architecture. Conditions of the micro-world are very different from those of the macro-world. Adhesive forces are significant compared to gravitational forces when micro-scale objects are manipulated. Manipulation is mainly conducted by automatic control relying on haptic feedback provided by force sensors. The validated architecture is a hierarchical layered hybrid architecture, including a reactive layer and a planner layer. The implementation of the architecture is modular, and the architecture has a lot in common with open architectures. Further, the architecture is extensible, scalable, portable and it enables reuse of modules. These are the qualities that we validate in this paper. To demonstrate the claimed features, we present different applications that require special control in micrometer, millimeter and centimeter scales. These applications include a device that measures cell adhesion, a device that examines properties of thin films, a device that measures adhesion of micro fibers and a device that examines properties of submerged gel produced by bacteria. Finally, we analyze how the architecture is used in these applications.

  10. Demonstration of a tool for automatic learning and re-use of knowledge in the activated sludge process.

    PubMed

    Comas, J; Rodríguez-Roda, I; Poch, M; Gernaey, K V; Rosen, C; Jeppsson, U

    2006-01-01

    Wastewater treatment plant operators encounter complex operational problems related to the activated sludge process and usually respond to these by applying their own intuition and by taking advantage of what they have learnt from past experiences of similar problems. However, previous process experiences are not easy to integrate in numerical control, and new tools must be developed to enable re-use of plant operating experience. The aim of this paper is to investigate the usefulness of a case-based reasoning (CBR) approach to apply learning and re-use of knowledge gained during past incidents to confront actual complex problems through the IWA/COST Benchmark protocol. A case study shows that the proposed CBR system achieves a significant improvement of the benchmark plant performance when facing a high-flow event disturbance.
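The core of the case-based reasoning (CBR) approach described above is retrieving the most similar past incident and reusing its solution. A toy sketch of that retrieve-and-reuse step, with invented features and cases (not the paper's actual case base or similarity measure):

```python
# Toy CBR retrieval: reuse the stored solution of the nearest past case.
# Features, cases and advice strings are hypothetical examples.
import math

past_cases = [
    # ({feature: value}, stored operational advice)
    ({"flow": 20000, "svi": 150}, "increase recirculation"),
    ({"flow": 35000, "svi": 300}, "add coagulant, check filamentous bacteria"),
]

def distance(case_features, problem):
    # Euclidean distance over the shared numeric features.
    return math.sqrt(sum((case_features[k] - problem[k]) ** 2
                         for k in case_features))

def retrieve(problem):
    # Nearest-neighbour retrieval: most similar past case wins.
    best = min(past_cases, key=lambda c: distance(c[0], problem))
    return best[1]

advice = retrieve({"flow": 34000, "svi": 280})
```

A full CBR system would also revise the retrieved solution for the new situation and retain the outcome as a new case, closing the learning loop the abstract refers to.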

  11. Adaptive Reuse: Alternative to Vacant Schools. Cincinnati: The Midas Touch.

    ERIC Educational Resources Information Center

    Carroll, Charles W.

    1984-01-01

    Two dozen schools in Cincinnati, Ohio, were closed over a period of 3 years. Property sales yielded over $1 million; one property, considered an architectural gem, is now a health facility with office spaces and four others are leased. The leases can be cancelled if the district needs school space. (MLF)

  12. Creating a sustainable collaborative consumer health application for chronic disease self-management.

    PubMed

    Johnson, Constance M; McIlwain, Steve; Gray, Oliver; Willson, Bradley; Vorderstrasse, Allison

    2017-07-01

    As the prevalence of chronic diseases increases, there is a need for consumer-centric health informatics applications that assist individuals with disease self-management skills. However, given the cost of developing these applications, there is also a need for a disease-agnostic architecture so that they can be reused for any chronic disease. This paper describes the architecture of a collaborative virtual environment (VE) platform, LIVE©, that was developed to teach self-management skills and provide social support to individuals with type 2 diabetes. A backend database allows the application to be easily reused for any chronic disease. We tested its usability in the context of a larger randomized controlled trial of its efficacy. The usability was scored as 'good' by half of the participants in the evaluation. Common errors in the testing and solutions to address initial usability issues are discussed. Overall, LIVE© represents a usable and generalizable platform that will be adapted to other chronic diseases and health needs in future research and applications.

  13. Knowledge Management: The Bedrock of Enterprise Strategy.

    ERIC Educational Resources Information Center

    Stevens, George H.; Krasner, Scott M.

    2001-01-01

    Discussion of information technology and competitive advantages in organizations focuses on the scope of knowledge management, its goals, components, and outcomes. Highlights include the relationship between effective identification and use of existing knowledge; the creation and re-use of new knowledge; and the ability of an organization to…

  14. Modular modelling with Physiome standards

    PubMed Central

    Nickerson, David P.; Nielsen, Poul M. F.; Hunter, Peter J.

    2016-01-01

    Key points: The complexity of computational models is increasing, supported by research in modelling tools and frameworks, but relatively little thought has gone into design principles for complex models. We propose a set of design principles for complex model construction with the Physiome standard modelling protocol CellML. By following the principles, models are generated that are extensible and are themselves suitable for reuse in larger models of increasing complexity. We illustrate these principles with examples, including an architectural prototype linking, for the first time, electrophysiology, thermodynamically compliant metabolism, signal transduction, gene regulation and synthetic biology. The design principles complement other Physiome research projects, facilitating the application of virtual experiment protocols and model analysis techniques to assist the modelling community in creating libraries of composable, characterised and simulatable quantitative descriptions of physiology.

    Abstract: The ability to produce and customise complex computational models has great potential to have a positive impact on human health. As the field develops towards whole-cell models, and links such models in multi-scale frameworks to encompass tissue, organ, or organism levels, reuse of previous modelling efforts will become increasingly necessary. Any modelling group wishing to reuse existing computational models as modules for its own work faces many challenges in the construction, storage, retrieval, documentation and analysis of such modules. Physiome standards, frameworks and tools seek to address several of these challenges, especially for models expressed in the modular protocol CellML. Aside from providing a general ability to produce modules, there has been relatively little research on architectural principles of CellML models that will enable reuse at larger scales. To complement and support the existing tools and frameworks, we develop a set of principles to address this consideration. The principles are illustrated with examples that couple electrophysiology, signalling, metabolism, gene regulation and synthetic biology, together forming an architectural prototype for whole-cell modelling (including human intervention) in CellML. Such models illustrate how testable units of quantitative biophysical simulation can be constructed. Finally, future relationships between modular models so constructed and Physiome frameworks and tools are discussed, with particular reference to how such frameworks and tools can in turn be extended to complement and gain more benefit from the results of applying the principles. PMID:27353233

  15. Embedded ensemble propagation for improving performance, portability, and scalability of uncertainty quantification on emerging computational architectures

    DOE PAGES

    Phipps, Eric T.; D'Elia, Marta; Edwards, Harold C.; ...

    2017-04-18

    In this study, quantifying simulation uncertainties is a critical component of rigorous predictive simulation. A key component of this is forward propagation of uncertainties in simulation input data to output quantities of interest. Typical approaches involve repeated sampling of the simulation over the uncertain input data, and can require numerous samples when accurately propagating uncertainties from large numbers of sources. Often the simulation processes are similar from sample to sample, and much of the data generated by each sample evaluation could be reused. We explore a new method for implementing sampling methods that simultaneously propagates groups of samples together in an embedded fashion, which we call embedded ensemble propagation. We show how this approach takes advantage of properties of modern computer architectures to improve performance by enabling reuse between samples, reducing memory bandwidth requirements, improving memory access patterns, improving opportunities for fine-grained parallelization, and reducing communication costs. We describe a software technique for implementing embedded ensemble propagation based on the use of C++ templates and describe its integration with various scientific computing libraries within Trilinos. We demonstrate improved performance, portability and scalability for the approach applied to the simulation of partial differential equations on a variety of CPU, GPU, and accelerator architectures, including up to 131,072 cores on a Cray XK7 (Titan).
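    The core idea of propagating an ensemble of samples through one evaluation, rather than one solve per sample, can be illustrated with a toy vectorized model. This is a sketch of the concept only, not the Trilinos C++ template implementation; the model function and sample values are invented for illustration.

```python
# Illustrative sketch of embedded ensemble propagation: stack uncertain
# input samples along an extra array axis so that a single vectorized
# evaluation propagates every sample together, reusing shared grid data
# instead of re-traversing it once per sample. The model is a toy formula.
import numpy as np

def solve_one(diffusivity, grid):
    # Toy "simulation": steady response u(x) = x * (1 - x) / diffusivity.
    return grid * (1.0 - grid) / diffusivity

def solve_ensemble(diffusivities, grid):
    # Same arithmetic, but with shape (n_samples, 1) inputs, so one
    # vectorized pass propagates the whole ensemble in an embedded fashion.
    return grid * (1.0 - grid) / diffusivities[:, np.newaxis]

grid = np.linspace(0.0, 1.0, 5)           # shared data, read once
samples = np.array([0.5, 1.0, 2.0])       # uncertain input samples

one_at_a_time = np.stack([solve_one(d, grid) for d in samples])
embedded = solve_ensemble(samples, grid)
assert np.allclose(one_at_a_time, embedded)
```

    The embedded form produces identical results while touching the shared grid data only once, which is the memory-reuse benefit the paper exploits at scale.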

  16. Logistics Reduction and Repurposing Beyond Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Broyan, James Lee, Jr.; Ewert, Michael K.

    2011-01-01

    All human space missions, regardless of destination, require significant logistical mass and volume that is strongly proportional to mission duration. Anything that can be done to reduce the initial mass and volume of supplies, or to reuse items that have been launched, will be very valuable. Often, the logistical items require disposal and represent a trash burden. Utilizing systems engineering to analyze logistics from cradle to grave, and then to potential reuse, can minimize the logistics contribution to total mission architecture mass. In NASA's Advanced Exploration Systems Logistics Reduction and Repurposing Project, various tasks will reduce the intrinsic mass of logistical packaging; enable reuse and repurposing of logistical packaging and carriers for other habitation, life support, crew health, and propulsion functions; and reduce or eliminate the nuisance aspects of trash at the same time. Repurposing reduces the trash burden and eliminates the need for hardware whose function can be provided by spent logistics items. However, these reuse functions need to be identified and built into future logistics systems to enable them to effectively serve a secondary function. These technologies and innovations will help future logistics systems support multiple exploration missions much more efficiently.

  17. Domain knowledge patterns in pedagogical diagnostics

    NASA Astrophysics Data System (ADS)

    Miarka, Rostislav

    2017-07-01

    This paper proposes a representation of knowledge patterns in the RDF(S) language. Knowledge patterns are used for the reuse of knowledge and can be divided into two groups: top-level knowledge patterns and domain knowledge patterns. Pedagogical diagnostics is aimed at testing the knowledge of students at primary and secondary schools. An example of a domain knowledge pattern from pedagogical diagnostics is included in this paper.
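    A domain knowledge pattern of this kind can be sketched as a set of RDF-style (subject, predicate, object) triples. The sketch below uses plain Python tuples rather than an RDF library, and the class and property names (a pedagogical-diagnostics test with questions) are illustrative assumptions, not the paper's actual pattern.

```python
# Hedged sketch: an RDF(S)-style domain knowledge pattern as a triple set.
# "pd:" names are hypothetical; rdf:/rdfs: terms follow the RDF Schema vocabulary.
triples = {
    ("pd:Test", "rdf:type", "rdfs:Class"),
    ("pd:Question", "rdf:type", "rdfs:Class"),
    ("pd:hasQuestion", "rdfs:domain", "pd:Test"),   # pattern: tests hold questions
    ("pd:hasQuestion", "rdfs:range", "pd:Question"),
    ("pd:MathTest1", "rdf:type", "pd:Test"),        # instance reusing the pattern
    ("pd:MathTest1", "pd:hasQuestion", "pd:Q1"),
}

def objects(subject, predicate):
    """All objects o such that (subject, predicate, o) is in the graph."""
    return {o for s, p, o in triples if s == subject and p == predicate}
```

    Reuse here means that a new domain (say, a physics test) instantiates the same pattern classes and properties instead of defining its own schema.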

  18. Framework for a clinical information system.

    PubMed

    Van De Velde, R; Lansiers, R; Antonissen, G

    2002-01-01

    The design and implementation of a Clinical Information System architecture is presented. This architecture has been developed and implemented based on components, following a strong underlying conceptual and technological model. Common Object Request Broker Architecture (CORBA) and n-tier technology are used, with centralised and departmental clinical information systems as the back-end store for all clinical data. Servers located in the "middle" tier apply the clinical (business) model and application rules. The main characteristics are the focus on modelling and on reuse of both data and business logic. Scalability, as well as adaptability to constantly changing requirements via component-driven computing, are the main reasons for this approach.

  19. Partitioning Strategy Using Static Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Seo, Yongjin; Soo Kim, Hyeon

    2016-08-01

    Flight software is the software that runs on a satellite's on-board computers. It has requirements such as real-time behaviour and reliability, and the IMA architecture is used to satisfy them. The IMA architecture introduces the concept of partitions, which affects the configuration of flight software: software that had been loaded on one system must now be divided into many partitions when loaded. Existing studies address this issue with experience-based partitioning methods, but such methods cannot be reused. This paper therefore proposes a partitioning method, based on static analysis, that is reusable and consistent.

  20. Flight Software Development for the CHEOPS Instrument with the CORDET Framework

    NASA Astrophysics Data System (ADS)

    Cechticky, V.; Ottensamer, R.; Pasetti, A.

    2015-09-01

    CHEOPS is an ESA S-class mission dedicated to the precise measurement of radii of already known exoplanets using ultra-high precision photometry. The instrument flight software controlling the instrument and handling the science data is developed by the University of Vienna using the CORDET Framework offered by P&P Software GmbH. The CORDET Framework provides a generic software infrastructure for PUS-based applications. This paper describes how the framework is used for the CHEOPS application software to provide a consistent solution for the communication and control services, event handling and FDIR procedures. This approach is innovative in four respects: (a) it is a true third-party reuse; (b) reuse is done at the specification, validation and code levels; (c) the reusable assets and their qualification data package are entirely open-source; (d) reuse is based on call-backs, with the application developer providing functions that are called by the reusable architecture.

  1. Multidisciplinary Modelling of Symptoms and Signs with Archetypes and SNOMED-CT for Clinical Decision Support.

    PubMed

    Marco-Ruiz, Luis; Maldonado, J Alberto; Karlsen, Randi; Bellika, Johan G

    2015-01-01

    Clinical Decision Support Systems (CDSS) help to improve health care and reduce costs. However, the lack of knowledge management and modelling hampers their maintenance and reuse. Current EHR standards and terminologies allow the semantic representation of the data and knowledge of CDSS, boosting their interoperability, reuse and maintenance. This paper presents the modelling process of respiratory conditions' symptoms and signs for a CDSS by a multidisciplinary team of clinicians and information architects, with the help of openEHR, SNOMED-CT and clinical information modelling tools. The information model of the CDSS was defined by means of an archetype, and the knowledge model was implemented by means of a SNOMED-CT-based ontology.

  2. STRS Compliant FPGA Waveform Development

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer; Downey, Joseph

    2008-01-01

    The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside a FPGA. Current standards were researched and new standard interfaces were proposed. The implementation of the proposed standard interfaces on a laboratory breadboard SDR will be presented.
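    The value of a standard interface between waveform functions, as proposed above, is that any component implementing the same contract can be swapped or reused across radios. The sketch below illustrates that general idea in Python; the interface and method names are invented for illustration and are not the STRS or FPGA interface definitions.

```python
# Hedged sketch of the "standard interface between waveform functions" idea:
# every waveform component implements one minimal contract, so components
# can be recombined and reused. Names are hypothetical, not STRS-defined.
from abc import ABC, abstractmethod

class WaveformComponent(ABC):
    """Illustrative standard interface for a waveform processing stage."""

    @abstractmethod
    def configure(self, params: dict) -> None:
        """Apply configuration parameters before processing."""

    @abstractmethod
    def process(self, samples: list) -> list:
        """Transform a block of samples and return the result."""

class Scaler(WaveformComponent):
    """Example stage: multiplies each sample by a configured gain."""

    def configure(self, params):
        self.gain = params.get("gain", 1.0)

    def process(self, samples):
        return [self.gain * s for s in samples]
```

    A radio built against `WaveformComponent` can chain any sequence of conforming stages without knowing their internals, which is the reuse property the standard interface is meant to provide.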

  3. Management of Knowledge Representation Standards Activities

    NASA Technical Reports Server (NTRS)

    Patil, Ramesh S.

    1993-01-01

    Ever since the mid-seventies, researchers have recognized that capturing knowledge is the key to building large and powerful AI systems. In the years since, we have also found that representing knowledge is difficult and time consuming. In spite of the tools developed to help with knowledge acquisition, knowledge base construction remains one of the major costs in building an AI system: for almost every system we build, a new knowledge base must be constructed from scratch. As a result, most systems remain small to medium in size. Even if we build several systems within a general area, such as medicine or electronics diagnosis, significant portions of the domain must be represented for every system we create. The cost of this duplication of effort has been high and will become prohibitive as we attempt to build larger and larger systems. To overcome this barrier we must find ways of preserving existing knowledge bases and of sharing, re-using, and building on them. This report describes the efforts undertaken over the last two years to identify the issues underlying the current difficulties in sharing and reuse, and a community-wide initiative to overcome them. First, we discuss four bottlenecks to sharing and reuse, present a vision of a future in which these bottlenecks have been ameliorated, and describe the efforts of the initiative's four working groups to address these bottlenecks. We then address the supporting technology and infrastructure that is critical to enabling the vision of the future. Finally, we consider topics of longer-range interest by reviewing some of the research issues raised by our vision.

  4. Domain Modeling and Application Development of an Archetype- and XML-based EHRS. Practical Experiences and Lessons Learnt.

    PubMed

    Kropf, Stefan; Chalopin, Claire; Lindner, Dirk; Denecke, Kerstin

    2017-06-28

    Access to patient data within a hospital or between hospitals is still problematic, since a variety of information systems is in use, applying different vendor-specific terminologies and underlying knowledge models. Beyond that, the development of electronic health record systems (EHRSs) is time and resource consuming. Thus, there is a substantial need for a development strategy for standardized EHRSs. We apply a reuse-oriented process model and demonstrate its feasibility and realization on a practical medical use case: an EHRS holding all relevant data arising in the context of treatment of tumors of the sella region. In this paper, we describe the development process and our practical experiences. Requirements for the development of the EHRS were collected by interviews with a neurosurgeon and by patient data analysis. For modelling of patient data, we selected openEHR as the standard and exploited the software tools provided by the openEHR foundation. The patient information model forms the core of the development process, which comprises the EHR generation and the implementation of an EHRS architecture. Moreover, a reuse-oriented process model from the business domain was adapted to the development of the EHRS, providing a suitable abstraction of both the modeling and the development of an EHR-centered EHRS. The information modeling process resulted in 18 archetypes that were aggregated in a template, which formed the basis of the model-driven development. The EHRs and the EHRS were developed using openEHR and W3C standards, tightly supported by well-established XML techniques. The GUI of the final EHRS integrates and visualizes information from various examinations, medical reports, findings and laboratory test results. We conclude that the development of a standardized overarching EHR and an EHRS is feasible using openEHR and W3C standards, enabling a high degree of semantic interoperability. The standardized representation visualizes data and can in this way support the decision process of clinicians.

  5. On-Board Software Reference Architecture for Payloads

    NASA Astrophysics Data System (ADS)

    Bos, Victor; Rugina, Ana; Trcka, Adam

    2016-08-01

    The goal of the On-board Software Reference Architecture for Payloads (OSRA-P) is to identify an architecture for payload software to harmonize the payload domain, to enable more reuse of common/generic payload software across different payloads and missions and to ease the integration of the payloads with the platform.To investigate the payload domain, recent and current payload instruments of European space missions have been analyzed. This led to a Payload Catalogue describing 12 payload instruments as well as a Capability Matrix listing specific characteristics of each payload. In addition, a functional decomposition of payload software was prepared which contains functionalities typically found in payload systems. The definition of OSRA-P was evaluated by case studies and a dedicated OSRA-P workshop to gather feedback from the payload community.

  6. Advances in knowledge-based software engineering

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    The underlying hypothesis of this work is that a rigorous and comprehensive software reuse methodology can bring about a more effective and efficient utilization of constrained resources in the development of large-scale software systems by both government and industry. It is also believed that correct use of this type of software engineering methodology can significantly contribute to the higher levels of reliability that will be required of future operational systems. An overview and discussion of current research in the development and application of two systems that support a rigorous reuse paradigm are presented: the Knowledge-Based Software Engineering Environment (KBSEE) and the Knowledge Acquisition for the Preservation of Tradeoffs and Underlying Rationales (KAPTUR) systems. Emphasis is on a presentation of operational scenarios which highlight the major functional capabilities of the two systems.

  7. Managing a project's legacy: implications for organizations and project management

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.; Hecht, Michael H.; Majchrzak, Ann

    2003-01-01

    Organizations that rely on projects to implement their products must find effective mechanisms for propagating lessons learned on one project throughout the organization. A broad view of what constitutes a project's 'legacy' is presented that includes not just the design products and leftover parts, but new processes, relationships, technology, skills, planning data, and performance metrics. Based on research evaluating knowledge reuse in innovative contexts, this paper presents an approach to project legacy management that focuses on collecting and using legacy knowledge to promote organizational learning and effective reuse, while addressing factors of post-project responsibility, information obsolescence, and the importance of ancillary contextual information.

  8. Knowledge-based approach for generating target system specifications from a domain model

    NASA Technical Reports Server (NTRS)

    Gomaa, Hassan; Kerschberg, Larry; Sugumaran, Vijayan

    1992-01-01

    Several institutions in industry and academia are pursuing research efforts in domain modeling to address unresolved issues in software reuse. To demonstrate the concepts of domain modeling and software reuse, a prototype software engineering environment is being developed at George Mason University to support the creation of domain models and the generation of target system specifications. This prototype environment, which is application domain independent, consists of an integrated set of commercial off-the-shelf software tools and custom-developed software tools. This paper describes the knowledge-based tool that was developed as part of the environment to generate target system specifications from a domain model.

  9. Microservices in Web Objects Enabled IoT Environment for Enhancing Reusability

    PubMed Central

    Chong, Ilyoung

    2018-01-01

    In the ubiquitous Internet of Things (IoT) environment, reusing existing objects instead of creating new ones has become important in academia and industry. The situation is complicated by the huge number of connected IoT objects and by the fact that each individual service creates a new object instead of reusing an existing one to fulfill a requirement. A well-standardized mechanism not only improves the reusability of objects but also improves service modularity and extensibility, and reduces cost. The Web Objects enabled IoT environment applies the principle of reusability of objects across multiple IoT application domains through a central objects repository and microservices. To reuse objects with microservices and to maintain a relationship with them, this study presents an architecture for a Web of Objects platform. In the case of a similar request for an object, an already instantiated object that exists in the same or another domain can be reused. Reuse of objects through microservices avoids duplication and reduces the time needed to search for and instantiate them from their registries. Further, this article presents an algorithm for the discovery of microservices and related objects that considers the reusability of objects through the central objects repository, together with the object-matching algorithm needed to support it. To realize the reusability of objects in a Web Objects enabled IoT environment, a prototype has been designed and implemented based on a use case scenario. Finally, the results of the prototype have been analyzed and discussed to validate the proposed approach. PMID:29373491
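    The reuse-before-create behaviour described above can be sketched as a small registry lookup: a request first searches the central objects repository for a matching instantiated object, and only instantiates a new one on a miss. The registry structure, matching rule and object names below are hypothetical illustrations, not the paper's actual algorithm.

```python
# Hedged sketch of reuse-before-create against a central objects repository.
# An object is matched on its capability type; a hit from any domain is
# reused, and a new object is instantiated only when no match exists.
registry = []  # central objects repository: list of {"type", "domain", "id"}

def find_or_create(obj_type, domain):
    """Return (object, reused): reuse a matching object, else instantiate."""
    for obj in registry:
        if obj["type"] == obj_type:       # match on capability, not domain
            return obj, True              # reused across domains
    obj = {"type": obj_type, "domain": domain, "id": len(registry)}
    registry.append(obj)                  # register the new instantiation
    return obj, False

first, was_reused = find_or_create("temperature-sensor", "smart-home")
second, was_reused_2 = find_or_create("temperature-sensor", "smart-farm")
assert not was_reused and was_reused_2 and first is second
```

    The second request, although it comes from a different domain, reuses the object instantiated for the first, which is the duplication-avoidance the article describes.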

  10. Microservices in Web Objects Enabled IoT Environment for Enhancing Reusability.

    PubMed

    Jarwar, Muhammad Aslam; Kibria, Muhammad Golam; Ali, Sajjad; Chong, Ilyoung

    2018-01-26

    In the ubiquitous Internet of Things (IoT) environment, reusing existing objects instead of creating new ones has become important in academia and industry. The situation is complicated by the huge number of connected IoT objects and by the fact that each individual service creates a new object instead of reusing an existing one to fulfill a requirement. A well-standardized mechanism not only improves the reusability of objects but also improves service modularity and extensibility, and reduces cost. The Web Objects enabled IoT environment applies the principle of reusability of objects across multiple IoT application domains through a central objects repository and microservices. To reuse objects with microservices and to maintain a relationship with them, this study presents an architecture for a Web of Objects platform. In the case of a similar request for an object, an already instantiated object that exists in the same or another domain can be reused. Reuse of objects through microservices avoids duplication and reduces the time needed to search for and instantiate them from their registries. Further, this article presents an algorithm for the discovery of microservices and related objects that considers the reusability of objects through the central objects repository, together with the object-matching algorithm needed to support it. To realize the reusability of objects in a Web Objects enabled IoT environment, a prototype has been designed and implemented based on a use case scenario. Finally, the results of the prototype have been analyzed and discussed to validate the proposed approach.

  11. Clinical professional governance for detailed clinical models.

    PubMed

    Goossen, William; Goossen-Baremans, Anneke

    2013-01-01

    This chapter describes the need for Detailed Clinical Models for contemporary electronic health systems, data exchange and data reuse. It starts with an explanation of the components related to Detailed Clinical Models, with a brief summary of knowledge representation, including terminologies representing clinically relevant "things" in the real world, and information models that abstract these in order to let computers process data about them. Next, Detailed Clinical Models are defined and their purpose is described. This builds on existing developments around the world, culminating in current work to create a technical specification at the level of the International Organization for Standardization. The core components of properly expressed Detailed Clinical Models are illustrated, including clinical knowledge and context, data element specification, code bindings to terminologies, and meta-information about authors and versioning, among others. Detailed Clinical Models to date are heavily based on user requirements and specify the conceptual and logical levels of modelling. This is not precise enough for specific implementations, which require an additional step; however, it allows Detailed Clinical Models to serve as specifications for many different kinds of implementations. Examples of Detailed Clinical Models are presented both in text and in the Unified Modelling Language. Detailed Clinical Models can be positioned in health information architectures, where they serve at the most detailed, granular level. The chapter ends with examples of projects that create and deploy Detailed Clinical Models. All have in common that they can often reuse materials from earlier projects, and that strict governance of these models is essential to use them safely in health care information and communication technology. Clinical validation is one point of such governance, and model testing another. The Plan-Do-Check-Act cycle can be applied for governance of Detailed Clinical Models. Finally, collections of clinical models require a repository in which they can be stored, searched, and maintained. Governance of Detailed Clinical Models is required at local, national, and international levels.

  12. Knowledge acquisition, semantic text mining, and security risks in health and biomedical informatics

    PubMed Central

    Huang, Jingshan; Dou, Dejing; Dang, Jiangbo; Pardue, J Harold; Qin, Xiao; Huan, Jun; Gerthoffer, William T; Tan, Ming

    2012-01-01

    Computational techniques have been adopted in medical and biological systems for a long time. There is no doubt that the development and application of computational methods will render great help in better understanding biomedical and biological functions. Large amounts of datasets have been produced by biomedical and biological experiments and simulations. In order for researchers to gain knowledge from original data, nontrivial transformation is necessary, which is regarded as a critical link in the chain of knowledge acquisition, sharing, and reuse. Challenges that have been encountered include: how to efficiently and effectively represent human knowledge in formal computing models, how to take advantage of semantic text mining techniques rather than traditional syntactic text mining, and how to handle security issues during the knowledge sharing and reuse. This paper summarizes the state-of-the-art in these research directions. We aim to provide readers with an introduction of major computing themes to be applied to the medical and biological research. PMID:22371823

  13. A Collaborative Knowledge Plane for Autonomic Networks

    NASA Astrophysics Data System (ADS)

    Mbaye, Maïssa; Krief, Francine

    Autonomic networking aims to give network components self-managing capabilities. Several autonomic architectures have been proposed, each of which includes some form of knowledge plane, which is essential for mimicking autonomic behavior. The knowledge plane has a central role in self-functions, providing suitable knowledge to equipment, and it needs to learn new strategies for greater accuracy. However, defining the knowledge plane's architecture is still a challenge for researchers, especially defining the way cognitive supports in the knowledge plane interact with each other, and implementing them. The decision-making process depends on these interactions between the reasoning and learning parts of the knowledge plane. In this paper we propose a knowledge plane architecture based on a machine learning paradigm (inductive logic programming) and a situated view to deal with distributed environments. This architecture is focused on two self-functions that subsume all others: self-adaptation and self-organization. Case studies are given and implemented.

  14. Mission Benefits Analysis of Logistics Reduction Technologies

    NASA Technical Reports Server (NTRS)

    Ewert, Michael K.; Broyan, James Lee, Jr.

    2013-01-01

    Future space exploration missions will need to use less logistical supplies if humans are to live for longer periods away from our home planet. Anything that can be done to reduce initial mass and volume of supplies or reuse or recycle items that have been launched will be very valuable. Reuse and recycling also reduce the trash burden and associated nuisances, such as smell, but require good systems engineering and operations integration to reap the greatest benefits. A systems analysis was conducted to quantify the mass and volume savings of four different technologies currently under development by NASA's Advanced Exploration Systems (AES) Logistics Reduction and Repurposing project. Advanced clothing systems lead to savings by direct mass reduction and increased wear duration. Reuse of logistical items, such as packaging, for a second purpose allows fewer items to be launched. A device known as a heat melt compactor drastically reduces the volume of trash, recovers water and produces a stable tile that can be used instead of launching additional radiation protection. The fourth technology, called trash-to-gas, can benefit a mission by supplying fuel such as methane to the propulsion system. This systems engineering work will help improve logistics planning and overall mission architectures by determining the most effective use, and reuse, of all resources.

  15. Mission Benefits Analysis of Logistics Reduction Technologies

    NASA Technical Reports Server (NTRS)

    Ewert, Michael K.; Broyan, James L.

    2012-01-01

    Future space exploration missions will need to use fewer logistical supplies if humans are to live for longer periods away from our home planet. Anything that can be done to reduce the initial mass and volume of supplies, or to reuse or recycle items that have been launched, will be very valuable. Reuse and recycling also reduce the trash burden and associated nuisances, such as smell, but require good systems engineering and operations integration to reap the greatest benefits. A systems analysis was conducted to quantify the mass and volume savings of four different technologies currently under development by NASA's Advanced Exploration Systems (AES) Logistics Reduction and Repurposing project. Advanced clothing systems lead to savings through direct mass reduction and increased wear duration. Reuse of logistical items, such as packaging, for a second purpose allows fewer items to be launched. A device known as a heat melt compactor drastically reduces the volume of trash, recovers water, and produces a stable tile that can be used instead of launching additional radiation protection. The fourth technology, called trash-to-supply-gas, can benefit a mission by supplying fuel such as methane to the propulsion system. This systems engineering work will help improve logistics planning and overall mission architectures by determining the most effective use, and reuse, of all resources.

  16. Intelligent systems technology infrastructure for integrated systems

    NASA Technical Reports Server (NTRS)

    Lum, Henry

    1991-01-01

    A system infrastructure must be properly designed and integrated from the conceptual development phase to accommodate evolutionary intelligent technologies. Several technology development activities were identified that may have application to rendezvous and capture systems. Optical correlators in conjunction with fuzzy logic control might be used for the identification, tracking, and capture of either cooperative or non-cooperative targets without the intensive computational requirements associated with vision processing. A hybrid digital/analog system was developed and tested with a robotic arm. An aircraft refueling application demonstration is planned within two years. Initially this demonstration will be ground based with a follow-on air based demonstration. System dependability measurement and modeling techniques are being developed for fault management applications. This involves usage of incremental solution/evaluation techniques and modularized systems to facilitate reuse and to take advantage of natural partitions in system models. Though not yet commercially available and currently subject to accuracy limitations, technology is being developed to perform optical matrix operations to enhance computational speed. Optical terrain recognition using camera image sequencing processed with optical correlators is being developed to determine position and velocity in support of lander guidance. The system is planned for testing in conjunction with Dryden Flight Research Facility. Advanced architecture technology is defining open architecture design constraints, test bed concepts (processors, multiple hardware/software and multi-dimensional user support, knowledge/tool sharing infrastructure), and software engineering interface issues.

  17. In-situ Resource Utilization (ISRU) and Lunar Surface Systems

    NASA Technical Reports Server (NTRS)

    Sanders, Jerry; Larson, Bill; Sacksteder, Kurt

    2007-01-01

    This viewgraph presentation reviews the benefits of In-Situ Resource Utilization (ISRU) on the surface of the Moon, including the commercialization of lunar ISRU. ISRU will strongly influence architecture and critical technologies. It is a critical capability and a key implementation of the Vision for Space Exploration (VSE). ISRU strongly affects lunar outpost logistics, design, and crew safety, as well as outpost critical technologies. ISRU mass investment is minimal compared to the immediate and long-term architecture delivery mass and reuse capabilities provided. Therefore, investment in ISRU constitutes a commitment to the mid- and long-term future of human exploration.

  18. The theory of interface slicing

    NASA Technical Reports Server (NTRS)

    Beck, Jon

    1993-01-01

    Interface slicing is a new tool which was developed to facilitate reuse-based software engineering, by addressing the following problems, needs, and issues: (1) size of systems incorporating reused modules; (2) knowledge requirements for program modification; (3) program understanding for reverse engineering; (4) module granularity and domain management; and (5) time and space complexity of conventional slicing. The definition of a form of static program analysis called interface slicing is addressed.

  19. Laying the Groundwork for Enterprise-Wide Medical Language Processing Services: Architecture and Process

    PubMed Central

    Chen, Elizabeth S.; Maloney, Francine L.; Shilmayster, Eugene; Goldberg, Howard S.

    2009-01-01

    A systematic and standard process for capturing information within free-text clinical documents could facilitate opportunities for improving quality and safety of patient care, enhancing decision support, and advancing data warehousing across an enterprise setting. At Partners HealthCare System, the Medical Language Processing (MLP) services project was initiated to establish a component-based architectural model and processes to facilitate putting MLP functionality into production for enterprise consumption, promote sharing of components, and encourage reuse. Key objectives included exploring the use of an open-source framework called the Unstructured Information Management Architecture (UIMA) and leveraging existing MLP-related efforts, terminology, and document standards. This paper describes early experiences in defining the infrastructure and standards for extracting, encoding, and structuring clinical observations from a variety of clinical documents to serve enterprise-wide needs. PMID:20351830

  20. Laying the groundwork for enterprise-wide medical language processing services: architecture and process.

    PubMed

    Chen, Elizabeth S; Maloney, Francine L; Shilmayster, Eugene; Goldberg, Howard S

    2009-11-14

    A systematic and standard process for capturing information within free-text clinical documents could facilitate opportunities for improving quality and safety of patient care, enhancing decision support, and advancing data warehousing across an enterprise setting. At Partners HealthCare System, the Medical Language Processing (MLP) services project was initiated to establish a component-based architectural model and processes to facilitate putting MLP functionality into production for enterprise consumption, promote sharing of components, and encourage reuse. Key objectives included exploring the use of an open-source framework called the Unstructured Information Management Architecture (UIMA) and leveraging existing MLP-related efforts, terminology, and document standards. This paper describes early experiences in defining the infrastructure and standards for extracting, encoding, and structuring clinical observations from a variety of clinical documents to serve enterprise-wide needs.

  1. A common type system for clinical natural language processing

    PubMed Central

    2013-01-01

    Background One challenge in reusing clinical data stored in electronic medical records is that these data are heterogeneous. Clinical Natural Language Processing (NLP) plays an important role in transforming information in clinical text into a standard representation that is comparable and interoperable. Information may be processed and shared when a type system specifies the allowable data structures. Therefore, we aim to define a common type system for clinical NLP that enables interoperability between structured and unstructured data generated in different clinical settings. Results We describe a common type system for clinical NLP that has an end target of deep semantics based on Clinical Element Models (CEMs), thus interoperating with structured data and accommodating diverse NLP approaches. The type system has been implemented in UIMA (Unstructured Information Management Architecture) and is fully functional in a popular open-source clinical NLP system, cTAKES (clinical Text Analysis and Knowledge Extraction System) versions 2.0 and later. Conclusions We have created a type system that targets deep semantics, thereby allowing NLP systems to encapsulate knowledge from text and share it alongside heterogeneous clinical data sources. Rather than the surface semantics that are typically the end product of NLP algorithms, CEM-based semantics explicitly build in deep clinical semantics as the point of interoperability with more structured data types. PMID:23286462

  2. A common type system for clinical natural language processing.

    PubMed

    Wu, Stephen T; Kaggal, Vinod C; Dligach, Dmitriy; Masanz, James J; Chen, Pei; Becker, Lee; Chapman, Wendy W; Savova, Guergana K; Liu, Hongfang; Chute, Christopher G

    2013-01-03

    One challenge in reusing clinical data stored in electronic medical records is that these data are heterogeneous. Clinical Natural Language Processing (NLP) plays an important role in transforming information in clinical text into a standard representation that is comparable and interoperable. Information may be processed and shared when a type system specifies the allowable data structures. Therefore, we aim to define a common type system for clinical NLP that enables interoperability between structured and unstructured data generated in different clinical settings. We describe a common type system for clinical NLP that has an end target of deep semantics based on Clinical Element Models (CEMs), thus interoperating with structured data and accommodating diverse NLP approaches. The type system has been implemented in UIMA (Unstructured Information Management Architecture) and is fully functional in a popular open-source clinical NLP system, cTAKES (clinical Text Analysis and Knowledge Extraction System) versions 2.0 and later. We have created a type system that targets deep semantics, thereby allowing NLP systems to encapsulate knowledge from text and share it alongside heterogeneous clinical data sources. Rather than the surface semantics that are typically the end product of NLP algorithms, CEM-based semantics explicitly build in deep clinical semantics as the point of interoperability with more structured data types.
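    The idea of a common type system can be illustrated outside UIMA. Real UIMA type systems are declared in XML descriptors consumed by analysis engines; the Python sketch below is only an analogue, and the `DiseaseMention` type, its fields, and the concept codes are invented for illustration.

```python
# Minimal Python analogue of a shared annotation type system: a common
# schema that different NLP engines can all target, so their output is
# comparable and interoperable downstream.
from dataclasses import dataclass

@dataclass
class Annotation:
    """Base type: a span of source text."""
    begin: int
    end: int

@dataclass
class DiseaseMention(Annotation):
    """Hypothetical deep-semantic type layered on the base span."""
    cui: str              # concept code, e.g. a UMLS CUI
    negated: bool = False

# Two different engines can emit the same type; consumers need not know
# which engine produced a given annotation.
a = DiseaseMention(begin=10, end=18, cui="C0004096")
b = DiseaseMention(begin=42, end=50, cui="C0011849", negated=True)
assert isinstance(a, Annotation) and a.cui == "C0004096"
```

    Encoding the clinical concept (rather than just the surface string) in the type is the sketch's analogue of the CEM-based "deep semantics" the paper targets.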

  3. Successful Architectural Knowledge Sharing: Beware of Emotions

    NASA Astrophysics Data System (ADS)

    Poort, Eltjo R.; Pramono, Agung; Perdeck, Michiel; Clerc, Viktor; van Vliet, Hans

    This chapter presents the analysis and key findings of a survey on architectural knowledge sharing. The responses of 97 architects working in the Dutch IT Industry were analyzed by correlating practices and challenges with project size and success. Impact mechanisms between project size, project success, and architectural knowledge sharing practices and challenges were deduced based on reasoning, experience and literature. We find that architects run into numerous and diverse challenges sharing architectural knowledge, but that the only challenges that have a significant impact are the emotional challenges related to interpersonal relationships. Thus, architects should be careful when dealing with emotions in knowledge sharing.

  4. Low-power Gm-C filter employing current-reuse differential difference amplifiers

    DOE PAGES

    Mincey, John S.; Briseno-Vidrios, Carlos; Silva-Martinez, Jose; ...

    2016-08-10

    This study deals with the design of low-power, high-performance, continuous-time filters. The proposed OTA architecture employs current-reuse differential difference amplifiers in order to produce more power-efficient Gm-C filter solutions. To demonstrate this, a 6th-order low-pass Butterworth filter was designed in 0.18 µm CMOS, achieving a 65-MHz -3-dB frequency, an in-band input-referred third-order intercept point of 12.0 dBm, and an input-referred noise density of 40 nV/√Hz, while consuming only 8.07 mW from a 1.8 V supply and occupying a total chip area of 0.21 mm², with a power consumption of only 1.19 mW per pole.

  5. Low-power Gm-C filter employing current-reuse differential difference amplifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mincey, John S.; Briseno-Vidrios, Carlos; Silva-Martinez, Jose

    This study deals with the design of low-power, high-performance, continuous-time filters. The proposed OTA architecture employs current-reuse differential difference amplifiers in order to produce more power-efficient Gm-C filter solutions. To demonstrate this, a 6th-order low-pass Butterworth filter was designed in 0.18 µm CMOS, achieving a 65-MHz -3-dB frequency, an in-band input-referred third-order intercept point of 12.0 dBm, and an input-referred noise density of 40 nV/√Hz, while consuming only 8.07 mW from a 1.8 V supply and occupying a total chip area of 0.21 mm², with a power consumption of only 1.19 mW per pole.

  6. On patterns and re-use in bioinformatics databases.

    PubMed

    Bell, Michael J; Lord, Phillip

    2017-09-01

    As the quantity of data being deposited into biological databases continues to increase, it becomes ever more vital to develop methods that enable us to understand this data and ensure that the knowledge is correct. It is widely held that data percolates between different databases, which causes particular concerns for data correctness; if this percolation occurs, incorrect data in one database may eventually affect many others while, conversely, corrections in one database may fail to percolate to others. In this paper, we test this widely held belief by directly looking for sentence reuse both within and between databases. Further, we investigate patterns of how sentences are reused over time. Finally, we consider the limitations of this form of analysis and the implications that this may have for bioinformatics database design. We show that reuse of annotation is common within many different databases, and that there is also a detectable level of reuse between databases. In addition, we show that there are patterns of reuse that have previously been shown to be associated with percolation errors. Analytical software is available on request. phillip.lord@newcastle.ac.uk.

  7. On patterns and re-use in bioinformatics databases

    PubMed Central

    Bell, Michael J.; Lord, Phillip

    2017-01-01

    Motivation: As the quantity of data being deposited into biological databases continues to increase, it becomes ever more vital to develop methods that enable us to understand this data and ensure that the knowledge is correct. It is widely held that data percolates between different databases, which causes particular concerns for data correctness; if this percolation occurs, incorrect data in one database may eventually affect many others while, conversely, corrections in one database may fail to percolate to others. In this paper, we test this widely held belief by directly looking for sentence reuse both within and between databases. Further, we investigate patterns of how sentences are reused over time. Finally, we consider the limitations of this form of analysis and the implications that this may have for bioinformatics database design. Results: We show that reuse of annotation is common within many different databases, and that there is also a detectable level of reuse between databases. In addition, we show that there are patterns of reuse that have previously been shown to be associated with percolation errors. Availability and implementation: Analytical software is available on request. Contact: phillip.lord@newcastle.ac.uk PMID:28525546
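    The sentence-reuse detection described above can be sketched in a few lines: normalize free-text annotation into sentences and intersect the sentence sets of two records. The toy annotation strings below are invented; the actual study works across whole databases and also tracks reuse over time.

```python
# Sketch of cross-database sentence-reuse detection. Annotation text is
# split into sentences, normalized, and compared as sets.
import re

def sentences(text):
    """Lower-cased sentence set, splitting on end punctuation."""
    return {s.strip().lower()
            for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()}

# Toy annotation records from two hypothetical databases.
db_a = "Binds ATP. Involved in cell cycle regulation."
db_b = "Involved in cell cycle regulation. Membrane protein."

shared = sentences(db_a) & sentences(db_b)   # reused between databases
print(shared)  # → {'involved in cell cycle regulation.'}
```

    A shared sentence is evidence of percolation: a correction made to it in one database would need to reach every other database that reuses it.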

  8. A component-based problem list subsystem for the HOLON testbed. Health Object Library Online.

    PubMed Central

    Law, V.; Goldberg, H. S.; Jones, P.; Safran, C.

    1998-01-01

    One of the deliverables of the HOLON (Health Object Library Online) project is the specification of a reference architecture for clinical information systems that facilitates the development of a variety of discrete, reusable software components. One of the challenges facing the HOLON consortium is determining what kinds of components can be made available in a library for developers of clinical information systems. To further explore the use of component architectures in the development of reusable clinical subsystems, we have incorporated ongoing work in the development of enterprise terminology services into a Problem List subsystem for the HOLON testbed. We have successfully implemented a set of components using CORBA (Common Object Request Broker Architecture) and Java distributed object technologies that provide a functional problem list application and UMLS-based "Problem Picker." Through this development, we have overcome a variety of obstacles characteristic of rapidly emerging technologies, and have identified architectural issues necessary to scale these components for use and reuse within an enterprise clinical information system. PMID:9929252

  9. A component-based, distributed object services architecture for a clinical workstation.

    PubMed

    Chueh, H C; Raila, W F; Pappas, J J; Ford, M; Zatsman, P; Tu, J; Barnett, G O

    1996-01-01

    Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems as well as newly designed software. We describe one approach to an architecture for a clinical workstation application which is based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components which can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces.

  10. A component-based, distributed object services architecture for a clinical workstation.

    PubMed Central

    Chueh, H. C.; Raila, W. F.; Pappas, J. J.; Ford, M.; Zatsman, P.; Tu, J.; Barnett, G. O.

    1996-01-01

    Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems as well as newly designed software. We describe one approach to an architecture for a clinical workstation application which is based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components which can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces. PMID:8947744

  11. Assessment of modularity architecture for recovery process of electric vehicle in supporting sustainable design

    NASA Astrophysics Data System (ADS)

    Baroroh, D. K.; Alfiah, D.

    2018-05-01

    The electric vehicle is one innovation for reducing vehicle pollution. Nevertheless, it still poses problems, especially at the disposal stage. To support a product design and development strategy based on sustainable design, i.e., to address the disposal-stage problem, the modularity architecture of the electric vehicle must be assessed with respect to the recovery process. This research used the Design Structure Matrix (DSM) approach to determine the interactions among components, and assessed the modularity architecture by calculating three variables: Module Independence (MI), Module Similarity (MS), and Modularity for End of Life Stage (MEOL). The results show that the existing electric vehicle design has an architecture with a high modularity value for the recovery process at the disposal stage. Accordingly, it can be reused and recycled at the component or module level without a disassembly process, supporting an environmentally friendly product (sustainable design) and reducing disassembly cost.
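    The abstract does not give the exact MI, MS, and MEOL formulas, but the flavor of a DSM-based modularity assessment can be sketched with a simple illustrative ratio of within-module to total interactions. The metric, the toy DSM, and the module assignment below are assumptions for illustration, not the paper's definitions.

```python
# Illustrative DSM modularity check: what fraction of component
# interactions stay inside a module? (Not the paper's MI/MS/MEOL metrics.)

def module_independence(dsm, modules):
    """dsm[i][j] = 1 if components i and j interact;
    modules[i] = module id of component i."""
    internal = total = 0
    n = len(dsm)
    for i in range(n):
        for j in range(i + 1, n):
            if dsm[i][j]:
                total += 1
                if modules[i] == modules[j]:
                    internal += 1
    return internal / total if total else 1.0

# 4 components in two 2-component modules; one interaction crosses modules.
dsm = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
modules = [0, 0, 1, 1]
print(module_independence(dsm, modules))  # 2 of 3 interactions are internal
```

    A ratio near 1 means modules can be pulled out for reuse or recycling with little disassembly of the rest of the vehicle, which is the property the assessment is after.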

  12. A component-based problem list subsystem for the HOLON testbed. Health Object Library Online.

    PubMed

    Law, V; Goldberg, H S; Jones, P; Safran, C

    1998-01-01

    One of the deliverables of the HOLON (Health Object Library Online) project is the specification of a reference architecture for clinical information systems that facilitates the development of a variety of discrete, reusable software components. One of the challenges facing the HOLON consortium is determining what kinds of components can be made available in a library for developers of clinical information systems. To further explore the use of component architectures in the development of reusable clinical subsystems, we have incorporated ongoing work in the development of enterprise terminology services into a Problem List subsystem for the HOLON testbed. We have successfully implemented a set of components using CORBA (Common Object Request Broker Architecture) and Java distributed object technologies that provide a functional problem list application and UMLS-based "Problem Picker." Through this development, we have overcome a variety of obstacles characteristic of rapidly emerging technologies, and have identified architectural issues necessary to scale these components for use and reuse within an enterprise clinical information system.

  13. Strategies for P2P connectivity in reconfigurable converged wired/wireless access networks.

    PubMed

    Puerto, Gustavo; Mora, José; Ortega, Beatriz; Capmany, José

    2010-12-06

    This paper presents different strategies for defining the architecture of Radio-over-Fiber (RoF) access networks enabling peer-to-peer (P2P) functionalities. The architectures fully exploit the flexibility of a wavelength router based on the feedback configuration of an Arrayed Waveguide Grating (AWG) and an optical switch to broadcast P2P services among diverse infrastructures, featuring dynamic channel allocation and enabling an optical platform for 3G-and-beyond wireless backhaul requirements. The first architecture incorporates a tunable laser to generate a dedicated wavelength for P2P purposes, and the second takes advantage of reused wavelengths to enable P2P connectivity among Optical Network Units (ONUs) or Base Stations (BSs). While these two approaches allow P2P connectivity on a one-at-a-time basis (1:1), the third architecture enables the broadcasting of P2P sessions among different ONUs or BSs at the same time (1:M). Experimental assessment of the proposed architectures shows approximately 0.6% Error Vector Magnitude (EVM) degradation for wireless services and, on average, a 1 dB penalty at a Bit Error Rate (BER) of 1 × 10⁻¹² for wired baseband services.

  14. A new HLA-based distributed control architecture for agricultural teams of robots in hybrid applications with real and simulated devices or environments.

    PubMed

    Nebot, Patricio; Torres-Sospedra, Joaquín; Martínez, Rafael J

    2011-01-01

    The control architecture is one of the most important parts of agricultural robotics and other robotic systems, and its importance increases when the system involves a group of heterogeneous robots that must cooperate to achieve a global goal. This paper introduces a new control architecture for groups of robots in charge of maintenance tasks in agricultural environments. Important features such as scalability, code reuse, hardware abstraction, and data distribution were considered in the design of the new architecture, which also allows coordination and cooperation among the different elements in the system. These concepts are realized by integrating the network-oriented device server Player, the Java Agent Development Framework (JADE), and the High Level Architecture (HLA). HLA can be considered the most important part, because it not only provides data distribution and implicit communication among the parts of the system but also allows simultaneous operation with simulated and real entities, thus enabling the use of hybrid systems in the development of applications.

  15. Effective Tutorial Ontology Modeling on Organic Rice Farming for Non-Science & Technology Educated Farmers Using Knowledge Engineering

    ERIC Educational Resources Information Center

    Yanchinda, Jirawit; Chakpitak, Nopasit; Yodmongkol, Pitipong

    2015-01-01

    Knowledge of the appropriate technologies for sustainable development projects has encouraged grass-roots development, which has in turn promoted sustainable and successful community development, a requirement of which is to share and reuse this knowledge effectively. This research aims to propose a tutorial ontology effectiveness modeling on organic…

  16. A knowledge-based decision support system in bioinformatics: an application to protein complex extraction

    PubMed Central

    2013-01-01

    Background We introduce a Knowledge-based Decision Support System (KDSS) to address the problem of protein complex extraction. Using a Knowledge Base (KB) encoding expertise about the proposed scenario, our KDSS is able to suggest both strategies and tools according to the features of the input dataset. Our system provides a navigable workflow for the current experiment, and furthermore it offers support in the configuration and running of every processing component of that workflow. This last feature makes our system a crossover between classical DSSs and Workflow Management Systems. Results We briefly present the KDSS's architecture and the basic concepts used in the design of the knowledge base and the reasoning component. The system is then tested using a subset of the Saccharomyces cerevisiae protein-protein interaction dataset. We used this subset because it has been well studied in the literature by several research groups in the field of complex extraction; in this way we could easily compare the results obtained through our KDSS with theirs. Our system suggests both a preprocessing and a clustering strategy, and for each of them it proposes, and eventually runs, suitable algorithms. Our system's final results are then composed of a workflow of tasks, which can be reused for other experiments, and the specific numerical results for that particular trial. Conclusions The proposed approach, using the KDSS's knowledge base, provides a novel workflow that gives the best results with regard to the other workflows produced by the system. This workflow and its numeric results have been compared with other approaches to PPI network analysis found in the literature, offering similar results. PMID:23368995
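    The suggestion step of such a KDSS can be caricatured as rules that map dataset features to workflow actions. The rules, thresholds, and algorithm names below are invented for illustration and are not taken from the paper's knowledge base.

```python
# Toy knowledge-based suggestion step: each KB rule pairs a condition on
# dataset features with a workflow action; matching actions form the
# suggested workflow. Contents are illustrative, not the paper's KB.

KB = [
    (lambda d: d["weighted"],          "preprocess: keep edge weights"),
    (lambda d: not d["weighted"],      "preprocess: binarize interactions"),
    (lambda d: d["n_proteins"] > 1000, "cluster: MCL (scales to large graphs)"),
    (lambda d: d["n_proteins"] <= 1000, "cluster: clique percolation"),
]

def suggest_workflow(dataset):
    """Return the ordered list of actions whose conditions hold."""
    return [action for condition, action in KB if condition(dataset)]

workflow = suggest_workflow({"weighted": True, "n_proteins": 4389})
print(workflow)
```

    Keeping strategy selection in declarative rules, rather than hard-coded in the pipeline, is what lets the resulting workflow be inspected, navigated, and reused for other experiments.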

  17. The presence of opportunistic pathogens, Legionella spp., L. pneumophila and Mycobacterium avium complex, in South Australian reuse water distribution pipelines.

    PubMed

    Whiley, H; Keegan, A; Fallowfield, H; Bentham, R

    2015-06-01

    Water reuse has become increasingly important for sustainable water management. Currently, its application is primarily constrained by the potential health risks. Presently there is limited knowledge regarding the presence and fate of opportunistic pathogens along reuse water distribution pipelines. In this study opportunistic human pathogens Legionella spp., L. pneumophila and Mycobacterium avium complex were detected using real-time polymerase chain reaction along two South Australian reuse water distribution pipelines at maximum concentrations of 10⁵, 10³ and 10⁵ copies/mL, respectively. During the summer period of sampling the concentration of all three organisms significantly increased (P < 0.05) along the pipeline, suggesting multiplication and hence viability. No seasonality in the decrease in chlorine residual along the pipelines was observed. This suggests that the combination of reduced chlorine residual and increased water temperature promoted the presence of these opportunistic pathogens.

  18. Case-Based Capture and Reuse of Aerospace Design Rationale

    NASA Technical Reports Server (NTRS)

    Leake, David B.

    2001-01-01

    The goal of this project was to apply artificial intelligence techniques to facilitate the capture and reuse of aerospace design rationale. The project combined case-based reasoning (CBR) and concept maps (CMaps) to develop methods for capturing, organizing, and interactively accessing records of experiences encapsulating the methods and rationale underlying expert aerospace design, in order to bring the captured knowledge to bear in support of future reasoning. The project's results contribute both principles and methods for effective design-aiding systems that aid the capture of and access to useful design knowledge. The project has been guided by the tenets that design-aiding systems must: (1) Leverage a designer's knowledge, rather than attempting to replace it; (2) Be able to reflect different designers' differing conceptualizations of the design task, and to clarify those conceptualizations to others; (3) Include capabilities to capture information both by interactive knowledge modeling and during normal use; and (4) Integrate into normal designer tasks as naturally and unobtrusively as possible.

  19. Modular Architecture for Integrated Model-Based Decision Support.

    PubMed

    Gaebel, Jan; Schreiber, Erik; Oeser, Alexander; Oeltze-Jafra, Steffen

    2018-01-01

    Model-based decision support systems promise to be a valuable addition to oncological treatments and the implementation of personalized therapies. For the integration and sharing of decision models, the involved systems must be able to communicate with each other. In this paper, we propose a modularized architecture of dedicated systems for the integration of probabilistic decision models into existing hospital environments. These systems interconnect via web services and provide model sharing and processing capabilities for clinical information systems. Along the lines of IHE integration profiles from other disciplines and the meaningful reuse of routinely recorded patient data, our approach aims for the seamless integration of decision models into hospital infrastructure and the physicians' daily work.

  20. Architectural Visualization of C/C++ Source Code for Program Comprehension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panas, T; Epperly, T W; Quinlan, D

    2006-09-01

    Structural and behavioral visualization of large-scale legacy systems to aid program comprehension is still a major challenge. The challenge is even greater when applications are implemented in flexible and expressive languages such as C and C++. In this paper, we consider visualization of static and dynamic aspects of large-scale scientific C/C++ applications. For our investigation, we reuse and integrate specialized analysis and visualization tools. Furthermore, we present a novel layout algorithm that permits a compressive architectural view of a large-scale software system. Our layout is unique in that it allows traditional program visualizations, i.e., graph structures, to be seen in relation to the application's file structure.
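    The core grouping idea behind such a layout can be sketched by bucketing call-graph nodes by the source file that defines them, so the graph is drawn in relation to the application's file structure. The paths and call edges below are invented examples, not output of the authors' tools.

```python
# Sketch: group call-graph nodes by defining source file, the grouping a
# file-structure-aware layout would use. Edges are invented examples.
from collections import defaultdict

# (caller, callee) pairs; each node is "path:function".
calls = [("src/io/reader.c:parse", "src/util/buf.c:grow"),
         ("src/io/reader.c:parse", "src/io/reader.c:next_token"),
         ("src/util/buf.c:grow",   "src/util/buf.c:resize")]

groups = defaultdict(set)
for caller, callee in calls:
    for node in (caller, callee):
        groups[node.split(":")[0]].add(node)   # bucket by file path

for path in sorted(groups):
    print(path, "->", sorted(groups[path]))
```

    A layout engine can then draw each file (or directory) as a cluster and compress within-cluster edges, which is what makes an architectural view of a large system legible.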

  1. Renaissance: A revolutionary approach for providing low-cost ground data systems

    NASA Technical Reports Server (NTRS)

    Butler, Madeline J.; Perkins, Dorothy C.; Zeigenfuss, Lawrence B.

    1996-01-01

    NASA is shifting its attention from large missions to a greater number of smaller missions with reduced development schedules and budgets. In this context, the Renaissance Mission Operations and Data Systems Directorate systems engineering process is presented. The aim of the Renaissance approach is to improve system performance, reduce cost and schedules, and meet specific customer needs. The approach includes: the early involvement of the users to define the mission requirements and system architectures; the streamlining of management processes; the development of a flexible cost estimation capability; and the ability to insert technology. Renaissance-based systems demonstrate significant reuse of commercial off-the-shelf building blocks in an integrated system architecture.

  2. Advanced Software Development Workstation Project

    NASA Technical Reports Server (NTRS)

    Lee, Daniel

    1989-01-01

    The Advanced Software Development Workstation Project, funded by Johnson Space Center, is investigating knowledge-based techniques for software reuse in NASA software development projects. Two prototypes have been demonstrated and a third is now in development. The approach is to build a foundation that provides passive reuse support, add a layer that uses domain-independent programming knowledge, add a layer that supports the acquisition of domain-specific programming knowledge to provide active support, and enhance maintainability and modifiability through an object-oriented approach. The development of new application software would use specification-by-reformulation, based on a cognitive theory of retrieval from very long-term memory in humans, and using an Ada code library and an object base. Current tasks include enhancements to the knowledge representation of Ada packages and abstract data types, extensions to support Ada package instantiation knowledge acquisition, integration with Ada compilers and relational databases, enhancements to the graphical user interface, and demonstration of the system with a NASA contractor-developed trajectory simulation package. Future work will focus on investigating issues involving scale-up and integration.

  3. Knowledge Production in an Architectural Practice and a University Architectural Department

    ERIC Educational Resources Information Center

    Winberg, Chris

    2006-01-01

    Processes of knowledge production by professional architects and architects-in-training were studied and compared. Both professionals and students were involved in the production of knowledge about the architectural heritage of historical buildings in Cape Town. In a study of the artefacts produced, observations of the processes by means of which…

  4. A Knowledge Conversion Model Based on the Cognitive Load Theory for Architectural Design Education

    ERIC Educational Resources Information Center

    Wu, Yun-Wu; Liao, Shin; Wen, Ming-Hui; Weng, Kuo-Hua

    2017-01-01

    The education of architectural design requires balanced curricular arrangements of respectively theoretical knowledge and practical skills to really help students build their knowledge structures, particularly helping them in solving the problems of cognitive load. The purpose of this study is to establish an architectural design knowledge…

  5. Cross-Platform Development Techniques for Mobile Devices

    DTIC Science & Technology

    2013-09-01

    many other platforms including Windows, Blackberry, and Symbian. Each of these platforms has its own distinct architecture and programming language...sales of iPhones and the increasing use of Android-based devices have forced less successful competitors such as Microsoft, Blackberry, and Symbian... Blackberry and Windows Phone are planned [12] in this tool's attempt to reuse code with a unified JavaScript API while at the same time supporting unique

  6. Multi-resolution extension for transmission of geodata in a mobile context

    NASA Astrophysics Data System (ADS)

    Follin, Jean-Michel; Bouju, Alain; Bertrand, Frédéric; Boursier, Patrice

    2005-03-01

    A solution is proposed for the management of multi-resolution vector data in a mobile spatial information visualization system. The client-server architecture and the models of data and transfer of the system are presented first. The aim of this system is to reduce data exchanged between client and server by reusing data already present on the client side. Then, an extension of this system to multi-resolution data is proposed. Our solution is based on the use of increments in a multi-scale database. A database architecture where data sets for different predefined scales are precomputed and stored on the server side is adopted. In this model, each object representing the same real world entities at different levels of detail has to be linked beforehand. Increments correspond to the difference between two datasets with different levels of detail. They are transmitted in order to increase (or decrease) the detail to the client upon request. They include generalization and refinement operators allowing transitions between the different levels. Finally, a framework suited to the transfer of multi-resolution data in a mobile context is presented. This allows reuse of data locally available at different levels of detail and, in this way, reduces the amount of data transferred between client and server.
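
    The increment mechanism described above can be sketched in a few lines. The example below is illustrative only (the data format and function names are assumptions, not the authors' implementation): it diffs two precomputed levels of detail into an increment and applies that increment to refine data the client already holds.

```python
# Sketch (assumption): each level of detail is a dict mapping feature id -> geometry.
# An "increment" records what must change to move between two levels of detail,
# so the client can refine data it already has instead of re-downloading it.

def compute_increment(coarse, fine):
    """Diff two levels of detail into refinement operations."""
    added = {fid: geom for fid, geom in fine.items() if fid not in coarse}
    removed = [fid for fid in coarse if fid not in fine]
    modified = {fid: geom for fid, geom in fine.items()
                if fid in coarse and coarse[fid] != geom}
    return {"add": added, "remove": removed, "modify": modified}

def apply_increment(dataset, inc):
    """Refine a client-side dataset using a transmitted increment."""
    result = dict(dataset)
    for fid in inc["remove"]:
        result.pop(fid, None)
    result.update(inc["add"])
    result.update(inc["modify"])
    return result
```

    Transmitting only the increment rather than the full fine-grained dataset is what reduces the client-server data exchange.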

  7. Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deveci, Mehmet; Trott, Christian Robert; Rajamanickam, Sivasankaran

    Sparse matrix-matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high-performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
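
    As a concrete illustration of the accumulator question the abstract raises, the sketch below implements row-wise sparse matrix-matrix multiplication (Gustavson-style) using a hash-map accumulator per output row. The matrix format and names are assumptions for illustration, not the kkSpGEMM code.

```python
# Matrices are kept in a simple nested-dict form: {row: {col: value}},
# with absent entries treated as zero. For each output row, a dict serves
# as the hash-map accumulator the paper compares against dense alternatives.

def spgemm(A, B):
    C = {}
    for i, a_row in A.items():
        acc = {}                      # hash-map accumulator for row i of C
        for k, a_ik in a_row.items():
            for j, b_kj in B.get(k, {}).items():
                acc[j] = acc.get(j, 0.0) + a_ik * b_kj
        if acc:
            C[i] = acc
    return C
```

    The choice of accumulator matters because each output row is touched many times: a dense accumulator wastes memory on very sparse rows, while a hash-based one pays per-insert overhead, which is exactly the trade-off the paper measures.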

  8. Multi-threaded Sparse Matrix-Matrix Multiplication for Many-Core and GPU Architectures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deveci, Mehmet; Rajamanickam, Sivasankaran; Trott, Christian Robert

    Sparse matrix-matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high-performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.

  9. Novel WRM-based architecture of hybrid PON featuring online access and full-fiber-fault protection for smart grid

    NASA Astrophysics Data System (ADS)

    Li, Xingfeng; Gan, Chaoqin; Liu, Zongkang; Yan, Yuqi; Qiao, HuBao

    2018-01-01

    In this paper, a novel architecture of hybrid PON for smart grid is proposed by introducing a wavelength-routing module (WRM). Using conventional passive optical components, a WRM with M ports is designed. The symmetry and passivity of the WRM make it easy to integrate and inexpensive in practice. Via the WRM, two types of networks based on different ONU-interconnection schemes can realize online access. Using optical switches and interconnecting fibers, full-fiber-fault protection and dynamic bandwidth allocation are realized in these networks. With the help of amplitude modulation, DPSK modulation and RSOA technology, wavelength triple-reuse is achieved. By injecting signals into the left and right branches of the access ring simultaneously, the transmission delay is decreased. Finally, performance analysis and simulation of the network verify the feasibility of the proposed architecture.

  10. A RESTful Service Oriented Architecture for Science Data Processing

    NASA Astrophysics Data System (ADS)

    Duggan, B.; Tilmes, C.; Durbin, P.; Masuoka, E.

    2012-12-01

    The Atmospheric Composition Processing System is an implementation of a RESTful Service Oriented Architecture which handles incoming data from the Ozone Monitoring Instrument and the Ozone Monitoring and Profiler Suite aboard the Aura and NPP spacecraft, respectively. The system has been built entirely from open source components, such as Postgres, Perl, and SQLite, and has leveraged the vast resources of the Comprehensive Perl Archive Network (CPAN). The modular design of the system also allows many of the components to be easily released, integrated into the CPAN ecosystem, and reused independently. At minimal expense, the CPAN infrastructure and community provide peer review, feedback, and continuous testing in a wide variety of environments and architectures. A well-defined set of conventions also facilitates dependency management, packaging, and distribution of code. Test-driven development also provides a way to ensure stability despite a continuously changing base of dependencies.

  11. An Evaluation of the High Level Architecture (HLA) as a Framework for NASA Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reid, Michael R.; Powers, Edward I. (Technical Monitor)

    2000-01-01

    The High Level Architecture (HLA) is a current US Department of Defense and an industry (IEEE-1516) standard architecture for modeling and simulations. It provides a framework and set of functional rules and common interfaces for integrating separate and disparate simulators into a larger simulation. The goal of the HLA is to reduce software costs by facilitating the reuse of simulation components and by providing a runtime infrastructure to manage the simulations. In order to evaluate the applicability of the HLA as a technology for NASA space mission simulations, a Simulations Group at Goddard Space Flight Center (GSFC) conducted a study of the HLA and developed a simple prototype HLA-compliant space mission simulator. This paper summarizes the prototyping effort and discusses the potential usefulness of the HLA in the design and planning of future NASA space missions with a focus on risk mitigation and cost reduction.

  12. Evolutionary Telemetry and Command Processor (TCP) architecture

    NASA Technical Reports Server (NTRS)

    Schneider, John R.

    1992-01-01

    A low-cost, modular, high-performance, and compact Telemetry and Command Processor (TCP) is being built as the foundation of command and data handling subsystems for the next generation of satellites. The TCP product line will support command and telemetry requirements for small to large spacecraft and from low- to high-rate data transmission. It is compatible with the latest TDRSS, STDN and SGLS transponders and provides CCSDS protocol communications in addition to standard TDM formats. Its high-performance computer provides computing resources for hosted flight software. Layered and modular software provides common services using standardized interfaces to applications, thereby enhancing software re-use, transportability, and interoperability. The TCP architecture is based on existing standards, distributed networking, distributed and open system computing, and packet technology. The first TCP application is planned for the '94 SDIO SPAS 3 mission. The architecture enables rapid tailoring of functions, thereby reducing the costs and schedules of systems developed for individual spacecraft missions.

  13. Diamond Eye: a distributed architecture for image data mining

    NASA Astrophysics Data System (ADS)

    Burl, Michael C.; Fowlkes, Charless; Roden, Joe; Stechert, Andre; Mukhtar, Saleem

    1999-02-01

    Diamond Eye is a distributed software architecture, which enables users (scientists) to analyze large image collections by interacting with one or more custom data mining servers via a Java applet interface. Each server is coupled with an object-oriented database and a computational engine, such as a network of high-performance workstations. The database provides persistent storage and supports querying of the 'mined' information. The computational engine provides parallel execution of expensive image processing, object recognition, and query-by-content operations. Key benefits of the Diamond Eye architecture are: (1) the design promotes trial evaluation of advanced data mining and machine learning techniques by potential new users (all that is required is to point a web browser to the appropriate URL), (2) software infrastructure that is common across a range of science mining applications is factored out and reused, and (3) the system facilitates closer collaborations between algorithm developers and domain experts.

  14. Rapidly Re-Configurable Flight Simulator Tools for Crew Vehicle Integration Research and Design

    NASA Technical Reports Server (NTRS)

    Schutte, Paul C.; Trujillo, Anna; Pritchett, Amy R.

    2000-01-01

    While simulation is a valuable research and design tool, the time and difficulty required to create new simulations (or re-use existing simulations) often limits their application. This report describes the design of the software architecture for the Reconfigurable Flight Simulator (RFS), which provides a robust simulation framework that allows the simulator to fulfill multiple research and development goals. The core of the architecture provides the interface standards for simulation components, registers and initializes components, and handles the communication between simulation components. The simulation components are each a pre-compiled library 'plug-in' module. This modularity allows independent development and sharing of individual simulation components. Additional interfaces can be provided through the use of Object Data/Method Extensions (OD/ME). RFS provides a programmable run-time environment for real-time access and manipulation, and has networking capabilities using the High Level Architecture (HLA).
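
    The plug-in pattern the RFS core embodies can be sketched as follows; the class and method names here are hypothetical illustrations, not the actual RFS interfaces. Components conform to a common interface, register with the core, and the core handles communication between them.

```python
# Minimal sketch of a component registry with a shared interface standard:
# the core registers plug-in components and routes messages between them.

class SimComponent:
    """Interface every plug-in simulation component must implement."""
    name = "base"

    def receive(self, message):
        raise NotImplementedError

class SimCore:
    """Registers components and delivers messages to all of them."""

    def __init__(self):
        self._components = {}

    def register(self, component):
        self._components[component.name] = component

    def publish(self, message):
        delivered = []
        for comp in self._components.values():
            comp.receive(message)
            delivered.append(comp.name)
        return delivered
```

    Because the core only depends on the interface, components can be developed, compiled, and shared independently, which is the modularity benefit the abstract describes.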

  15. Re-Using of the Historical Buildings in the Context of Sustainability: An Architectural Design Studio Study on Old Girls Teacher Training School

    NASA Astrophysics Data System (ADS)

    Ulusoy, M.; Erdogan, E.; Erdogan, H. A.; Oral, M.

    2013-07-01

    Refunctioning is a widely used method for protecting historical structures. However, in architectural education, refunctioning historical structures and producing new designs within a historical pattern receive little attention in design studios. Even in schools where these topics are more popular, the connection between the preservation-oriented studio and the design studio is quite weak. In this study, refunctioning was addressed as a design studio topic, focusing on the old girls' teacher training school and its immediate surroundings. The primary objective of this design studio was to increase architecture students' awareness, at visual and perceptual levels, of project designs in historical patterns. Within the context of this manuscript, the experiences gained during the design studio process are conveyed and discussed.

  16. Rapidly Re-Configurable Flight Simulator Tools for Crew Vehicle Integration Research and Design

    NASA Technical Reports Server (NTRS)

    Pritchett, Amy R.

    2002-01-01

    While simulation is a valuable research and design tool, the time and difficulty required to create new simulations (or re-use existing simulations) often limits their application. This report describes the design of the software architecture for the Reconfigurable Flight Simulator (RFS), which provides a robust simulation framework that allows the simulator to fulfill multiple research and development goals. The core of the architecture provides the interface standards for simulation components, registers and initializes components, and handles the communication between simulation components. The simulation components are each a pre-compiled library 'plugin' module. This modularity allows independent development and sharing of individual simulation components. Additional interfaces can be provided through the use of Object Data/Method Extensions (OD/ME). RFS provides a programmable run-time environment for real-time access and manipulation, and has networking capabilities using the High Level Architecture (HLA).

  17. Advancing data reuse in phyloinformatics using an ontology-driven Semantic Web approach.

    PubMed

    Panahiazar, Maryam; Sheth, Amit P; Ranabahu, Ajith; Vos, Rutger A; Leebens-Mack, Jim

    2013-01-01

    Phylogenetic analyses can resolve historical relationships among genes, organisms or higher taxa. Understanding such relationships can elucidate a wide range of biological phenomena, including, for example, the importance of gene and genome duplications in the evolution of gene function, the role of adaptation as a driver of diversification, or the evolutionary consequences of biogeographic shifts. Phyloinformaticists are developing data standards, databases and communication protocols (e.g. Application Programming Interfaces, APIs) to extend the accessibility of gene trees, species trees, and the metadata necessary to interpret these trees, thus enabling researchers across the life sciences to reuse phylogenetic knowledge. Specifically, Semantic Web technologies are being developed to make phylogenetic knowledge interpretable by web agents, thereby enabling intelligently automated, high-throughput reuse of results generated by phylogenetic research. This manuscript describes an ontology-driven, semantic problem-solving environment for phylogenetic analyses and introduces artefacts that can support phyloinformatic efforts to improve the accessibility of trees and their underlying metadata. PhylOnt is an extensible ontology with concepts describing tree types and tree-building methodologies, including estimation methods, models and programs. In addition, we present the PhylAnt platform for annotating scientific articles and NeXML files with PhylOnt concepts. The novelty of this work is the annotation of NeXML files and phylogeny-related documents with the PhylOnt ontology. This approach advances data reuse in phyloinformatics.

  18. Image Understanding Architecture

    DTIC Science & Technology

    1991-09-01

    architecture to support real-time, knowledge-based image understanding, and develop the software support environment that will be needed to utilize...NUMBER OF PAGES Image Understanding Architecture, Knowledge-Based Vision, AI Real-Time Computer Vision, Software Simulator, Parallel Processor IL PRICE...information. In addition to sensory and knowledge-based processing it is useful to introduce a level of symbolic processing. Thus, vision researchers

  19. Life's Lessons in the Lab: A Summer of Learning from Undergraduate Research Experiences

    ERIC Educational Resources Information Center

    Nadelson, Louis S.; Warner, Don; Brown, Eric

    2015-01-01

    Research experiences for undergraduates (REUs) seek to increase the participating students' knowledge and perceptions of scientific research through engagement in laboratory research and related activities. Various REU outcomes have been investigated including influence on participants' content knowledge, career plans, and general perceptions of…

  20. An Ontology for Learning Services on the Shop Floor

    ERIC Educational Resources Information Center

    Ullrich, Carsten

    2016-01-01

    An ontology expresses a common understanding of a domain that serves as a basis of communication between people or systems, and enables knowledge sharing, reuse of domain knowledge, reasoning and thus problem solving. In Technology-Enhanced Learning, especially in Intelligent Tutoring Systems and Adaptive Learning Environments, ontologies serve as…

  1. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations

    PubMed Central

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts. PMID:25993414

  2. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations.

    PubMed

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts.

  3. The NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.; Paulson, Sharon S.; Binkley, Robert L.; Kellogg, Yvonne D.; Beaumont, Chris J.; Schmunk, Robert B.; Kurtz, Michael J.; Accomazzi, Alberto

    1995-01-01

    The National Aeronautics and Space Act of 1958 established NASA and charged it to "provide for the widest practicable and appropriate dissemination of information concerning its activities and the results thereof." The search for innovative methods to distribute NASA's information led a grass-roots team to create the NASA Technical Report Server (NTRS), which uses the World Wide Web and other popular Internet-based information systems as search engines. The NTRS is an inter-center effort which provides uniform access to various distributed publication servers residing on the Internet. Users have immediate desktop access to technical publications from NASA centers and institutes. The NTRS is comprised of several units, some constructed especially for inclusion in NTRS, and others that are existing NASA publication services that NTRS reuses. This paper presents the NTRS architecture, usage metrics, and the lessons learned while implementing and maintaining the service. The NTRS is largely constructed with freely available software running on existing hardware, and the resulting additional exposure for the body of literature contained ensures that NASA's institutional knowledge base will continue to receive the widest practicable and appropriate dissemination.

  4. EON: a component-based approach to automation of protocol-directed therapy.

    PubMed Central

    Musen, M A; Tu, S W; Das, A K; Shahar, Y

    1996-01-01

    Provision of automated support for planning protocol-directed therapy requires a computer program to take as input clinical data stored in an electronic patient-record system and to generate as output recommendations for therapeutic interventions and laboratory testing that are defined by applicable protocols. This paper presents a synthesis of research carried out at Stanford University to model the therapy-planning task and to demonstrate a component-based architecture for building protocol-based decision-support systems. We have constructed general-purpose software components that (1) interpret abstract protocol specifications to construct appropriate patient-specific treatment plans; (2) infer from time-stamped patient data higher-level, interval-based, abstract concepts; (3) perform time-oriented queries on a time-oriented patient database; and (4) allow acquisition and maintenance of protocol knowledge in a manner that facilitates efficient processing both by humans and by computers. We have implemented these components in a computer system known as EON. Each of the components has been developed, evaluated, and reported independently. We have evaluated the integration of the components as a composite architecture by implementing T-HELPER, a computer-based patient-record system that uses EON to offer advice regarding the management of patients who are following clinical trial protocols for AIDS or HIV infection. A test of the reuse of the software components in a different clinical domain demonstrated rapid development of a prototype application to support protocol-based care of patients who have breast cancer. PMID:8930854
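
    Component (2) above, which infers interval-based abstract concepts from time-stamped patient data, can be sketched roughly as follows. The function and variable names are hypothetical illustrations; the actual EON temporal-abstraction component is far richer.

```python
# Sketch: collapse time-stamped point measurements into labeled intervals,
# e.g. runs of "low" lab values that a protocol rule can then query.

def abstract_intervals(samples, classify):
    """samples: chronologically sorted (time, value) pairs.
    classify: maps a raw value to an abstract label.
    Returns (start, end, label) intervals of consecutive equal labels."""
    intervals = []
    for t, v in samples:
        label = classify(v)
        if intervals and intervals[-1][2] == label:
            start, _, _ = intervals[-1]
            intervals[-1] = (start, t, label)   # extend the current interval
        else:
            intervals.append((t, t, label))     # open a new interval
    return intervals
```

    A protocol-based planner can then reason over the resulting intervals ("WBC low for more than two weeks") instead of over raw point data.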

  5. Publication, discovery and interoperability of Clinical Decision Support Systems: A Linked Data approach.

    PubMed

    Marco-Ruiz, Luis; Pedrinaci, Carlos; Maldonado, J A; Panziera, Luca; Chen, Rong; Bellika, J Gustav

    2016-08-01

    The high costs involved in the development of Clinical Decision Support Systems (CDSS) make it necessary to share their functionality across different systems and organizations. Service Oriented Architectures (SOA) have been proposed to allow reusing CDSS by encapsulating them in a Web service. However, strong barriers to sharing CDS functionality remain as a consequence of the limited expressiveness of service interfaces. Linked Services are the evolution of the Semantic Web Services paradigm to process Linked Data. They aim to provide semantic descriptions over SOA implementations to overcome the limitations derived from the syntactic nature of Web service technologies. Our objective is to facilitate the publication, discovery and interoperability of CDS services by evolving them into Linked Services that expose their interfaces as Linked Data. We developed methods and models to enhance CDS SOA as Linked Services that define a rich semantic layer based on machine-interpretable ontologies that power their interoperability and reuse. These ontologies provide unambiguous descriptions of CDS service properties to expose them to the Web of Data. We developed models compliant with Linked Data principles to create a semantic representation of the components that compose CDS services. To evaluate our approach, we implemented a set of CDS Linked Services using a Web service definition ontology. The definitions of Web services were linked to the models developed in order to attach unambiguous semantics to the service components. All models were bound to SNOMED-CT and public ontologies (e.g. Dublin Core) in order to provide a lingua franca for exploring them. Discovery and analysis of CDS services based on machine-interpretable models was performed by reasoning over the ontologies built. Linked Services can be used effectively to expose CDS services to the Web of Data by building on current CDS standards. This allows building shared Linked Knowledge Bases that provide machine-interpretable semantics for CDS service descriptions, alleviating the challenges of interoperability and reuse. Linked Services also allow for building 'digital libraries' of distributed CDS services that can be hosted and maintained in different organizations. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. A knowledge-base generating hierarchical fuzzy-neural controller.

    PubMed

    Kandadai, R M; Tien, J M

    1997-01-01

    We present an innovative fuzzy-neural architecture that is able to automatically generate a knowledge base, in an extractable form, for use in hierarchical knowledge-based controllers. The knowledge base is in the form of a linguistic rule base appropriate for a fuzzy inference system. First, we modify Berenji and Khedkar's (1992) GARIC architecture to enable it to automatically generate a knowledge base; a pseudosupervised learning scheme using reinforcement learning and error backpropagation is employed. Next, we further extend this architecture to a hierarchical controller that is able to generate its own knowledge base. Example applications are provided to underscore its viability.
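
    To make the idea of an extractable linguistic rule base concrete, the sketch below evaluates a tiny hand-written fuzzy rule base by weighting crisp consequents with antecedent firing strengths. This is a generic zero-order fuzzy inference sketch with invented membership functions and rules; the actual GARIC-derived learning scheme in the paper differs.

```python
# A linguistic rule base in extractable form: each rule pairs a fuzzy label
# on the input with a crisp consequent, and inference is a weighted average.

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

MEMBERSHIP = {
    "neg":  lambda x: trimf(x, -2, -1, 0),
    "zero": lambda x: trimf(x, -1, 0, 1),
    "pos":  lambda x: trimf(x, 0, 1, 2),
}

RULES = [  # (linguistic label of input "error", crisp output)
    ("neg", -1.0),
    ("zero", 0.0),
    ("pos", 1.0),
]

def infer(error):
    """Weighted average of rule consequents by firing strength."""
    fired = [(MEMBERSHIP[label](error), out) for label, out in RULES]
    total = sum(w for w, _ in fired)
    return sum(w * out for w, out in fired) / total if total else 0.0
```

    The point of such a representation is exactly what the abstract claims: the rule base is human-readable and can be inspected, edited, or generated automatically by a learning scheme.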

  7. Reusable experiment controllers, case studies

    NASA Astrophysics Data System (ADS)

    Buckley, Brian A.; Gaasbeck, Jim Van

    1996-03-01

    Congress has given NASA and the science community a reality check. Tight and ever-shrinking budgets are trimming the fat from many space science programs. No longer can a Principal Investigator (PI) afford to waste development dollars on re-inventing spacecraft controllers, experiment/payload controllers, ground control systems, or test sets. Inheritance of the Ground Support Equipment (GSE) from one program to another is not, by itself, a significant re-use of technology in developing a science mission in these times. Reduction of operational staff and highly autonomous experiments are needed to reduce the sustaining cost of a mission. The re-use of an infrastructure from one program to another is needed to truly attain the required cost and time savings. Interface and Control Systems, Inc. (ICS) has a long history of re-usable software. Navy, Air Force, and NASA programs have benefited from the re-use of a common control system from program to program. Several standardization efforts in the AIAA have adopted the Spacecraft Command Language (SCL) architecture as a point solution to satisfy requirements for re-use and autonomy. The Environmental Research Institute of Michigan (ERIM) has been a long-standing customer of ICS and is working on its 4th-generation system using SCL. Much of the hardware and software infrastructure has been re-used from mission to mission with little cost for re-hosting a new experiment. The same software infrastructure has successfully been used on Clementine, and an end-to-end system is being deployed for the Far Ultraviolet Spectroscopic Explorer (FUSE) for Johns Hopkins University. A case study of the ERIM programs, Clementine, and FUSE will be detailed in this paper.

  8. Metadata Repository for Improved Data Sharing and Reuse Based on HL7 FHIR.

    PubMed

    Ulrich, Hannes; Kock, Ann-Kristin; Duhm-Harbeck, Petra; Habermann, Jens K; Ingenerf, Josef

    2016-01-01

Unreconciled data structures and formats are a common obstacle to the urgently required sharing and reuse of data within healthcare and medical research. Within the North German Tumor Bank of Colorectal Cancer, clinical and sample data, based on a harmonized data set, are collected and can be pooled by using a hospital-integrated Research Data Management System supporting biobank and study management. Adding further partners who do not use the core data set requires manual adaptations and mapping of data elements. To eliminate this manual intervention and to support the reuse of heterogeneous healthcare instance data (value level) and data elements (metadata level), a metadata repository has been developed. The metadata repository is an ISO 11179-3 conformant server application built for annotating and mediating data elements. The implemented architecture includes the translation of metadata information about data elements into the FHIR standard, using the FHIR DataElement resource with the ISO 11179 Data Element Extensions. The FHIR-based processing allows the exchange of data elements with clinical and research IT systems as well as with other metadata systems. With increasingly annotated and harmonized data elements, data quality and integration can be improved, successfully enabling data analytics and decision support.
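
    To make the ISO 11179-to-FHIR translation concrete, here is a hedged sketch of a registered data element rendered as a minimal FHIR-style resource dictionary. The field names loosely follow the FHIR DataElement resource; the extension URL, registry identifier, and example values are illustrative, not the repository's actual interface:

```python
# Hedged sketch: an ISO 11179-style data element as a minimal FHIR-like dict.
def to_fhir_data_element(designation, definition, value_domain, registry_id):
    """Map ISO 11179 concepts (designation, definition, value domain) onto a
    FHIR-DataElement-shaped structure. Extension URL is illustrative only."""
    return {
        "resourceType": "DataElement",
        "identifier": [{"value": registry_id}],
        "element": [{
            "code": [{"display": designation}],
            "definition": definition,
            "type": value_domain,  # e.g. "CodeableConcept", "Quantity"
        }],
        # ISO 11179 registration metadata carried as an extension (hypothetical URL)
        "extension": [{
            "url": "http://example.org/fhir/iso11179-registration",
            "valueString": registry_id,
        }],
    }

elem = to_fhir_data_element(
    designation="Tumor localization",
    definition="Anatomical site of the primary tumor",
    value_domain="CodeableConcept",
    registry_id="urn:mdr:de:1234",
)
```

    Serializing such a structure to JSON is what lets clinical and research systems exchange the data element without sharing a database schema.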

  9. A Framework for Performing V&V within Reuse-Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1996-01-01

    Verification and validation (V&V) is performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. Early discovery is important in order to minimize the cost and other impacts of correcting these errors. In order to provide early detection of errors, V&V is conducted in parallel with system development, often beginning with the concept phase. In reuse-based software engineering, however, decisions on the requirements, design and even implementation of domain assets can be made prior to beginning development of a specific system. In this case, V&V must be performed during domain engineering in order to have an impact on system development. This paper describes a framework for performing V&V within architecture-centric, reuse-based software engineering. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.

  10. Integrating Existing Simulation Components into a Cohesive Simulation System

    NASA Technical Reports Server (NTRS)

    McLaughlin, Brian J.; Barrett, Larry K.

    2012-01-01

A tradition of leveraging the re-use of components to help manage costs has evolved in the development of complex systems. This tradition continues in the Joint Polar Satellite System (JPSS) Program with the cloning of the Suomi National Polar-orbiting Partnership (NPP) satellite for the JPSS-1 mission, including the instrument complement. One benefit of re-use on a mission is the availability of existing simulation assets from previously built systems. An issue arises from the continual shift of technology over a long mission, or multi-mission, lifecycle. As the missions mature, the requirements for the observatory simulations evolve. The challenge in this environment becomes re-using the existing components in that ever-changing landscape. To meet this challenge, the system must: establish an operational architecture that minimizes impacts on the implementation of individual components, consolidate the satisfaction of new high-impact requirements into system-level infrastructure, and build in a long-term view of system adaptation that spans the full lifecycle of the simulation system. The Flight Vehicle Test Suite (FVTS) within the JPSS Program is defining and executing this approach to ensure a robust simulation capability for the JPSS multi-mission environment.

  11. Reference Avionics Architecture for Lunar Surface Systems

    NASA Technical Reports Server (NTRS)

    Somervill, Kevin M.; Lapin, Jonathan C.; Schmidt, Oron L.

    2010-01-01

Developing and delivering infrastructure capable of supporting long-term manned operations on the lunar surface has been a primary objective of the Constellation Program in the Exploration Systems Mission Directorate. Several concepts have been developed related to the development and deployment of lunar exploration vehicles and assets that provide critical functionality such as transportation, habitation, and communication. Together, these systems perform complex safety-critical functions, depending largely on avionics for the control and behavior of system functions. These functions are implemented using interchangeable, modular avionics designed for lunar transit and lunar surface deployment. Systems are optimized toward reuse and commonality of form and interface and can be configured via software or component integration for special-purpose applications. There are two core concepts in the reference avionics architecture described in this report. The first is the use of distributed, smart systems to manage complexity, simplify integration, and facilitate commonality. The second is extensive commonality between elements and subsystems. These two concepts are used in developing reference designs for many lunar surface exploration vehicles and elements, and recur as architectural patterns within a conceptual architectural framework. This report describes the use of these architectural patterns in a reference avionics architecture for lunar surface system elements.

  12. Comparing Acquisition Strategies: Open Architecture versus Product Lines

    DTIC Science & Technology

    2010-04-30

    software • New SOW language for accepting software deliveries – Enables third-party reuse • Additional SOW language regarding conducting software code walkthroughs and for using integrated development environments ...change the business environment must be the primary factor that drives the technical approach. Accordingly, there are business case decisions to be...elements of a system design should be made available to the customer to observe throughout the design process. Electronic access to the design environment

  13. Training Plan. Central Archive for Reusable Defense Software (CARDS)

    DTIC Science & Technology

    1994-01-29

Modeling Software Reuse Technology: Feature Oriented Domain Analysis ( FODA ). SEI, Carnegie Mellon University, May 1992. 8. Component Provider’s...events to the services of the domain. 4. Feature Oriented Domain Analysis ( FODA ) [COHEN92] The FODA method produces feature models. Feature models provide...Architecture FODA Feature-Oriented Domain Analysis GOTS Government-Off-The-Shelf Pap A-49 STARS-VC-B003/001/00 29 January 1994 MS Master of Science NEC

  14. Component Verification and Certification in NASA Missions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Penix, John; Norvig, Peter (Technical Monitor)

    2001-01-01

    Software development for NASA missions is a particularly challenging task. Missions are extremely ambitious scientifically, have very strict time frames, and must be accomplished with a maximum degree of reliability. Verification technologies must therefore be pushed far beyond their current capabilities. Moreover, reuse and adaptation of software architectures and components must be incorporated in software development within and across missions. This paper discusses NASA applications that we are currently investigating from these perspectives.

  15. Reusable design: A proposed approach to Public Health Informatics system design

    PubMed Central

    2011-01-01

    Background Since it was first defined in 1995, Public Health Informatics (PHI) has become a recognized discipline, with a research agenda, defined domain-specific competencies and a specialized corpus of technical knowledge. Information systems form a cornerstone of PHI research and implementation, representing significant progress for the nascent field. However, PHI does not advocate or incorporate standard, domain-appropriate design methods for implementing public health information systems. Reusable design is generalized design advice that can be reused in a range of similar contexts. We propose that PHI create and reuse information design knowledge by taking a systems approach that incorporates design methods from the disciplines of Human-Computer Interaction, Interaction Design and other related disciplines. Discussion Although PHI operates in a domain with unique characteristics, many design problems in public health correspond to classic design problems, suggesting that existing design methods and solution approaches are applicable to the design of public health information systems. Among the numerous methodological frameworks used in other disciplines, we identify scenario-based design and participatory design as two widely-employed methodologies that are appropriate for adoption as PHI standards. We make the case that these methods show promise to create reusable design knowledge in PHI. Summary We propose the formalization of a set of standard design methods within PHI that can be used to pursue a strategy of design knowledge creation and reuse for cost-effective, interoperable public health information systems. We suggest that all public health informaticians should be able to use these design methods and the methods should be incorporated into PHI training. PMID:21333000

  16. Awareness about biomedical waste management and knowledge of effective recycling of dental materials among dental students.

    PubMed

    Ranjan, Rajeev; Pathak, Ruchi; Singh, Dhirendra K; Jalaluddin, Md; Kore, Shobha A; Kore, Abhijeet R

    2016-01-01

Biomedical waste management has become a concern with the increasing number of dental practitioners in India. As health care professionals, dentists should be aware of the safe disposal of biomedical waste and the recycling of dental materials to minimize biohazards to the environment. The aim of the present study was to assess awareness regarding biomedical waste management as well as knowledge of effective recycling and reuse of dental materials among dental students. This cross-sectional study was conducted among dental students from all dental colleges of Bhubaneswar, Odisha (India) from February 2016 to April 2016. A total of 500 students (208 males and 292 females) participated in the study, which was conducted in two phases. A questionnaire was distributed to assess awareness of biomedical waste management and knowledge of effective recycling of dental materials, and the collected data were examined on a 5-point unipolar scale, in percentages, to assess relative awareness in these two categories. The Statistical Package for the Social Sciences was used to analyze the collected data. Forty-four percent of the dental students were not at all aware of the management of biomedical waste, 22% were moderately aware, 21% slightly aware, 7% very aware, and 5% fell into the extremely aware category. Similarly, a higher percentage of participants (61%) were completely unaware of the recycling and reuse of biomedical waste. There is a lack of sufficient knowledge among dental students regarding the management of biomedical waste and the recycling or reuse of dental materials. Considering its impact on the environment, biomedical waste management requires immediate academic attention to increase awareness during training courses.

  17. Taking advantage of ground data systems attributes to achieve quality results in testing software

    NASA Technical Reports Server (NTRS)

    Sigman, Clayton B.; Koslosky, John T.; Hageman, Barbara H.

    1994-01-01

During the software development life cycle, basic testing starts with the development team. At the end of the development process, an acceptance test is performed for the user to ensure that the deliverable is acceptable. Ideally, the delivery is an operational product with zero defects. However, the goal of zero defects is normally not achieved, only approached to varying degrees. With the emphasis on building low-cost ground support systems while maintaining a quality product, a key element in the test process is simulator capability. This paper reviews the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) test tool that is used in the acceptance test process for unmanned satellite operations control centers. The TASS is designed to support the development, test, and operational environments of the Goddard Space Flight Center (GSFC) operations control centers. The TASS uses the same basic architecture as the operations control center. This architecture is characterized by its use of distributed processing, industry standards, commercial off-the-shelf (COTS) hardware and software components, and reusable software. The TASS uses much of the same TPOCC architecture and reusable software that the operations control center developer uses. The TASS also makes use of reusable simulator software in the mission-specific versions of the TASS. Very little new software needs to be developed, mainly mission-specific telemetry communication and command processing software. By taking advantage of these ground data system attributes, successful software reuse for operational systems provides the opportunity to extend the reuse concept into the test area. Consistency in test approach is a major step toward achieving quality results.

  18. A Knowledge Engineering Approach to Develop Domain Ontology

    ERIC Educational Resources Information Center

    Yun, Hongyan; Xu, Jianliang; Xiong, Jing; Wei, Moji

    2011-01-01

    Ontologies are one of the most popular and widespread means of knowledge representation and reuse. A few research groups have proposed a series of methodologies for developing their own standard ontologies. However, because this ontological construction concerns special fields, there is no standard method to build domain ontology. In this paper,…

  19. [E-Learning in radiology; the practical use of the content management system ILIAS].

    PubMed

    Schütze, B; Mildenberger, P; Kämmerer, M

    2006-05-01

Due to the possibility of using different kinds of visualization, e-learning has the advantage of allowing individualized learning. We examined whether the use of the web-based content management system ILIAS simplifies the writing and production of electronic learning modules in radiology. Internet-based e-learning provides access to existing learning modules regardless of time and location, since fast Internet connections are readily available. Web Content Management Systems (WCMS) are suitable platforms for imparting radiology-related information (visual abilities such as the recognition of patterns as well as interdisciplinary specialized knowledge). The open source product ILIAS is a free WCMS. It is used by many universities and is accepted by both students and lecturers. Its modular and object-oriented software architecture makes it easy to adapt and enlarge the platform. The use of e-learning standards such as LOM and SCORM within ILIAS makes it possible to reuse content even if the platform has to be changed. ILIAS makes it possible to provide students with texts, images, or files of any other kind within a learning context defined by the lecturer. Students can check their acquired knowledge via online testing and receive direct performance feedback. The significant interest that students have shown in ILIAS demonstrates that e-learning can be a useful addition to conventional learning methods.

  20. IsoMAP (Isoscape Modeling, Analysis, and Prediction)

    NASA Astrophysics Data System (ADS)

    Miller, C. C.; Bowen, G. J.; Zhang, T.; Zhao, L.; West, J. B.; Liu, Z.; Rapolu, N.

    2009-12-01

IsoMAP is a TeraGrid-based web portal aimed at building the infrastructure that brings together distributed multi-scale and multi-format geospatial datasets to enable statistical analysis and modeling of environmental isotopes. A typical workflow enabled by the portal includes (1) data source exploration and selection, (2) statistical analysis and model development; (3) predictive simulation of isotope distributions using models developed in (1) and (2); (4) analysis and interpretation of simulated spatial isotope distributions (e.g., comparison with independent observations, pattern analysis). The gridded models and data products created by one user can be shared and reused among users within the portal, enabling collaboration and knowledge transfer. This infrastructure and the research it fosters can lead to fundamental changes in our knowledge of the water cycle and ecological and biogeochemical processes through analysis of network-based isotope data, but it will be important A) that those with whom the data and models are shared can be sure of the origin, quality, inputs, and processing history of these products, and B) that the system be agile and intuitive enough to facilitate this sharing (rather than just ‘allow’ it). IsoMAP researchers are therefore building into the portal’s architecture several components meant to increase the amount of metadata about users’ products and to repurpose those metadata to make sharing and discovery more intuitive and robust for both expected, professional users and unforeseeable populations from other sectors.

  1. Establishing and testing the "reuse potential" indicator for managing wastes as resources.

    PubMed

    Park, Joo Young; Chertow, Marian R

    2014-05-01

    This study advances contemporary ideas promoting the importance of managing wastes as resources such as closed-loop or circular material economies, and sustainable materials management by reinforcing the notion of a resource-based paradigm rather than a waste-based one. It features the creation of a quantitative tool, the "reuse potential indicator" to specify how "resource-like" versus how "waste-like" specific materials are on a continuum. Even with increasing attention to waste reuse and resource conservation, constant changes in product composition and complexity have left material managers without adequate guidance to make decisions about what is technically feasible to recover from the discard stream even before markets can be considered. The reuse potential indicator is developed to aid management decision-making about waste based not on perception but more objectively on the technical ability of the materials to be reused in commerce. This new indicator is based on the extent of technological innovation and commercial application of actual reuse approaches identified and cataloged. Coal combustion by-products (CCBs) provide the test case for calculating the reuse potential indicator. While CCBs are often perceived as wastes and then isolated in landfills or surface impoundments, there is also a century-long history in the industry of developing technologies to reuse CCBs. The recent statistics show that most CCBs generated in Europe and Japan are reused (90-95%), but only 40-45% of CCBs are used in the United States. According to the reuse potential calculation, however, CCBs in the United States have high technical reusability. Of the four CCBs examined under three different regulatory schemes, reuse potential for boiler slag and flue-gas desulfurization gypsum maintains a value greater than 0.8 on a 0-1 scale, indicating they are at least 80% resource-like. 
Under current regulation in the United States, both fly ash and bottom ash are 80-90% resource-like. Very strict regulation would remove many reuse options decreasing potential for these two CCBs to 30% resource-like. A more holistic view of waste and broad application of the new indicator would make clear what technologies are available and assist public and private decision makers in setting quantitative material reuse targets from a new knowledge base that reinforces a resource-based paradigm. Copyright © 2014 Elsevier Ltd. All rights reserved.
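
    The abstract describes a 0-1 "resource-like" score that falls when regulation removes reuse options. As a hedged illustration of how such an indicator could be scored (this is not the published formula; the maturity stages, stage weights, and example shares below are all hypothetical):

```python
# Illustrative scoring sketch (not Park & Chertow's actual calculation):
# rate a material on a 0-1 "resource-like" scale from the maturity of its
# cataloged reuse approaches. Stage weights are invented for illustration.
STAGE_WEIGHT = {"research": 0.25, "pilot": 0.5, "demonstrated": 0.75, "commercial": 1.0}

def reuse_potential(approaches, allowed=None):
    """approaches: list of (name, maturity_stage, share_of_stream_addressed).
    `allowed` optionally restricts the usable approaches, e.g. under a
    stricter regulatory scheme."""
    total = 0.0
    for name, stage, share in approaches:
        if allowed is not None and name not in allowed:
            continue  # regulation removes this reuse option
        total += STAGE_WEIGHT[stage] * share
    return min(total, 1.0)

# Hypothetical catalog for one coal combustion by-product.
boiler_slag = [
    ("blasting grit", "commercial", 0.5),
    ("roofing granules", "commercial", 0.4),
    ("asphalt filler", "pilot", 0.3),
]
```

    Restricting `allowed` to a subset of approaches lowers the score, mirroring the abstract's observation that very strict regulation would drop fly ash and bottom ash from 80-90% to roughly 30% resource-like.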

  2. Using knowledge management practices to develop a state-of-the-art digital library.

    PubMed

    Williams, Annette M; Giuse, Nunzia Bettinsoli; Koonce, Taneya Y; Kou, Qinghua; Giuse, Dario A

    2004-01-01

    Diffusing knowledge management practices within an organization encourages and facilitates reuse of the institution's knowledge commodity. Following knowledge management practices, the Eskind Biomedical Library (EBL) has created a Digital Library that uses a holistic approach for integration of information and skills to best represent both explicit and tacit knowledge inherent in libraries. EBL's Digital Library exemplifies a clear attempt to organize institutional knowledge in the field of librarianship, in an effort to positively impact clinical, research, and educational processes in the medical center.

  3. Modeling Real-Time Applications with Reusable Design Patterns

    NASA Astrophysics Data System (ADS)

    Rekhis, Saoussen; Bouassida, Nadia; Bouaziz, Rafik

Real-Time (RT) applications, which manipulate important volumes of data, need to be managed with RT databases that deal with time-constrained data and time-constrained transactions. In spite of their numerous advantages, RT database development remains a complex task, since developers must study many design issues related to the RT domain. In this paper, we tackle this problem by proposing RT design patterns that allow the modeling of structural and behavioral aspects of RT databases. We show how RT design patterns can provide design assistance through architecture reuse for recurring design problems. In addition, we present a UML profile that represents patterns and further facilitates their reuse. This profile proposes, on the one hand, UML extensions for modeling the variability of patterns in the RT context and, on the other hand, extensions inspired by the MARTE (Modeling and Analysis of Real-Time Embedded systems) profile.

  4. Candidate Mission from Planet Earth control and data delivery system architecture

    NASA Technical Reports Server (NTRS)

    Shapiro, Phillip; Weinstein, Frank C.; Hei, Donald J., Jr.; Todd, Jacqueline

    1992-01-01

Using a structured, experience-based approach, Goddard Space Flight Center (GSFC) has assessed the generic functional requirements for a lunar mission control and data delivery (CDD) system. This analysis was based on lunar mission requirements outlined in GSFC-developed user traffic models. The CDD system will facilitate data transportation among user elements, element operations, and user teams by providing functions such as data management, fault isolation, fault correction, and link acquisition. The CDD system for the lunar missions must not only satisfy lunar requirements but also facilitate and provide early development of data system technologies for Mars. Reuse and evolution of existing data systems can help to maximize system reliability and minimize cost. This paper presents a set of existing and currently planned NASA data systems that provide the basic functionality. Reuse of such systems can have an impact on mission design and significantly reduce CDD and other system development costs.

  5. JPL Project Information Management: A Continuum Back to the Future

    NASA Technical Reports Server (NTRS)

    Reiz, Julie M.

    2009-01-01

This slide presentation reviews the practices and architecture that support information management at JPL. This practice has allowed concurrent use and reuse of information by primary and secondary users. The use of this practice is illustrated in the evolution of the Mars rovers, from the Mars Pathfinder to the development of the Mars Science Laboratory. The recognition of the importance of information management during all phases of a project life cycle has resulted in the design of an information system that includes metadata, has reduced the risk of information loss through the use of an in-process appraisal, and has shaped projects' appreciation for capturing and managing information on one project for re-use by future projects as a natural outgrowth of the process. This process has also helped connect geographically dispersed partners into a team through shared information, common tools, and collaboration.

  6. A software bus for thread objects

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Li, Dehuai

    1995-01-01

    The authors have implemented a software bus for lightweight threads in an object-oriented programming environment that allows for rapid reconfiguration and reuse of thread objects in discrete-event simulation experiments. While previous research in object-oriented, parallel programming environments has focused on direct communication between threads, our lightweight software bus, called the MiniBus, provides a means to isolate threads from their contexts of execution by restricting communications between threads to message-passing via their local ports only. The software bus maintains a topology of connections between these ports. It routes, queues, and delivers messages according to this topology. This approach allows for rapid reconfiguration and reuse of thread objects in other systems without making changes to the specifications or source code. A layered approach that provides the needed transparency to developers is presented. Examples of using the MiniBus are given, and the value of bus architectures in building and conducting simulations of discrete-event systems is discussed.
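
    The abstract describes the bus mechanism precisely: threads communicate only via named local ports, and the bus routes, queues, and delivers messages according to a connection topology. Here is a minimal, single-threaded sketch of that idea (the class and method names are illustrative, not the authors' actual MiniBus API):

```python
# Minimal sketch of a MiniBus-style software bus: components talk only through
# named ports; the bus routes and queues messages per a connection topology.
import queue

class Bus:
    def __init__(self):
        self.ports = {}      # port name -> delivery queue
        self.topology = {}   # source port -> list of destination ports

    def create_port(self, name):
        self.ports[name] = queue.Queue()

    def connect(self, src, dst):
        self.topology.setdefault(src, []).append(dst)

    def send(self, src, message):
        # Route and queue the message to every port wired to `src`.
        for dst in self.topology.get(src, []):
            self.ports[dst].put(message)

    def receive(self, port):
        return self.ports[port].get_nowait()

bus = Bus()
for p in ("simulator.out", "logger.in", "display.in"):
    bus.create_port(p)
bus.connect("simulator.out", "logger.in")
bus.connect("simulator.out", "display.in")
bus.send("simulator.out", {"event": "tick", "t": 1})
```

    Because the sender only names its own port, rewiring the topology reconfigures the system without touching the components, which is the reuse property the paper emphasizes.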

  7. A "Knowledge Trading Game" for Collaborative Design Learning in an Architectural Design Studio

    ERIC Educational Resources Information Center

    Wang, Wan-Ling; Shih, Shen-Guan; Chien, Sheng-Fen

    2010-01-01

    Knowledge-sharing and resource exchange are the key to the success of collaborative design learning. In an architectural design studio, design knowledge entails learning efforts that need to accumulate and recombine dispersed and complementary pieces of knowledge. In this research, firstly, "Knowledge Trading Game" is proposed to be a way for…

  8. Reusability in ESOC mission control system developments - the SMART-1 mission case

    NASA Astrophysics Data System (ADS)

    Pignède, Max; Davies, Kevin

    2002-07-01

The European Space Operations Centre (ESOC) has long experience in spacecraft mission control system development and uses a large number of existing elements in the build-up of control systems for new missions. The integration of such elements in a new system covers not only the direct re-use of infrastructure software but also the re-use of concepts and work methodology. Applying reusability is a major asset in ESOC's strategy, especially for low-cost space missions. This paper describes the re-use of existing elements in the ESOC production of the SMART-1 mission control system (S1MCS) and explores the following areas. The most significant (and major cost-saving) re-used elements are the Spacecraft Control and Operations System (SCOS-2000) and the Network Control and TM/TC Router System (NCTRS) infrastructure systems. These systems are designed precisely to allow all general mission parameters to be configured easily without any change to the software (in particular, the NCTRS configuration for SMART-1 was time- and cost-effective). Further, large parts of the ESOC ROSETTA and INTEGRAL software systems (also SCOS-2000 based) were directly re-used, such as the on-board command schedule maintenance and modelling subsystem (OBQ), the time correlator (TCO) and the external file transfer subsystem (FTS). The INTEGRAL spacecraft database maintenance system (both the editors and the configuration control mechanism) and its export facilities into the S1MCS runtime system are directly re-used. A special kind of re-use concerns the ENVISAT approach to saving both the telemetry (TM) and telecommanding (TC) context in the redundant server system, in order to enable smooth support of operations in case of prime server failure.
In this case no software or tools can be re-used, because the S1MCS is based on much more modern technology than the ENVISAT mission control system, as well as on largely differing workstation architectures. However, ENVISAT's validated capabilities to support hot-standby system reconfiguration and the resynchronisation of machines and data following failures in all mission phases make it a good candidate for re-use by newer missions. Common methods and tools for requirements production, test plan production and problem tracking, which are used by most other ESOC mission development teams in their daily work, are also re-used without any changes. Finally, conclusions are drawn about reusability in light of the latest state of the S1MCS and about benefits to other SCOS-2000 based "client" missions. Lessons learned for ESOC space missions (whether for mission control systems currently under development or for up-and-coming space missions), along with related considerations for the wider space community, are also presented, reflecting ESOC's skills and expertise in mission operations and control.

  9. Reuse of the Cloud Analytics and Collaboration Environment within Tactical Applications (TacApps): A Feasibility Analysis

    DTIC Science & Technology

    2016-03-01

    Representational state transfer  Java messaging service  Java application programming interface (API)  Internet relay chat (IRC)/extensible messaging and...JBoss application server or an Apache Tomcat servlet container instance. The relational database management system can be either PostgreSQL or MySQL ... Java library called direct web remoting. This library has been part of the core CACE architecture for quite some time; however, there have not been

  10. STARS Conceptual Framework for Reuse Process (CFRP). Volume 1. Definition. Version 3.0

    DTIC Science & Technology

    1993-10-25

    Command, USAF Hanscom AFB, MA 01731-5000 DTIC QUALITY IN ,,P.’±U4) D Prepared by: The Boeing Company , IBM, Defense & Space Group, Federal Systems... Company , Unisys Corporation, P.O. Box 3999, MS 87-37 800 N. Frederick Pike, 12010 Sunrise Valley Drive, Seattle, WA 98124-2499 Gaithersburg, MD 20879...34 3.2.1.1 Domain Analysis and Modeling Process Category ............ 38 3.2.1.2 Domain Architecture Development Process

  11. Factors affecting the energy cost of level running at submaximal speed.

    PubMed

    Lacour, Jean-René; Bourdin, Muriel

    2015-04-01

Metabolic measurement is still the criterion for investigating the efficiency of mechanical work and for analyzing endurance performance in running. Metabolic demand may be expressed either as the energy spent per unit distance (energy cost of running, Cr) or as the energy demand at a given running speed (running economy). Systematic studies showed a range of costs of about 20 % between runners. Factors affecting Cr include body dimensions: body mass and leg architecture, mostly calcaneal tuberosity length, responsible for 60-80 % of the variability. Children show a higher Cr than adults. Higher resting metabolism and a lower leg length/stature ratio are the main putative factors responsible for the difference. Elastic energy storage and reuse also contribute to the variability of Cr. The increase in Cr with increasing running speed, due to the increase in mechanical work, is blunted up to 6-7 m s(-1) by the increase in vertical stiffness and the decrease in ground contact time. Fatigue induced by prolonged or intense running is associated with up to 10 % increased Cr; the contribution of metabolic and biomechanical factors remains unclear. Women show a Cr similar to men of similar body mass, despite differences in gait pattern. The superiority of black African runners is presumably related to their leg architecture and better elastic energy storage and reuse.
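
    Since Cr is defined as energy spent per unit distance, it can be computed directly from oxygen uptake and speed. A hedged worked example (the VO2 and speed values below are illustrative, not data from the review):

```python
# Worked example of the energy cost of running, Cr = net metabolic rate / speed,
# i.e. energy (here, oxygen equivalent) spent per unit distance.
def energy_cost(vo2_ml_kg_min, resting_vo2_ml_kg_min, speed_m_s):
    """Return Cr in ml O2 / kg / m from gross VO2, resting VO2 and speed."""
    net = vo2_ml_kg_min - resting_vo2_ml_kg_min  # ml O2 / kg / min above rest
    return net / (speed_m_s * 60.0)              # ml O2 / kg / m

# Hypothetical runner: gross VO2 of 45, resting VO2 of 5 ml/kg/min, at 4 m/s.
cr = energy_cost(45.0, 5.0, 4.0)  # about 0.167 ml O2 / kg / m
```

    Expressed this way, two runners at different speeds can be compared on the same per-distance scale, which is why Cr rather than running economy is often used across studies.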

  12. Design and Architecture of Collaborative Online Communities: A Quantitative Analysis

    ERIC Educational Resources Information Center

    Aviv, Reuven; Erlich, Zippy; Ravid, Gilad

    2004-01-01

    This paper considers four aspects of online communities: design, mechanisms, architecture, and constructed knowledge. We hypothesize that different designs of communities drive different mechanisms, which give rise to different architectures, which in turn result in different levels of collaborative knowledge construction. To test this chain…

  13. Architectural Innovation: The Reconfiguration of Existing Product Technologies and the Failure of Established Firms.

    ERIC Educational Resources Information Center

    Henderson, Rebecca M.; Clark, Kim B.

    1990-01-01

    Using an empirical study of the semiconductor photolithographic alignment equipment industry, this paper shows that architectural innovations destroy the usefulness of established firms' architectural knowledge. Because this knowledge is embedded in the firms' structure and information-processing procedures, the destruction is hard to detect.…

  14. Sharing and re-use of phylogenetic trees (and associated data) to facilitate synthesis.

    PubMed

    Stoltzfus, Arlin; O'Meara, Brian; Whitacre, Jamie; Mounce, Ross; Gillespie, Emily L; Kumar, Sudhir; Rosauer, Dan F; Vos, Rutger A

    2012-10-22

    Recently, various evolution-related journals adopted policies to encourage or require archiving of phylogenetic trees and associated data. Such attention to practices that promote sharing of data reflects rapidly improving information technology, and rapidly expanding potential to use this technology to aggregate and link data from previously published research. Nevertheless, little is known about current practices, or best practices, for publishing trees and associated data so as to promote re-use. Here we summarize results of an ongoing analysis of current practices for archiving phylogenetic trees and associated data, current practices of re-use, and current barriers to re-use. We find that the technical infrastructure is available to support rudimentary archiving, but the frequency of archiving is low. Currently, most phylogenetic knowledge is not easily re-used due to a lack of archiving, lack of awareness of best practices, and lack of community-wide standards for formatting data, naming entities, and annotating data. Most attempts at data re-use seem to end in disappointment. Nevertheless, we find many positive examples of data re-use, particularly those that involve customized species trees generated by grafting to, and pruning from, a much larger tree. The technologies and practices that facilitate data re-use can catalyze synthetic and integrative research. However, success will require engagement from various stakeholders including individual scientists who produce or consume shareable data, publishers, policy-makers, technology developers and resource-providers. 
The critical challenges for facilitating re-use of phylogenetic trees and associated data, we suggest, include: a broader commitment to public archiving; more extensive use of globally meaningful identifiers; development of user-friendly technology for annotating, submitting, searching, and retrieving data and their metadata; and development of a minimum reporting standard (MIAPA) indicating which kinds of data and metadata are most important for a re-useable phylogenetic record.

  15. An Object-Oriented Architecture for Intelligent Tutoring Systems. Technical Report No. LSP-3.

    ERIC Educational Resources Information Center

    Bonar, Jeffrey; And Others

    This technical report describes a generic architecture for building intelligent tutoring systems which is developed around objects that represent the knowledge elements to be taught by the tutor. Each of these knowledge elements, called "bites," inherits both a knowledge organization describing the kind of knowledge represented and…

  16. A Community-Driven Workflow Recommendations and Reuse Infrastructure

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Votava, P.; Lee, T. J.; Lee, C.; Xiao, S.; Nemani, R. R.; Foster, I.

    2013-12-01

    Aiming to connect the Earth science community to accelerate the rate of discovery, NASA Earth Exchange (NEX) has established an online repository and platform so that researchers can publish and share their tools and models with colleagues. In recent years, workflows have become a popular technique at NEX for Earth scientists to define executable multi-step procedures for data processing and analysis. The ability to discover and reuse knowledge (sharable workflows) is critical to the future advancement of science. However, as reported in our earlier study, the reusability of scientific artifacts is currently very low. Scientists often do not feel confident in using other researchers' tools and utilities. One major reason is that researchers are often unaware of the existence of others' data preprocessing processes. Meanwhile, researchers often do not have time to fully document these processes and expose them to others in a standard way. These issues cannot be overcome by the existing workflow search technologies used in NEX and other data projects. Therefore, this project aims to develop a proactive recommendation technology based on collective NEX user behaviors. In this way, we aim to promote and encourage process and workflow reuse within NEX. In particular, we focus on leveraging peer scientists' best practices to support the recommendation of artifacts developed by others. Our underlying theoretical foundation is rooted in social cognitive theory, which holds that people learn by watching what others do. Our fundamental hypothesis is that sharable artifacts have network properties, much like humans in social networks. More generally, reusable artifacts form various types of social relationships (ties), and may be viewed as forming what organizational sociologists who use network analysis to study human interactions call a 'knowledge network.'
In particular, we will tackle two research questions: R1: What hidden knowledge may be extracted from usage history to help Earth scientists better understand existing artifacts and how to use them in a proper manner? R2: Informed by insights derived from their computing contexts, how could such hidden knowledge be used to facilitate artifact reuse by Earth scientists? Our study of the two research questions will provide answers to three technical questions aiming to assist NEX users during workflow development: 1) How to determine what topics interest the researcher? 2) How to find appropriate artifacts? and 3) How to advise the researcher in artifact reuse? In this paper, we report our on-going efforts of leveraging social networking theory and analysis techniques to provide dynamic advice on artifact reuse to NEX users based on their surrounding contexts. As a proof of concept, we have designed and developed a plug-in to the VisTrails workflow design tool. When users develop workflows using VisTrails, our plug-in will proactively recommend most relevant sub-workflows to the users.
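
    The recommendation idea in this record, suggesting artifacts based on how often peers have used them together, can be sketched as a simple co-occurrence counter. The workflow histories and component names below are hypothetical; this is a minimal sketch of the co-usage principle, not the NEX/VisTrails implementation.

```python
# Sketch: recommend workflow components from peer usage history.
# Components that historically co-occur in workflows are suggested
# when one of them appears in a new workflow. Data is invented.

from collections import Counter
from itertools import combinations

histories = [
    ["regrid", "cloud-mask", "ndvi"],
    ["regrid", "cloud-mask", "albedo"],
    ["regrid", "ndvi", "trend-fit"],
]

# Count how often each unordered pair of components co-occurs.
co_occur = Counter()
for wf in histories:
    for a, b in combinations(sorted(set(wf)), 2):
        co_occur[(a, b)] += 1

def recommend(component, top_n=2):
    """Rank other components by co-occurrence with `component`."""
    scores = Counter()
    for (a, b), n in co_occur.items():
        if a == component:
            scores[b] += n
        elif b == component:
            scores[a] += n
    return [name for name, _ in scores.most_common(top_n)]

print(recommend("regrid"))  # the two components most often used with it
```

    A real recommender would also weight by recency and by the requesting user's context, as the record's research questions suggest.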

  17. An architecture for rule based system explanation

    NASA Technical Reports Server (NTRS)

    Fennel, T. R.; Johannes, James D.

    1990-01-01

    A system architecture is presented which incorporates both graphics and text into the explanations provided by rule-based expert systems. This architecture facilitates explanation of the knowledge base content, the control strategies employed by the system, and the conclusions made by the system. The suggested approach combines hypermedia and inference engine capabilities. Advantages include: closer integration of the user interface, explanation system, and knowledge base; the ability to embed links to the deeper knowledge underlying the compiled knowledge used in the knowledge base; and more direct control of explanation depth and duration by the user. User models are suggested to control the type, amount, and order of information presented.
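
    One way to read the explanation capability described above is as an inference engine that records which rule produced each conclusion, so the trace can later be walked to answer "why" questions. The sketch below takes that reading; the rules and facts are invented for illustration, and the record's hypermedia linking is not modeled.

```python
# Minimal forward-chaining rule engine that records an inference
# trace, enabling explanation of its conclusions. Rules are invented.

class RuleEngine:
    def __init__(self, rules):
        self.rules = rules   # list of (name, premises, conclusion)
        self.trace = {}      # conclusion -> (rule name, premises)

    def run(self, facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for name, premises, conclusion in self.rules:
                if conclusion not in facts and set(premises) <= facts:
                    facts.add(conclusion)
                    self.trace[conclusion] = (name, premises)
                    changed = True
        return facts

    def explain(self, fact):
        """Explain a conclusion by walking the recorded trace."""
        if fact not in self.trace:
            return f"{fact}: given as input"
        name, premises = self.trace[fact]
        lines = [f"{fact}: concluded by rule '{name}' from {premises}"]
        lines += [self.explain(p) for p in premises]
        return "\n".join(lines)

rules = [("R1", ["engine-off", "lights-on"], "battery-drained"),
         ("R2", ["battery-drained"], "no-start")]
engine = RuleEngine(rules)
engine.run(["engine-off", "lights-on"])
print(engine.explain("no-start"))
```

    The hypermedia extension the record describes would attach links from each traced rule to deeper background knowledge, rather than returning plain text.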

  18. Systems and technologies for high-speed inter-office/datacenter interface

    NASA Astrophysics Data System (ADS)

    Sone, Y.; Nishizawa, H.; Yamamoto, S.; Fukutoku, M.; Yoshimatsu, T.

    2017-01-01

    Emerging requirements for inter-office/inter-datacenter short-reach links for data center interconnects (DCI) and metro transport networks have led to various inter-office and inter-datacenter optical interface technologies. These technologies are bringing significant changes to systems and network architectures. In this paper, we present a system and ZR optical interface technologies for DCI and metro transport networks, then introduce the latest challenges facing the system framework. There are two trends in reach extension: one is to use Ethernet and the other is to use digital coherent technologies. The first approach achieves reach extension while using as many existing Ethernet components as possible. It offers low costs as it reuses the cost-effective components created for the large Ethernet market. The second approach adopts low-cost, low-power coherent DSPs that implement a minimal set of long-haul transmission functions. This paper introduces an architecture that integrates both trends. The architecture satisfies both datacom and telecom needs with a common control and management interface and automated configuration.

  19. A low-cost approach to the exploration of Mars through a robotic technology demonstrator mission

    NASA Astrophysics Data System (ADS)

    Ellery, Alex; Richter, Lutz; Parnell, John; Baker, Adam

    2003-11-01

    We present a proposed robotic mission to Mars - Vanguard - for the Aurora Arrow programme which combines an extensive technology demonstrator with a high scientific return. The novel aspect of this technology demonstrator is the demonstration of "water mining" capabilities for in-situ resource utilisation in conjunction with high-value astrobiological investigation within a low mass lander package of 70 kg. The basic architecture comprises a small lander, a micro-rover and a number of ground-penetrating moles. This basic architecture offers the possibility of testing a wide variety of generic technologies associated with space systems and planetary exploration. The architecture provides for the demonstration of specific technologies associated with planetary surface exploration, and with the Aurora programme specifically. Technology demonstration of in-situ resource utilisation will be a necessary precursor to any future human mission to Mars. Furthermore, its modest mass overhead allows the reuse of the already built Mars Express bus, making it a very low cost option.

  20. A low-cost approach to the exploration of Mars through a robotic technology demonstrator mission

    NASA Astrophysics Data System (ADS)

    Ellery, Alex; Richter, Lutz; Parnell, John; Baker, Adam

    2006-10-01

    We present a proposed robotic mission to Mars—Vanguard—for the Aurora Arrow programme which combines an extensive technology demonstrator with a high scientific return. The novel aspect of this technology demonstrator is the demonstration of “water mining” capabilities for in situ resource utilisation (ISRU) in conjunction with high-value astrobiological investigation within a low-mass lander package of 70 kg. The basic architecture comprises a small lander, a micro-rover and a number of ground-penetrating moles. This basic architecture offers the possibility of testing a wide variety of generic technologies associated with space systems and planetary exploration. The architecture provides for the demonstration of specific technologies associated with planetary surface exploration, and with the Aurora programme specifically. Technology demonstration of ISRU will be a necessary precursor to any future human mission to Mars. Furthermore, its modest mass overhead allows the re-use of the already built Mars Express bus, making it a very low-cost option.

  1. Automating the design of scientific computing software

    NASA Technical Reports Server (NTRS)

    Kant, Elaine

    1992-01-01

    SINAPSE is a domain-specific software design system that generates code from specifications of equations and algorithm methods. This paper describes the system's design techniques (planning in a space of knowledge-based refinement and optimization rules), user interaction style (user has option to control decision making), and representation of knowledge (rules and objects). It also summarizes how the system knowledge has evolved over time and suggests some issues in building software design systems to facilitate reuse.

  2. Awareness about biomedical waste management and knowledge of effective recycling of dental materials among dental students

    PubMed Central

    Ranjan, Rajeev; Pathak, Ruchi; Singh, Dhirendra K.; Jalaluddin, Md.; Kore, Shobha A.; Kore, Abhijeet R.

    2016-01-01

    Aims and Objectives: Biomedical waste management has become a concern with the increasing number of dental practitioners in India. As health care professionals, dentists should be aware of the safe disposal of biomedical waste and the recycling of dental materials to minimize biohazards to the environment. The aim of the present study was to assess awareness regarding biomedical waste management as well as knowledge of effective recycling and reuse of dental materials among dental students. Materials and Methods: This cross-sectional study was conducted among dental students from all dental colleges of Bhubaneswar, Odisha (India) from February 2016 to April 2016. A total of 500 students (208 males and 292 females) participated in the study, which was conducted in two phases. A questionnaire was distributed to assess awareness of biomedical waste management and knowledge of effective recycling of dental materials, and the collected data were examined on a 5-point unipolar scale in percentages to assess the relative awareness regarding these two categories. The Statistical Package for the Social Sciences was used to analyze the collected data. Results: Forty-four percent of the dental students were not at all aware of the management of biomedical waste, 22% were moderately aware, 21% slightly aware, 7% very aware, and 5% fell in the extremely aware category. Similarly, a higher percentage of participants (61%) were completely unaware of the recycling and reusing of biomedical waste. Conclusion: There is a lack of sufficient knowledge among dental students regarding the management of biomedical waste and the recycling or reusing of dental materials. Considering its impact on the environment, biomedical waste management requires immediate academic assessment to increase awareness during training courses. PMID:27891315

  3. Critical early mission design considerations for lunar data systems architecture

    NASA Technical Reports Server (NTRS)

    Hei, Donald J., Jr.; Stephens, Elaine

    1992-01-01

    This paper outlines recent early mission design activities for a lunar data systems architecture. Each major functional element is shown to be strikingly similar when viewed in a common reference system. While this similarity probably diminishes at lower levels of decomposition, the sub-functions can always be arranged into similar and dissimilar categories. Similar functions can be implemented as objects - implemented once and reused several times like today's advanced integrated circuits. This approach to mission data systems, applied to other NASA programs, may result in substantial agency implementation and maintenance savings. In today's zero-sum-game budgetary environment, this approach could help to enable a lunar exploration program in the next decade. Several early mission studies leading to such an object-oriented data systems design are recommended.

  4. SDR/STRS Flight Experiment and the Role of SDR-Based Communication and Navigation Systems

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    2008-01-01

    This presentation describes an open architecture SDR (software defined radio) infrastructure, suitable for space-based radios and operations, entitled Space Telecommunications Radio System (STRS). SDR technologies will endow space and planetary exploration systems with dramatically increased capability, reduced power consumption, and less mass than conventional systems, at costs reduced by vigorous competition, hardware commonality, dense integration, minimizing the impact of parts obsolescence, improved interoperability, and software re-use. To advance the SDR architecture technology and demonstrate its applicability in space, NASA is developing a space experiment of multiple SDRs each with various waveforms to communicate with NASA s TDRSS satellite and ground networks, and the GPS constellation. An experiments program will investigate S-band and Ka-band communications, navigation, and networking technologies and operations.

  5. A Novel Architecture for E-Learning Knowledge Assessment Systems

    ERIC Educational Resources Information Center

    Gierlowski, Krzysztof; Nowicki, Krzysztof

    2009-01-01

    In this article we propose a novel e-learning system, dedicated strictly to knowledge assessment tasks. In its functioning it utilizes web-based technologies, but its design differs radically from currently popular e-learning solutions which rely mostly on thin-client architecture. Our research proved that such architecture, while well suited for…

  6. Exploration Space Suit Architecture and Destination Environmental-Based Technology Development

    NASA Technical Reports Server (NTRS)

    Hill, Terry R.; McFarland, Shane M.; Korona, F. Adam

    2013-01-01

    This paper continues forward where EVA Space Suit Architecture: Low Earth Orbit Vs. Moon Vs. Mars left off in the development of a space suit architecture that is modular in design and could be reconfigured prior to launch or during any given mission depending on the tasks or destination. The space suit system architecture and the technologies required for various human exploration (EVA) destinations are discussed, along with how these systems should evolve to meet the future exploration EVA needs of the US human space flight program. A series of exercises and analyses provided a strong indication that the Constellation Program space suit architecture, with its maximum reuse of technology and functionality across a range of mission profiles and destinations, is postured to provide a viable solution for future space exploration missions. The destination environmental analysis demonstrates that the modular architecture approach could provide the lowest mass and mission cost for the protection of the crew, given any human mission outside of low-Earth orbit. Additionally, some of the high-level trades presented here provide a review of the environmental and nonenvironmental design drivers that will become increasingly important as humans venture farther from Earth. The presentation of destination environmental data demonstrates a logical clustering of destination design environments that allows a focused approach to technology prioritization, development, and design that will maximize the return on investment, largely independent of any particular design reference mission.

  7. Exploration Space Suit Architecture and Destination Environmental-Based Technology Development

    NASA Technical Reports Server (NTRS)

    Hill, Terry R.; McFarland, Shane M.; Korona, F. Adam

    2013-01-01

    This paper continues forward where EVA Space Suit Architecture: Low Earth Orbit Vs. Moon Vs. Mars1 left off in the development of a space suit architecture that is modular in design and could be reconfigured prior to launch or during any given mission depending on the tasks or destination. This paper addresses the space suit system architecture and technologies required based on human exploration (EVA) destinations, and describes how these systems should evolve to meet the future exploration EVA needs of the US human space flight program. A series of exercises and analyses provided a strong indication that the Constellation Program space suit architecture, with its maximum reuse of technology and functionality across a range of mission profiles and destinations, is postured to provide a viable solution for future space exploration missions. The destination environmental analysis demonstrates that the modular architecture approach could provide the lowest mass and mission cost for the protection of the crew, given any human mission outside of low-Earth orbit. Additionally, some of the high-level trades presented here provide a review of the environmental and non-environmental design drivers that will become increasingly important as humans venture farther from Earth. This paper demonstrates a logical clustering of destination design environments that allows a focused approach to technology prioritization, development, and design that will maximize the return on investment, largely independent of any particular design reference mission.

  8. Utilizing IHE-based Electronic Health Record systems for secondary use.

    PubMed

    Holzer, K; Gall, W

    2011-01-01

    Due to the increasing adoption of Electronic Health Records (EHRs) for primary use, the number of electronic documents stored in such systems will soar in the near future. In order to benefit from this development in secondary fields such as medical research, it is important to define requirements for the secondary use of EHR data. Furthermore, analyses of the extent to which an IHE (Integrating the Healthcare Enterprise)-based architecture would fulfill these requirements could provide further information on upcoming obstacles for the secondary use of EHRs. A catalog of eight core requirements for secondary use of EHR data was deduced from the published literature, the risk analysis of the IHE profile MPQ (Multi-Patient Queries) and the analysis of relevant questions. The IHE-based architecture for cross-domain, patient-centered document sharing was extended to a cross-patient architecture. We propose an IHE-based architecture for cross-patient and cross-domain secondary use of EHR data. Evaluation of this architecture against the eight core requirements revealed full fulfillment of six requirements and partial fulfillment of two. Although not regarded as a primary goal in modern electronic healthcare, the re-use of existing electronic medical documents in EHRs for research and other fields of secondary application holds enormous potential for the future. Further research in this respect is necessary.

  9. Knowledge Repository for Fmea Related Knowledge

    NASA Astrophysics Data System (ADS)

    Cândea, Gabriela Simona; Kifor, Claudiu Vasile; Cândea, Ciprian

    2014-11-01

    This paper presents an innovative usage of a knowledge system in the Failure Mode and Effects Analysis (FMEA) process, using an ontology to represent the knowledge. The knowledge system is built to serve the multi-project work that is nowadays in place in any manufacturing or service provider; knowledge must be retained and reused at the company level, not only at the project level. The system follows the FMEA methodology, and validation of the concept is compliant with the automotive industry standards published by the Automotive Industry Action Group, among others. Collaboration is assured through a web-based GUI that supports multiple-user access at any time.
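
    As a rough illustration of what such a repository might store, the sketch below holds FMEA records with the standard risk priority number (RPN = severity x occurrence x detection) and retrieves entries for reuse across projects. The RPN convention is general FMEA practice, not taken from this record, and all component names and field values are hypothetical.

```python
# Sketch of a company-level FMEA knowledge repository whose entries
# can be reused across projects. All data is invented.

from dataclasses import dataclass

@dataclass
class FailureMode:
    component: str
    mode: str
    effect: str
    severity: int    # 1-10
    occurrence: int  # 1-10
    detection: int   # 1-10

    @property
    def rpn(self) -> int:
        """Standard risk priority number."""
        return self.severity * self.occurrence * self.detection

# Shared across projects, not tied to a single analysis.
repository = [
    FailureMode("seal", "cracking", "fluid leak", 7, 4, 3),
    FailureMode("connector", "corrosion", "signal loss", 5, 3, 2),
]

def reuse_for(component):
    """Retrieve prior failure-mode knowledge for a component."""
    return [fm for fm in repository if fm.component == component]

worst = max(repository, key=lambda fm: fm.rpn)
print(worst.component, worst.mode, worst.rpn)  # seal cracking 84
```

    An ontology-backed system, as in the record, would additionally type these fields and relate failure modes, causes, and effects so that queries can reason over them rather than match strings.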

  10. Review of pathogen treatment reductions for onsite non ...

    EPA Pesticide Factsheets

    Communities face a challenge when implementing onsite reuse of collected waters for non-potable purposes given the lack of national microbial standards. Quantitative Microbial Risk Assessment (QMRA) can be used to predict the pathogen risks associated with the non-potable reuse of onsite-collected waters; the present work reviewed the relevant QMRA literature to prioritize knowledge gaps and identify health-protective pathogen treatment reduction targets. The review indicated that ingestion of untreated, onsite-collected graywater, rainwater, seepage water and stormwater from a variety of exposure routes resulted in gastrointestinal infection risks greater than the traditional acceptable level of risk. We found no QMRAs that estimated the pathogen risks associated with onsite, non-potable reuse of blackwater. Pathogen treatment reduction targets for non-potable, onsite reuse that included a suite of reference pathogens (i.e., including relevant bacterial, protozoan, and viral hazards) were limited to graywater (for a limited set of domestic uses) and stormwater (for domestic and municipal uses). These treatment reductions corresponded with a health benchmark of a probability of infection or illness of 10(-3) per person per year or less. The pathogen treatment reduction targets varied depending on the target health benchmark, reference pathogen, source water, and water reuse application. Overall, there remains a need for pathogen reduction targets that are health-protective.
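
    The QMRA logic summarized above can be sketched as follows, assuming a simple exponential dose-response model and hypothetical parameter values (pathogen concentration, ingestion volume, exposure frequency, and dose-response parameter r). None of these numbers come from the reviewed studies; the sketch only shows how a log-reduction target follows from an annual-risk benchmark.

```python
# Sketch: find the log10 treatment reduction needed so that annual
# infection risk from non-potable reuse stays at or below 1e-3.
# Exponential dose-response assumed; all parameters are hypothetical.

import math

def annual_risk(conc, log_reduction, volume_l, r, events_per_year):
    dose = conc * 10 ** -log_reduction * volume_l   # organisms ingested
    p_event = 1 - math.exp(-r * dose)               # per-event infection
    return 1 - (1 - p_event) ** events_per_year     # annualized risk

def required_log_reduction(conc, volume_l, r, events, target=1e-3):
    lr = 0.0
    while annual_risk(conc, lr, volume_l, r, events) > target:
        lr += 0.1
    return round(lr, 1)

# Hypothetical scenario: 10 organisms/L in graywater, 1 mL ingested
# per exposure event, 365 events/year, dose-response parameter r = 0.5.
print(required_log_reduction(conc=10, volume_l=0.001, r=0.5, events=365))
```

    As the record notes, the resulting target shifts with the benchmark, reference pathogen, source water, and reuse application, which is exactly what varying these inputs shows.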

  11. Biodigester Feasibility and Design for Space & Earth

    NASA Technical Reports Server (NTRS)

    Shutts, Stacy; Ewert, Mike; Bacon, Jack

    2016-01-01

    Anaerobic digestion converts organic waste into methane gas and fertilizer effluent. The ICA-developed prototype system is designed for planetary surface operation. It uses passive hydrostatic control for reliability, and is modular and redundant. The serpentine configuration accommodates tight geometric constraints similar to the ISS ECLSS rack architectures. Its shallow, low-tilt design enables (variable) lower-g convection than standard Earth (1 g) digesters. This technology will reuse and recycle materials including human waste, excess food, and packaging (if biodegradable bags are used).

  12. Achieving design reuse: a case study

    NASA Astrophysics Data System (ADS)

    Young, Peter J.; Nielsen, Jon J.; Roberts, William H.; Wilson, Greg M.

    2008-08-01

    The RSAA CICADA data acquisition and control software package uses an object-oriented approach to model astronomical instrumentation and a layered architecture for implementation. Emphasis has been placed on building reusable C++ class libraries and on the use of attribute/value tables for dynamic configuration. This paper details how the approach has been successfully used in the construction of the instrument control software for the Gemini NIFS and GSAOI instruments. The software is again being used for the new RSAA SkyMapper and WiFeS instruments.

  13. SMART: Analyzing the Reuse Potential of Legacy Systems in Service- Oriented Architecture (SOA) Environments

    DTIC Science & Technology

    2009-04-09

    technical faculty for the Master in Software Engineering program at CMU. Grace holds a B.Sc. in Systems Engineering and an Executive MBA from Icesi...University in Cali, Colombia; and a Master in Software Engineering from Carnegie Mellon University. Version 1.7.3—SEI Webinar—April 2009 © 2009 Carnegie...Resources and Training SMART Report • http://www.sei.cmu.edu/publications/documents/08.reports/08tn008.html Public Courses • Migration of Legacy

  14. Issues and Techniques of CASE Integration With Configuration Management

    DTIC Science & Technology

    1992-03-01

    all four!) process architecture classes. For example, Frame Technology's FrameMaker is a client/server tool because it provides server functions for...FrameMaker clients; it is a parent/child tool since a top-level control panel is used to "fork" child FrameMaker sessions; the "forked" FrameMaker...sessions are persistent tools since they may be reused to create and modify any number of FrameMaker documents. Despite this, however, these process

  15. Elaboration on an Integrated Architecture and Requirement Practice: Prototyping with Quality Attribute Focus

    DTIC Science & Technology

    2013-05-01

    release level prototyping as: • The R&D prototype is typically funded by the organization, rather than the client. • The work is done in an R&D...performance) with hopes that this capability could be offered to multiple clients. The clustering prototype is developed in the organization's R&D...ICSE Conference 2013) [5] A. Martini, L. Pareto, and J. Bosch, "Enablers and inhibitors for speed with reuse," Proceedings of the 16th Software

  16. Can Inferred Provenance and Its Visualisation Be Used to Detect Erroneous Annotation? A Case Study Using UniProtKB

    PubMed Central

    Bell, Michael J.; Collison, Matthew; Lord, Phillip

    2013-01-01

    A constant influx of new data poses a challenge in keeping the annotation in biological databases current. Most biological databases contain significant quantities of textual annotation, which often contains the richest source of knowledge. Many databases reuse existing knowledge; during the curation process annotations are often propagated between entries. However, this is often not made explicit. Therefore, it can be hard, potentially impossible, for a reader to identify where an annotation originated from. Within this work we attempt to identify annotation provenance and track its subsequent propagation. Specifically, we exploit annotation reuse within the UniProt Knowledgebase (UniProtKB), at the level of individual sentences. We describe a visualisation approach for the provenance and propagation of sentences in UniProtKB which enables a large-scale statistical analysis. Initially levels of sentence reuse within UniProtKB were analysed, showing that reuse is heavily prevalent, which enables the tracking of provenance and propagation. By analysing sentences throughout UniProtKB, a number of interesting propagation patterns were identified, covering over sentences. Over sentences remain in the database after they have been removed from the entries where they originally occurred. Analysing a subset of these sentences suggests that approximately are erroneous, whilst appear to be inconsistent. These results suggest that being able to visualise sentence propagation and provenance can aid in the determination of the accuracy and quality of textual annotation. Source code and supplementary data are available from the authors' website at http://homepages.cs.ncl.ac.uk/m.j.bell1/sentence_analysis/. PMID:24143170
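
    The sentence-level reuse analysis described above can be sketched minimally: identical annotation sentences are grouped across entries, and the earliest occurrence is taken as the inferred origin of each propagated sentence. The accessions, dates, and sentences below are invented; a real UniProtKB analysis would need text normalization and far more than exact matching.

```python
# Sketch: infer provenance of reused annotation sentences by grouping
# identical sentences across entries. All entry data is hypothetical.

from collections import defaultdict

entries = [
    ("P12345", 1998, "Catalyzes the hydrolysis of ATP."),
    ("P67890", 2003, "Catalyzes the hydrolysis of ATP."),
    ("Q11111", 2005, "Catalyzes the hydrolysis of ATP."),
    ("Q22222", 2005, "Binds calcium with high affinity."),
]

# Group occurrences of each exact sentence across entries.
occurrences = defaultdict(list)
for accession, year, sentence in entries:
    occurrences[sentence].append((year, accession))

# Sentences seen in more than one entry are candidate propagations;
# the earliest dated occurrence is the inferred provenance.
for sentence, occs in occurrences.items():
    if len(occs) > 1:
        year, accession = min(occs)
        print(f"'{sentence}' first appeared in {accession} ({year}), "
              f"reused in {len(occs) - 1} later entries")
```

    The record's visualisation step would then render each such group as a propagation timeline, making erroneous or inconsistent copies easier to spot.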

  17. LEGOS: Object-based software components for mission-critical systems. Final report, June 1, 1995--December 31, 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-08-01

    An estimated 85% of the installed base of software is a custom application with a production quantity of one. In practice, almost 100% of military software systems are custom software. Paradoxically, the marginal costs of producing additional units are near zero. So why hasn't the software market, a market with high design costs and low production costs, evolved like other similar custom widget industries, such as automobiles and hardware chips? The military software industry seems immune to the market pressures that have motivated a multilevel supply chain structure in other widget industries: design cost recovery, improved quality through specialization, and rapid assembly from purchased components. The primary goal of the ComponentWare Consortium (CWC) technology plan was to overcome barriers to building and deploying mission-critical information systems by using verified, reusable software components (ComponentWare). The adoption of the ComponentWare infrastructure is predicated upon a critical mass of the leading platform vendors' inevitable adoption of emerging, object-based, distributed computing frameworks--initially CORBA and COM/OLE. The long-range goal of this work is to build and deploy military systems from verified reusable architectures. The promise of component-based applications is to enable developers to snap together new applications by mixing and matching prefabricated software components. A key result of this effort is the concept of reusable software architectures. A second important contribution is the notion that a software architecture is something that can be captured in a formal language and reused across multiple applications. The formalization and reuse of software architectures provide major cost and schedule improvements. The Unified Modeling Language (UML) is fast becoming the industry standard for object-oriented analysis and design notation for object-based systems.
However, the lack of a standard real-time distributed object operating system, the lack of a standard Computer-Aided Software Environment (CASE) tool notation, and the lack of a standard CASE tool repository have limited the realization of component software. The approach to fulfilling this need is the software component factory innovation. The factory approach takes advantage of emerging standards such as UML, CORBA, Java and the Internet. The key technical innovation of the software component factory is the ability to assemble and test new system configurations as well as assemble new tools on demand from existing tools and architecture design repositories.

  18. DataHub knowledge based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Collins, Donald J.; Doyle, Richard J.; Jacobson, Allan S.

    1991-01-01

    Viewgraphs on DataHub knowledge based assistance for science visualization and analysis using large distributed databases. Topics covered include: DataHub functional architecture; data representation; logical access methods; preliminary software architecture; LinkWinds; data knowledge issues; expert systems; and data management.

  19. Software Architecture Evaluation in Global Software Development Projects

    NASA Astrophysics Data System (ADS)

    Salger, Frank

Due to ever increasing system complexity, comprehensive methods for software architecture evaluation become more and more important. This is further stressed in global software development (GSD), where the software architecture acts as a central knowledge and coordination mechanism. However, existing methods for architecture evaluation do not take characteristics of GSD into account. In this paper we discuss which aspects are specific to architecture evaluations in GSD. Our experiences from GSD projects at Capgemini sd&m indicate that architecture evaluations differ in how rigorously one has to assess modularization, architecturally relevant processes, knowledge transfer and process alignment. From our project experience, we derive nine good practices, compliance with which should be checked in architecture evaluations in GSD. As an example, we discuss how far the standard architecture evaluation method used at Capgemini sd&m already considers the GSD-specific good practices, and outline what extensions are necessary to achieve a comprehensive architecture evaluation framework for GSD.

  20. OER (Re)Use and Language Teachers' Tacit Professional Knowledge: Three Vignettes

    ERIC Educational Resources Information Center

    Beaven, Tita

    2015-01-01

    The pedagogic practical knowledge that teachers use in their lessons is very difficult to make visible and often remains tacit. This chapter draws on data from a recent study and closely analyses a number of Open Educational Resources used by three language teachers at the UK Open University in order to try to capture how their use of the…

  1. Study on establishment of Body of Knowledge of Taiwan's Traditional Wooden Structure Technology

    NASA Astrophysics Data System (ADS)

    Huang, M. T.; Chiou, S. C.; Hsu, T. W.; Su, P. C.

    2015-08-01

The timber technology of Taiwan's traditional architecture was brought by early immigrants from southern Fujian, China, and has been passed down for over a hundred years. These traditional timber skills were once taught through apprenticeship, but as society changed, new construction of traditional buildings faded away and was gradually replaced by repair work on existing ones. As a result, construction methods, the use of tools and other practices now differ considerably from earlier ones, and the core technology faces the risk of being lost. Many studies have examined architectural styles, construction methods, schools of craftsmen and craftsmen's technical skills, or have preserved the technology through oral histories and skill studies; however, for the timber craftsmen repairing traditional buildings on the front line, whether the original construction methods and the required repair quality are being maintained for the core technology remains open to discussion.
This paper classified timber technology knowledge using document analysis and expert interviews, analysed its knowledge hierarchy, and built a preliminary framework for the timber technology knowledge system of Taiwan's traditional architecture. On the basis of this knowledge system, standards for craftsman training and skills certification were formulated, so that changes in how the knowledge is taught do not degrade craftsmen's technical capacity and, in turn, the repair quality of traditional buildings. In addition, a database system can be derived from the knowledge structure to keep the content of the core technical skills consistent. It can serve as interpretation material; the standardized knowledge and the established authority files can act as a technical specification, so that the technology is standardized and loss or distortion is avoided.

  2. Knowledge Acquisition and Management for the NASA Earth Exchange (NEX)

    NASA Astrophysics Data System (ADS)

    Votava, P.; Michaelis, A.; Nemani, R. R.

    2013-12-01

NASA Earth Exchange (NEX) is a data, computing and knowledge collaboratory that houses NASA satellite, climate and ancillary data, where a focused community can come together to share modeling and analysis codes, scientific results, knowledge and expertise on a centralized platform with access to large supercomputing resources. As more and more projects are executed on NEX, we are increasingly focusing on capturing the knowledge of NEX users and providing mechanisms for sharing it with the community in order to facilitate reuse and accelerate research. There are many possible knowledge contributions to NEX: a wiki entry on the NEX portal contributed by a developer, information extracted from a publication in an automated way, or a workflow captured during code execution on the supercomputing platform. The goal of the NEX knowledge platform is to capture and organize this information and make it easily accessible to the NEX community and beyond. The knowledge acquisition process covers three main facets: data and metadata, workflows and processes, and web-based information. Once the knowledge is acquired, it is processed in a number of ways, ranging from custom metadata parsers to entity extraction using natural language processing techniques. The processed information is linked with existing taxonomies and aligned with an internal ontology (which heavily reuses a number of external ontologies). This forms a knowledge graph that can then be used to improve users' search query results as well as provide additional analytics capabilities to the NEX system. Such a knowledge graph will be an important building block in creating a dynamic knowledge base for the NEX community, where knowledge is both generated and easily shared.
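The linking step described in the abstract can be illustrated with a deliberately tiny sketch: extract terms from a metadata record, align them with an ontology, and store the result as triples in a small graph. The ontology entries, record identifier and predicate below are invented for illustration and are not part of NEX itself.

```python
# Minimal sketch of metadata-to-ontology linking; all names are hypothetical.

# A toy "ontology": surface terms mapped to canonical concept identifiers.
ONTOLOGY = {
    "ndvi": "concept:VegetationIndex",
    "modis": "concept:SatelliteInstrument",
    "evapotranspiration": "concept:HydrologicFlux",
}

def extract_terms(text):
    """Very crude entity extraction: keep lowercase tokens found in the ontology."""
    tokens = text.lower().replace(",", " ").split()
    return [t for t in tokens if t in ONTOLOGY]

def build_triples(record_id, text):
    """Link a metadata record to ontology concepts as (subject, predicate, object)."""
    return [(record_id, "mentions", ONTOLOGY[t]) for t in extract_terms(text)]

triples = build_triples("workflow:42", "Computes NDVI from MODIS reflectances")
print(triples)

# A graph of such triples supports simple queries, e.g. which records
# mention a given concept -- the kind of lookup that improves search results.
records = {s for s, p, o in triples if o == "concept:SatelliteInstrument"}
print(records)
```

A real system would replace the token lookup with NLP-based entity extraction and a proper triple store, but the shape of the data is the same.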

  3. Terminological reference of a knowledge-based system: the data dictionary.

    PubMed

    Stausberg, J; Wormek, A; Kraut, U

    1995-01-01

The development of open and integrated knowledge bases makes new demands on the definition of the terminology used. The definitions should be maintained in a data dictionary separated from the knowledge base. As part of work on a reference model of medical knowledge, a data dictionary has been developed and used in different applications: a term definition shell, a documentation tool and a knowledge base. The data dictionary includes the part of the terminology that is largely independent of a particular knowledge model. For that reason, the data dictionary can be used as a basis for integrating knowledge bases into information systems, for knowledge sharing and reuse, and for modular development of knowledge-based systems.
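The separation the abstract argues for can be sketched in a few lines: terminology lives in a dictionary object, and knowledge-base rules refer to terms only by identifier. The class and field names below are illustrative assumptions, not the paper's actual design.

```python
from dataclasses import dataclass

# Hypothetical sketch: a data dictionary kept separate from the knowledge base.

@dataclass
class Term:
    term_id: str
    name: str
    definition: str

class DataDictionary:
    """Holds term definitions independently of any one knowledge model."""
    def __init__(self):
        self._terms = {}

    def add(self, term):
        self._terms[term.term_id] = term

    def lookup(self, term_id):
        return self._terms[term_id]

dd = DataDictionary()
dd.add(Term("T001", "fever", "Body temperature above 38 °C"))

# A knowledge-base rule references the identifier "T001", not the word "fever",
# so the same dictionary can serve several knowledge bases and applications.
rule = {"if": ["T001"], "then": "suspect infection"}
print(dd.lookup(rule["if"][0]).name)
```

Because rules carry only identifiers, renaming or refining a definition happens in one place, which is the reuse benefit the abstract describes.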

  4. NASA Enterprise Architecture and Its Use in Transition of Research Results to Operations

    NASA Astrophysics Data System (ADS)

    Frisbie, T. E.; Hall, C. M.

    2006-12-01

    Enterprise architecture describes the design of the components of an enterprise, their relationships and how they support the objectives of that enterprise. NASA Stennis Space Center leads several projects involving enterprise architecture tools used to gather information on research assets within NASA's Earth Science Division. In the near future, enterprise architecture tools will link and display the relevant requirements, parameters, observatories, models, decision systems, and benefit/impact information relationships and map to the Federal Enterprise Architecture Reference Models. Components configured within the enterprise architecture serving the NASA Applied Sciences Program include the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool. The Earth Science Components Knowledge Base systematically catalogues NASA missions, sensors, models, data products, model products, and network partners appropriate for consideration in NASA Earth Science applications projects. The Systems Components database is a centralized information warehouse of NASA's Earth Science research assets and a critical first link in the implementation of enterprise architecture. The Earth Science Architecture Tool is used to analyze potential NASA candidate systems that may be beneficial to decision-making capabilities of other Federal agencies. Use of the current configuration of NASA enterprise architecture (the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool) has far exceeded its original intent and has tremendous potential for the transition of research results to operational entities.

  5. Modeling and Improving Information Flows in the Development of Large Business Applications

    NASA Astrophysics Data System (ADS)

    Schneider, Kurt; Lübke, Daniel

    Designing a good architecture for an application is a wicked problem. Therefore, experience and knowledge are considered crucial for informing work in software architecture. However, many organizations do not pay sufficient attention to experience exploitation and architectural learning. Many users of information systems are not aware of the options and the needs to report problems and requirements. They often do not have time to describe a problem encountered in sufficient detail for developers to remove it. And there may be a lengthy process for providing feedback. Hence, the knowledge about problems and potential solutions is not shared effectively. Architectural knowledge needs to include evaluative feedback as well as decisions and their reasons (rationale).

  6. The Widest Practicable Dissemination: The NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.; Binkley, Robert L.; Kellogg, Yvonne D.; Paulson, Sharon S.; Beaumont, Chris J.; Schmunk, Robert B.; Kurtz, Michael J.; Accomazzi, Alberto

    1995-01-01

The National Aeronautics and Space Act of 1958 established NASA and charged it to "provide for the widest practicable and appropriate dissemination of information concerning [...] its activities and the results thereof." The search for innovative methods to distribute NASA's information led a grass-roots team to create the NASA Technical Report Server (NTRS), which uses the World Wide Web and other popular Internet-based information systems as search engines. The NTRS is an inter-center effort which provides uniform access to various distributed publication servers residing on the Internet. Users have immediate desktop access to technical publications from NASA centers and institutes. The NTRS comprises several units, some constructed especially for inclusion in NTRS, and others that are existing NASA publication services that NTRS reuses. This paper presents the NTRS architecture, usage metrics, and the lessons learned while implementing and maintaining the services over the initial six-month period. The NTRS is largely constructed with freely available software running on existing hardware. NTRS builds upon existing hardware and software, and the resulting additional exposure for the body of literature contained will allow NASA to ensure that its institutional knowledge base will continue to receive the widest practicable and appropriate dissemination.

  8. Integrated urban drainage, status and perspectives.

    PubMed

    Harremoës, P

    2002-01-01

    This paper summarises the status of urban storm drainage as an integrated professional discipline, including the management-policy interface, by which the goals of society are implemented. The paper assesses the development of the discipline since the INTERURBA conference in 1992 and includes aspects of the papers presented at the INTERURBA-II conference in 2001 and the discussions during the conference. Tools for integrated analysis have been developed, but there is less implementation than could be expected. That is due to lack of adequate knowledge about important mechanisms, coupled with a significant conservatism in the business. However, significant integrated analyses have been reported. Most of them deal with the sewer system and the treatment plant, while few incorporate the receiving water as anything but the object of the loads to be minimised by engineering measures up-stream. Important measures are local infiltration, source control, storage basins, local treatment and real time control. New paradigms have been introduced: risk of pollution due to system failure, technology for water reuse, sustainability, new architecture and greener up-stream solutions as opposed to down-stream concrete solutions. The challenge is to combine the inherited approaches with the new approaches by flexibility and adaptability.

  9. ABSTRACTION FOR DATA INTEGRATION: FUSING MAMMALIAN MOLECULAR, CELLULAR AND PHENOTYPE BIG DATASETS FOR BETTER KNOWLEDGE EXTRACTION

    PubMed Central

    Rouillard, Andrew D.; Wang, Zichen; Ma’ayan, Avi

    2015-01-01

With advances in genomics, transcriptomics, metabolomics and proteomics, and more expansive electronic clinical record monitoring, as well as advances in computation, we have entered the Big Data era in biomedical research. Data gathering is growing rapidly while only a small fraction of this data is converted to useful knowledge or reused in future studies. To improve this, an important concept that is often overlooked is data abstraction. To fuse and reuse biomedical datasets from diverse resources, data abstraction is frequently required. Here we summarize some of the major Big Data biomedical research resources for genomics, proteomics and phenotype data, collected from mammalian cells, tissues and organisms. We then suggest simple data abstraction methods for fusing this diverse but related data. Finally, we demonstrate examples of the potential utility of such data integration efforts, while warning about the inherent biases that exist within such data. PMID:26101093

  10. Management of Knowledge Representation Standards Activities

    NASA Technical Reports Server (NTRS)

    Patil, Ramesh S. (Principal Investigator)

    1993-01-01

    This report describes the efforts undertaken over the last two years to identify the issues underlying the current difficulties in sharing and reuse, and a community wide initiative to overcome them. First, we discuss four bottlenecks to sharing and reuse, present a vision of a future in which these bottlenecks have been ameliorated, and describe the efforts of the initiative's four working groups to address these bottlenecks. We then address the supporting technology and infrastructure that is critical to enabling the vision of the future. Finally, we consider topics of longer-range interest by reviewing some of the research issues raised by our vision.

  11. Wastewater reuse in a cascade based system of a petrochemical industry for the replacement of losses in cooling towers.

    PubMed

    Hansen, Everton; Rodrigues, Marco Antônio Siqueira; Aquim, Patrice Monteiro de

    2016-10-01

This article discusses the mapping of opportunities for cascade-based water reuse in a petrochemical industry in southern Brazil. This industrial sector has a large demand for water: in the industry studied, for example, approximately 24 million cubic meters of water were collected directly from the source in 2014. The objective of this study was to evaluate the implementation of cascade water reuse in a petrochemical industry, focusing on the reuse of aqueous streams to replenish losses in the cooling towers. This is an industrial-scale case study with real data collected during 2014 and 2015. Water reuse was assessed using a heuristic approach based on knowledge acquired during the search process. The methodology consisted of constructing a process map identifying the production stages and water consumption, and characterizing the aqueous streams involved in the process. For the application of industrial water reuse as cooling water, mass balances were carried out considering the maximum concentration levels of turbidity, pH, conductivity, alkalinity, calcium hardness, chlorides, sulfates, silica, chemical oxygen demand and suspended solids as parameters. The adopted guideline was fulfillment of the water-quality criteria for each application in the industrial process. The study showed the feasibility of reusing internal streams as makeup water in cooling towers; the implementation of the reuse presented in this paper totaled savings of 385,440 m³/year of water, a volume sufficient to supply 6,350 inhabitants for one year at the average per-capita water consumption in Brazil, in addition to 201,480 m³/year of wastewater that would no longer be generated.
Copyright © 2016 Elsevier Ltd. All rights reserved.
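The screening logic behind such a study can be sketched simply: a candidate reuse stream qualifies as cooling-tower makeup only if every monitored parameter stays within its maximum concentration limit. All limit and stream values below are hypothetical placeholders; the paper's actual criteria are plant-specific.

```python
# Illustrative quality check for a candidate reuse stream; values are invented.

# Maximum concentration limits for cooling-tower makeup (hypothetical).
LIMITS = {"turbidity_NTU": 5.0, "chlorides_mg_L": 200.0, "silica_mg_L": 150.0}

def acceptable(stream):
    """A stream qualifies for reuse if every monitored parameter is within its limit."""
    return all(stream.get(p, 0.0) <= limit for p, limit in LIMITS.items())

condensate = {"turbidity_NTU": 1.2, "chlorides_mg_L": 40.0, "silica_mg_L": 12.0}
brine      = {"turbidity_NTU": 0.8, "chlorides_mg_L": 900.0, "silica_mg_L": 30.0}

print(acceptable(condensate))  # within every limit
print(acceptable(brine))       # rejected: chlorides exceed the limit
```

In the actual study the comparison sits inside mass balances over the cooling circuit, but the pass/fail criterion per parameter has this shape.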

  12. Lizard movement tracks: variation in path re-use behaviour is consistent with a scent-marking function

    PubMed Central

Leu, Stephan T.; Jackson, Grant; Roddick, John F.; Bull, C. Michael

    2016-01-01

    Individual movement influences the spatial and social structuring of a population. Animals regularly use the same paths to move efficiently to familiar places, or to patrol and mark home ranges. We found that Australian sleepy lizards (Tiliqua rugosa), a monogamous species with stable pair-bonds, repeatedly used the same paths within their home ranges and investigated whether path re-use functions as a scent-marking behaviour, or whether it is influenced by site familiarity. Lizards can leave scent trails on the substrate when moving through the environment and have a well-developed vomeronasal system to detect and respond to those scents. Path re-use would allow sleepy lizards to concentrate scent marks along these well-used trails, advertising their presence. Hypotheses of mate attraction and mating competition predict that sleepy lizard males, which experience greater intra-sexual competition, mark more strongly. Consistent with those hypotheses, males re-used their paths more than females, and lizards that showed pairing behaviour with individuals of the opposite sex re-used paths more than unpaired lizards, particularly among females. Hinterland marking is most economic when home ranges are large and mobility is low, as is the case in the sleepy lizard. Consistent with this strategy, re-used paths were predominantly located in the inner 50% home range areas. Together, our detailed movement analyses suggest that path re-use is a scent marking behaviour in the sleepy lizard. We also investigated but found less support for alternative explanations of path re-use behaviour, such as site familiarity and spatial knowledge. Lizards established the same number of paths, and used them as often, whether they had occupied their home ranges for one or for more years. We discuss our findings in relation to maintenance of the monogamous mating system of this species, and the spatial and social structuring of the population. PMID:27019790

  14. An architecture for intelligent task interruption

    NASA Technical Reports Server (NTRS)

    Sharma, D. D.; Narayan, Srini

    1990-01-01

In the design of real-time systems the capability for task interruption is often considered essential. The problem of task interruption in knowledge-based domains is examined. It is proposed that task interruption can often be avoided by using appropriate functional architectures and knowledge engineering principles. For situations in which task interruption is indispensable, a preliminary architecture based on priority hierarchies is described.

  15. Unbiased Protein Association Study on the Public Human Proteome Reveals Biological Connections between Co-Occurring Protein Pairs

    PubMed Central

    2017-01-01

    Mass-spectrometry-based, high-throughput proteomics experiments produce large amounts of data. While typically acquired to answer specific biological questions, these data can also be reused in orthogonal ways to reveal new biological knowledge. We here present a novel method for such orthogonal data reuse of public proteomics data. Our method elucidates biological relationships between proteins based on the co-occurrence of these proteins across human experiments in the PRIDE database. The majority of the significantly co-occurring protein pairs that were detected by our method have been successfully mapped to existing biological knowledge. The validity of our novel method is substantiated by the extremely few pairs that can be mapped to existing knowledge based on random associations between the same set of proteins. Moreover, using literature searches and the STRING database, we were able to derive meaningful biological associations for unannotated protein pairs that were detected using our method, further illustrating that as-yet unknown associations present highly interesting targets for follow-up analysis. PMID:28480704
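The core of the co-occurrence idea can be sketched with a toy example: count how often a protein pair is detected in the same experiment and compare that with what the proteins' individual detection frequencies would predict under independence. This is a simplified illustration of the concept, not the paper's actual statistical procedure, and the experiment data below are invented.

```python
from itertools import combinations
from collections import Counter

# Toy "experiments": each set holds the proteins detected in one experiment.
experiments = [
    {"A", "B"},
    {"A", "B", "C"},
    {"C", "D"},
    {"A", "B", "D"},
]
n = len(experiments)

# How often each protein, and each protein pair, is observed.
single = Counter(p for exp in experiments for p in exp)
pair = Counter(frozenset(c) for exp in experiments for c in combinations(sorted(exp), 2))

def enrichment(p, q):
    """Observed co-occurrence rate over the rate expected under independence."""
    observed = pair[frozenset((p, q))] / n
    expected = (single[p] / n) * (single[q] / n)
    return observed / expected

# A and B co-occur more often than independent detection would predict.
print(round(enrichment("A", "B"), 2))
```

Pairs whose observed rate greatly exceeds the expected rate are the candidates one would then try to map to known complexes or pathways, as the abstract describes for PRIDE data.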

  16. A prototype knowledge-based decision support system for industrial waste management. Part 1: The decision support system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyle, C.A.; Baetz, B.W.

    1998-12-31

Although a number of expert systems are available that are designed to assist in resolving environmental problems, there is still a need for a system that would assist managers in determining waste management options for all types of wastes from one or more industrial plants, giving priority to sustainable use of resources, reuse and recycling. A prototype model was developed to determine the potential for reuse and recycling of waste materials, to select the treatments needed to recycle waste materials or to treat them before disposal, and to determine the potential for co-treatment of wastes. A knowledge-based decision support system was then designed using this model. This paper describes the prototype model, the developed knowledge-based decision support system, the input and storage of data within the system, and the inference engine developed for the system to determine the treatment options for the wastes. Options for sorting and selecting treatment trains are described, along with a discussion of the limitations of the approach and future developments needed for the system.

  17. Establishment of a Digital Knowledge Conversion Architecture Design Learning with High User Acceptance

    ERIC Educational Resources Information Center

    Wu, Yun-Wu; Weng, Apollo; Weng, Kuo-Hua

    2017-01-01

    The purpose of this study is to design a knowledge conversion and management digital learning system for architecture design learning, helping students to share, extract, use and create their design knowledge through web-based interactive activities based on socialization, internalization, combination and externalization process in addition to…

  18. An Object-Oriented Software Architecture for the Explorer-2 Knowledge Management Environment

    PubMed Central

    Tarabar, David B.; Greenes, Robert A.; Slosser, Eric T.

    1989-01-01

Explorer-2 is a workstation-based environment to facilitate knowledge management. It provides consistent access to a broad range of knowledge on the basis of purpose, not type. We have developed an object-oriented software architecture for Explorer-2, defining three classes of program objects: Knowledge ViewFrames, Knowledge Resources, and Knowledge Bases. This results in knowledge management at three levels: the screen level, the disk level and the meta-knowledge level. We have applied this design to several knowledge bases and believe it is broadly applicable.
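The three-class split described in the abstract can be sketched as a layered object model: a view frame (screen level) renders a resource (disk level) that a knowledge base (meta-knowledge level) catalogs. The method names, fields and example data below are invented for illustration and are not taken from Explorer-2 itself.

```python
# Hypothetical sketch of the three-level object design; names are illustrative.

class KnowledgeResource:
    """Disk level: where a piece of knowledge is stored."""
    def __init__(self, path):
        self.path = path

    def load(self):
        return f"contents of {self.path}"  # placeholder for real file access

class KnowledgeViewFrame:
    """Screen level: how a resource is presented to the user."""
    def __init__(self, resource):
        self.resource = resource

    def render(self):
        return f"[view] {self.resource.load()}"

class KnowledgeBase:
    """Meta-knowledge level: what the knowledge is about, grouping resources."""
    def __init__(self, topic, resources):
        self.topic = topic
        self.resources = resources

kb = KnowledgeBase("cardiology", [KnowledgeResource("ecg_guide.txt")])
frame = KnowledgeViewFrame(kb.resources[0])
print(frame.render())
```

Keeping presentation, storage and meta-knowledge in separate classes is what lets the same design be reapplied to different knowledge bases, as the abstract claims.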

  19. Lunar Commercial Mining Logistics

    NASA Astrophysics Data System (ADS)

    Kistler, Walter P.; Citron, Bob; Taylor, Thomas C.

    2008-01-01

Innovative commercial logistics is required for supporting lunar resource recovery operations and assisting larger consortiums in lunar mining, base operations, camp consumables and the future commercial sales of propellant over the next 50 years. To assist in lowering overall development costs, "reuse" innovation is suggested: reusing modified LTS in-space hardware on the Moon's surface, and developing product lines for recovered gases, regolith construction materials, surface logistics services, and other services as they evolve (Kistler, Citron and Taylor, 2005). The surface logistics architecture is designed for sustainable growth over 50 years, financed by private-sector partners and capable of cargo transportation in both directions in support of lunar development and resource recovery. The authors' perspective on the importance of logistics is based on five years of experience at remote sites on Earth, where remote-base supply-chain logistics didn't always work (Taylor, 1975a). The planning and control of the flow of goods and materials to and from the Moon's surface may be among the most complicated logistics challenges yet attempted. Affordability is tied to the innovation and ingenuity used to keep transportation and surface operations costs as low as practical. Eleven innovations are proposed and discussed by an entrepreneurial commercial space startup team that has had success in introducing commercial space innovation and reducing the cost of space operations in the past. This logistics architecture offers NASA and other exploring nations a commercial alternative for non-essential cargo. Five transportation technologies and eleven surface innovations create the logistics transportation system discussed.

  20. Influential aspects of leader’s Bourdieu capitals on Malaysian landscape architecture subordinates’ creativity

    NASA Astrophysics Data System (ADS)

    Zahari, R.; Ariffin, M. H.; Othman, N.

    2018-02-01

Free Trade Agreements implemented by the Malaysian government call on local businesses, such as landscape architecture consultancy firms, to expand internationally and strengthen their performance to compete locally. The performance of a landscape architecture firm, as a design firm, depends largely on the creativity of its subordinates. Past research has neglected the influence of a leader's capitals on subordinates' creativity, especially in Malaysian landscape architecture firms. The aim of this research is to investigate how subordinates' perceptions of the leader's Bourdieu capitals influence subordinates' creative behaviours in Malaysian landscape architecture firms. The sample chosen for this research comprises subordinates in registered landscape architecture firms. Data were collected through qualitative semi-structured interviews with 13 respondents and analysed using qualitative category coding. Aspects of the leader's social capital (knowledge acquisition, problem solving, motivation boosting), human capital (guidance, demotivating leadership, experiential knowledge, knowledge acquisition) and emotional capital (chemistry with the leader, respect, knowledge acquisition, trust, understanding, self-inflicted demotivation) that influence subordinates' creativity were uncovered from the data. The main finding is that the leader's capitals encourage subordinate landscape architects or assistant landscape architects to be more creative through three main things: knowledge acquisition, motivation, and the leader's ability to exert influence through positive relationships. The finding contributes a new way of understanding the leader's characteristics that influence subordinates' creativity.

  1. Framework for a clinical information system.

    PubMed

    Van de Velde, R

    2000-01-01

    The current status of our work towards the design and implementation of a reference architecture for a clinical information system is presented. This architecture has been developed and implemented from components, following a strong underlying conceptual and technological model. Common Object Request Broker Architecture (CORBA) and n-tier technology are used, with centralised and departmental clinical information systems as the back-end store for all clinical data. Servers located in the 'middle' tier apply the clinical (business) model and application rules to communicate with so-called 'thin client' workstations. The main characteristics are the focus on modelling and on reuse of both data and business logic, reflecting a shift away from data and functional modelling towards object modelling. Scalability, as well as adaptability to constantly changing requirements via component-driven computing, are the main reasons for this approach.

  2. Parallel Conjugate Gradient: Effects of Ordering Strategies, Programming Paradigms, and Architectural Platforms

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Heber, Gerd; Biswas, Rupak

    2000-01-01

    The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique to solve sparse linear systems that are symmetric and positive definite. A sparse matrix-vector multiply (SPMV) usually accounts for most of the floating-point operations within a CG iteration. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and SPMV using different programming paradigms and architectures. Results show that for this class of applications, ordering significantly improves overall performance, that cache reuse may be more important than reducing communication, and that it is possible to achieve message passing performance using shared memory constructs through careful data ordering and distribution. However, a multi-threaded implementation of CG on the Tera MTA does not require special ordering or partitioning to obtain high efficiency and scalability.
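    Since SPMV dominates the floating-point work of each CG iteration, the cache-reuse argument can be made concrete with a minimal sketch of a compressed sparse row (CSR) matrix-vector multiply. This is an illustrative serial kernel, not the paper's parallel implementation; the row ordering of the matrix determines how the entries of `x` are revisited, and hence how well the cache is reused.

    ```python
    # Illustrative CSR sparse matrix-vector multiply (SpMV), the kernel that
    # dominates each Conjugate Gradient iteration. The order in which rows
    # (and hence column indices) are traversed controls cache reuse of x.
    def spmv_csr(values, col_idx, row_ptr, x):
        """Compute y = A @ x for a matrix stored in Compressed Sparse Row form."""
        n_rows = len(row_ptr) - 1
        y = [0.0] * n_rows
        for i in range(n_rows):                      # one pass per matrix row
            acc = 0.0
            for k in range(row_ptr[i], row_ptr[i + 1]):
                acc += values[k] * x[col_idx[k]]     # indirect access into x
            y[i] = acc
        return y

    # 3x3 symmetric example: [[2, 0, 1], [0, 3, 0], [1, 0, 4]]
    values  = [2.0, 1.0, 3.0, 1.0, 4.0]
    col_idx = [0,   2,   1,   0,   2]
    row_ptr = [0, 2, 3, 5]
    print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # → [3.0, 3.0, 5.0]
    ```

    A bandwidth-reducing reordering clusters the `col_idx` values of consecutive rows, which is the mechanism behind the paper's observation that ordering can matter more than communication volume.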

  3. A National Medical Information System for Senegal: Architecture and Services.

    PubMed

    Camara, Gaoussou; Diallo, Al Hassim; Lo, Moussa; Tendeng, Jacques-Noël; Lo, Seynabou

    2016-01-01

    In Senegal, great amounts of data are generated daily by medical activities such as consultations, hospitalizations, blood tests, x-rays, births, deaths, etc. These data are still recorded in registers, printed images, audio and video recordings, which are processed manually. Some medical organizations have their own software for non-standardized patient record management, appointments, wages, etc., without any possibility of sharing these data or communicating with other medical structures. This leads to many limitations in reusing or sharing these data because of their possible structural and semantic heterogeneity. To overcome these problems, we have proposed a National Medical Information System for Senegal (SIMENS). As an integrated platform, SIMENS provides an EHR system that supports healthcare activities, a mobile version and a web portal. The SIMENS architecture also proposes data and application integration services to support interoperability and decision making.

  4. Secure Network-Centric Aviation Communication (SNAC)

    NASA Technical Reports Server (NTRS)

    Nelson, Paul H.; Muha, Mark A.; Sheehe, Charles J.

    2017-01-01

    The existing National Airspace System (NAS) communications capabilities are largely unsecured, are not designed for efficient use of spectrum, and collectively are not capable of servicing the future needs of the NAS with the inclusion of new operators in Unmanned Aviation Systems (UAS) or On Demand Mobility (ODM). SNAC will provide a ubiquitous, secure, network-based communications architecture that will provide new service capabilities and allow for the migration of current communications to SNAC over time. The necessary change of communication technologies to digital domains will allow for the adoption of security mechanisms, sharing of link technologies, a large increase in spectrum utilization, new forms of resilience and redundancy, and the possibility of spectrum reuse. SNAC consists of a long-term open architectural approach with increasingly capable designs used to steer research and development and enable operating capabilities that run in parallel with current NAS systems.

  5. Improved cache performance in Monte Carlo transport calculations using energy banding

    NASA Astrophysics Data System (ADS)

    Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.

    2014-04-01

    We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
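    The banding idea can be sketched as grouping pending cross-section lookups by energy band, so that each band of the table is streamed through fast memory once per pass instead of being accessed at random. The sketch below is a schematic reconstruction under simplified assumptions (a 1-D toy table, integer-truncation lookup), not the authors' MC transport code.

    ```python
    # Schematic sketch of energy banding: lookups are bucketed by energy band
    # so only one band of the cross-section table is "resident" at a time,
    # maximizing temporal reuse of that band before moving to the next.
    from collections import defaultdict

    def banded_lookup(energies, table, band_width):
        """Return table values for each particle energy, processed band by band."""
        by_band = defaultdict(list)
        for idx, e in enumerate(energies):           # bucket particles by band
            by_band[int(e // band_width)].append(idx)
        out = [0.0] * len(energies)
        for band in sorted(by_band):                 # one band resident at a time
            for idx in by_band[band]:
                out[idx] = table[int(energies[idx])] # lookups stay within this band
        return out

    table = [float(i) * 0.5 for i in range(10)]      # toy cross-section table
    energies = [7.2, 1.1, 3.9, 7.8]
    print(banded_lookup(energies, table, band_width=4))  # → [3.5, 0.5, 1.5, 3.5]
    ```

    The result is identical to naive per-particle lookup; only the access order changes, which is what makes the band size a free parameter that can be tuned to a given cache hierarchy.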

  6. Open access and preservation of data on the coupled geosphere-biosphere system: the case of the H2020 Project ECOPOTENTIAL

    NASA Astrophysics Data System (ADS)

    Provenzale, Antonello; Nativi, Stefano

    2016-04-01

    The H2020 ECOPOTENTIAL Project addresses the entire chain of ecosystem-related services, by focusing on the interaction between the biotic and abiotic components of ecosystems (geosphere-biosphere interactions), developing ecosystem data services with special emphasis on Copernicus services, implementing model output services to distribute the results of the modelling activities, and estimating current and future ecosystem services and benefits by combining ecosystem functions (supply) with beneficiaries' needs (demand). In ECOPOTENTIAL all data, model results and acquired knowledge will be made available on common and open platforms, coherent with the Global Earth Observation System of Systems (GEOSS) data sharing principles and fully interoperable with the GEOSS Common Infrastructure (GCI). ECOPOTENTIAL will be conducted in the context of the implementation of the Copernicus EO Component and in synergy with the ESA Climate Change Initiative. The project activities will contribute to Copernicus and non-Copernicus contexts for ecosystems, and will create an Ecosystem Data Service for Copernicus (ECOPERNICUS), a new open-access, smart and user-friendly geospatial data/products retrieval portal and web coverage service using a dedicated online server. ECOPOTENTIAL will make data, scientific results, models and information accessible and available through a cloud-based open platform implementing virtual laboratories. The platform will be a major contribution to the GEOSS Common Infrastructure, reinforcing the GEOSS Data-CORE. By the end of the project, new prototype products and ecosystem services, based on improved access (notably via GEOSS) and long-term storage of ecosystem EO data and information in existing PAs, will be realized. In this contribution, we discuss the approach followed in the project for Open Data access and use. ECOPOTENTIAL introduced a set of architecture and interoperability principles to facilitate data (and associated software) discovery, access, (re-)use, and preservation. Following these principles, ECOPOTENTIAL worked out a Data Management Plan that describes how the different data types (generated and/or collected by the project) will be managed; in particular: (1) what standards will be used to make these data discoverable, accessible and (re-)usable; (2) how these data will be exploited and/or shared/made accessible for verification and reuse (if data cannot be made available, the reasons will be fully explained); and (3) how these data will be curated and preserved, even beyond the project duration.

  7. A portable expression resource for engineering cross-species genetic circuits and pathways

    PubMed Central

    Kushwaha, Manish; Salis, Howard M.

    2015-01-01

    Genetic circuits and metabolic pathways can be reengineered to allow organisms to process signals and manufacture useful chemicals. However, their functions currently rely on organism-specific regulatory parts, fragmenting synthetic biology and metabolic engineering into host-specific domains. To unify efforts, here we have engineered a cross-species expression resource that enables circuits and pathways to reuse the same genetic parts, while functioning similarly across diverse organisms. Our engineered system combines mixed feedback control loops and cross-species translation signals to autonomously self-regulate expression of an orthogonal polymerase without host-specific promoters, achieving nontoxic and tuneable gene expression in diverse Gram-positive and Gram-negative bacteria. Combining 50 characterized system variants with mechanistic modelling, we show how the cross-species expression resource's dynamics, capacity and toxicity are controlled by the control loops' architecture and feedback strengths. We also demonstrate one application of the resource by reusing the same genetic parts to express a biosynthesis pathway in both model and non-model hosts. PMID:26184393

  8. Space Software Defined Radio Characterization to Enable Reuse

    NASA Technical Reports Server (NTRS)

    Mortensen, Dale J.; Bishop, Daniel W.; Chelmins, David

    2012-01-01

    NASA's Space Communication and Navigation Testbed is beginning operations on the International Space Station this year. The objective is to promote new software defined radio technologies and associated software application reuse, enabled by this first flight of NASA's Space Telecommunications Radio System architecture standard. The Space Station payload has three software defined radios onboard that allow for a wide variety of communications applications; however, each radio was launched with only one waveform application. By design, the testbed allows new waveform applications to be uploaded and tested by experimenters both inside and outside of NASA. During the system integration phase of the testbed, special waveform test modes and stand-alone test waveforms were used to characterize the SDR platforms for the future experiments. Characterization of the testbed's JPL SDR using test waveforms and specialized ground test modes is discussed in this paper. One of the test waveforms, a record and playback application, can be utilized in a variety of ways, including on-orbit checkout of new satellites as well as independent on-board testbed experiments.

  9. STRS Radio Service Software for NASA's SCaN Testbed

    NASA Technical Reports Server (NTRS)

    Mortensen, Dale J.; Bishop, Daniel Wayne; Chelmins, David T.

    2012-01-01

    NASA's Space Communication and Navigation (SCaN) Testbed was launched to the International Space Station in 2012. The objective is to promote new software defined radio technologies and associated software application reuse, enabled by this first flight of NASA's Space Telecommunications Radio System (STRS) architecture standard. Pre-launch testing with the testbed's software defined radios was performed as part of system integration. Radio services for the JPL SDR were developed during system integration to allow the waveform application to operate properly in the space environment, especially considering thermal effects. These services include receiver gain control, frequency offset, IQ modulator balance, and transmit level control. Development, integration, and environmental testing of the radio services will be described. The added software allows the waveform application to operate properly in the space environment, and can be reused by future experimenters testing different waveform applications. Integrating such services with the platform-provided STRS operating environment will attract more users, and these services are candidates for interface standardization via STRS.

  10. STRS Radio Service Software for NASA's SCaN Testbed

    NASA Technical Reports Server (NTRS)

    Mortensen, Dale J.; Bishop, Daniel Wayne; Chelmins, David T.

    2013-01-01

    NASA's Space Communication and Navigation(SCaN) Testbed was launched to the International Space Station in 2012. The objective is to promote new software defined radio technologies and associated software application reuse, enabled by this first flight of NASA's Space Telecommunications Radio System (STRS) architecture standard. Pre-launch testing with the testbed's software defined radios was performed as part of system integration. Radio services for the JPL SDR were developed during system integration to allow the waveform application to operate properly in the space environment, especially considering thermal effects. These services include receiver gain control, frequency offset, IQ modulator balance, and transmit level control. Development, integration, and environmental testing of the radio services will be described. The added software allows the waveform application to operate properly in the space environment, and can be reused by future experimenters testing different waveform applications. Integrating such services with the platform provided STRS operating environment will attract more users, and these services are candidates for interface standardization via STRS.

  11. Architecture and Initial Development of a Knowledge-as-a-Service Activator for Computable Knowledge Objects for Health.

    PubMed

    Flynn, Allen J; Boisvert, Peter; Gittlen, Nate; Gross, Colin; Iott, Brad; Lagoze, Carl; Meng, George; Friedman, Charles P

    2018-01-01

    The Knowledge Grid (KGrid) is a research and development program toward infrastructure capable of greatly decreasing latency between the publication of new biomedical knowledge and its widespread uptake into practice. KGrid comprises digital knowledge objects, an online Library to store them, and an Activator that uses them to provide Knowledge-as-a-Service (KaaS). KGrid's Activator enables computable biomedical knowledge, held in knowledge objects, to be rapidly deployed at Internet-scale in cloud computing environments for improved health. Here we present the Activator, its system architecture and primary functions.

  12. Optimization of knowledge sharing through multi-forum using cloud computing architecture

    NASA Astrophysics Data System (ADS)

    Madapusi Vasudevan, Sriram; Sankaran, Srivatsan; Muthuswamy, Shanmugasundaram; Ram, N. Sankar

    2011-12-01

    Knowledge sharing is done through various knowledge sharing forums, which require multiple logins through multiple browser instances. Here a single multi-forum knowledge sharing concept is introduced which requires only one login session, allowing the user to connect to multiple forums and display the data in a single browser window. A few optimization techniques are also introduced to speed up access time, using a cloud computing architecture.

  13. Software synthesis using generic architectures

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay

    1993-01-01

    A framework for synthesizing software systems based on abstracting software system designs and the design process is described. The result of such an abstraction process is a generic architecture and the process knowledge for customizing the architecture. The customization process knowledge is used to assist a designer in customizing the architecture, as opposed to completely automating the design of systems. The approach is illustrated using an implemented example of a generic tracking architecture that was customized in two different domains. How the designs produced using KASE compare to the original designs of the two systems, along with current work and plans for extending KASE to other application areas, is also described.

  14. Mapping Research in Landscape Architecture: Balancing Supply of Academic Knowledge and Demand of Professional Practice

    ERIC Educational Resources Information Center

    Chen, Zheng; Miller, Patrick A.; Clements, Terry L.; Kim, Mintai

    2017-01-01

    With increasing academic research in the past few decades, the knowledge scope of landscape architecture has expanded from traditional focus on aesthetics to a broad range of ecological, cultural and psychological issues. In order to understand how academic research and knowledge expansion may have redefined the practice, two surveys were…

  15. Development of mobile platform integrated with existing electronic medical records.

    PubMed

    Kim, YoungAh; Kim, Sung Soo; Kang, Simon; Kim, Kyungduk; Kim, Jun

    2014-07-01

    This paper describes a mobile Electronic Medical Record (EMR) platform designed to manage and utilize the existing EMR and mobile application with optimized resources. We structured the mEMR to reuse services of retrieval and storage in mobile app environments that have already proven to have no problem working with EMRs. A new mobile architecture-based mobile solution was developed in four steps: the construction of a server and its architecture; screen layout and storyboard making; screen user interface design and development; and a pilot test and step-by-step deployment. This mobile architecture consists of two parts, the server-side area and the client-side area. In the server-side area, it performs the roles of service management for EMR and documents and for information exchange. Furthermore, it performs menu allocation depending on user permission and automatic clinical document architecture document conversion. Currently, Severance Hospital operates an iOS-compatible mobile solution based on this mobile architecture and provides stable service without additional resources, dealing with dynamic changes of EMR templates. The proposed mobile solution should go hand in hand with the existing EMR system, and it can be a cost-effective solution if a quality EMR system is operated steadily with this solution. Thus, we expect this example to be shared with hospitals that currently plan to deploy mobile solutions.

  16. Development of Mobile Platform Integrated with Existing Electronic Medical Records

    PubMed Central

    Kim, YoungAh; Kang, Simon; Kim, Kyungduk; Kim, Jun

    2014-01-01

    Objectives This paper describes a mobile Electronic Medical Record (EMR) platform designed to manage and utilize the existing EMR and mobile application with optimized resources. Methods We structured the mEMR to reuse services of retrieval and storage in mobile app environments that have already proven to have no problem working with EMRs. A new mobile architecture-based mobile solution was developed in four steps: the construction of a server and its architecture; screen layout and storyboard making; screen user interface design and development; and a pilot test and step-by-step deployment. This mobile architecture consists of two parts, the server-side area and the client-side area. In the server-side area, it performs the roles of service management for EMR and documents and for information exchange. Furthermore, it performs menu allocation depending on user permission and automatic clinical document architecture document conversion. Results Currently, Severance Hospital operates an iOS-compatible mobile solution based on this mobile architecture and provides stable service without additional resources, dealing with dynamic changes of EMR templates. Conclusions The proposed mobile solution should go hand in hand with the existing EMR system, and it can be a cost-effective solution if a quality EMR system is operated steadily with this solution. Thus, we expect this example to be shared with hospitals that currently plan to deploy mobile solutions. PMID:25152837

  17. Software architecture for time-constrained machine vision applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2013-01-01

    Real-time image and video processing applications require skilled architects, and recent trends in hardware platforms make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented toward particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse, and inefficient execution on multicore processors. We present a novel software architecture for time-constrained machine vision applications that addresses these issues. The architecture is divided into three layers. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message-passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route the messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository of reusable application modules designed for machine vision applications. These modules, which include acquisition, visualization, communication, user interface, and data processing, take advantage of the power of well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, the proposed architecture is applied to a real machine vision application: a jam detector for steel pickling lines.
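    The messaging layer's topic-based routing can be illustrated with a minimal publish/subscribe bus. This is a generic sketch of the pattern, not the paper's implementation; the topic names are hypothetical.

    ```python
    # Minimal topic-based publish/subscribe bus: subscribers register a
    # callback for a topic, and published messages are delivered only to
    # subscribers of that topic (topic-based filtering).
    from collections import defaultdict

    class MessageBus:
        def __init__(self):
            self._subs = defaultdict(list)           # topic -> list of callbacks

        def subscribe(self, topic, callback):
            self._subs[topic].append(callback)

        def publish(self, topic, message):
            for callback in self._subs[topic]:       # route by topic only
                callback(message)

    bus = MessageBus()
    frames = []
    bus.subscribe("acquisition/frame", frames.append)    # hypothetical topic name
    bus.publish("acquisition/frame", "frame-001")
    bus.publish("stats/jam", "not seen by the frame subscriber")
    print(frames)  # → ['frame-001']
    ```

    Decoupling publishers from subscribers in this way is what lets modules such as acquisition and visualization be swapped or reused without changing each other's code.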

  18. Enabling Data-as- a-Service (DaaS) - Biggest Challenge of Geoscience Australia

    NASA Astrophysics Data System (ADS)

    Bastrakova, I.; Kemp, C.; Car, N. J.

    2016-12-01

    Geoscience Australia (GA) is recognised and respected as the national repository and steward of multiple nationally significant data collections, providing geoscience information, services and capability to the Australian Government, industry and stakeholders. Provision of Data-as-a-Service (DaaS) is both GA's key responsibility and its core business. Through the Science First Transformation Program, GA is undergoing a significant rethinking of its data architecture, curation and access to support the Digital Science capability, for which DaaS is both a dependency and an underpinning of its implementation. DaaS, being a service, means we can deliver its outputs in multiple ways, providing users with data on demand in ready-for-consumption forms. We can then reuse prebuilt data constructions to allow self-service integration of data underpinned by dynamic query tools. In GA's context, examples of DaaS are the Australian Geoscience Data Cube, the Foundation Spatial Data Framework and data served through several Virtual Laboratories. We have implemented a three-layered architecture for DaaS in order to store and manage the data while honouring the semantics of the Scientific Data Models defined by subject matter experts and GA's Enterprise Data Architecture, and to retain that delivery flexibility. The foundation layer of DaaS is Canonical Datasets, which are optimised for long-term data stewardship and curation. Data is well structured, standardised, described and audited. All data creation and editing happen within this layer. The middle Data Transformation layer assists with transformation of data from Canonical Datasets to the data integration layer. It provides mechanisms for multi-format and multi-technology data transformation. The top Data Integration layer is optimised for data access. Data can be easily reused and repurposed; the data formats made available are optimised for scientific computing and adjusted for access by multiple applications, tools and libraries. Moving to DaaS enables GA to increase data alertness, generate new capabilities and be prepared for emerging technological challenges.

  19. Evolution of a Reconfigurable Processing Platform for a Next Generation Space Software Defined Radio

    NASA Technical Reports Server (NTRS)

    Kacpura, Thomas J.; Downey, Joseph A.; Anderson, Keffery R.; Baldwin, Keith

    2014-01-01

    The National Aeronautics and Space Administration (NASA)/Harris Ka-Band Software Defined Radio (SDR) is the first fully reprogrammable, space-qualified SDR operating in the Ka-Band frequency range. Providing exceptionally higher data communication rates than previously possible, this SDR offers in-orbit reconfiguration, multi-waveform operation, and fast deployment due to its highly modular hardware and software architecture. Currently in operation on the International Space Station (ISS), this new paradigm of reconfigurable technology is enabling experimenters to investigate navigation and networking in the space environment. The modular SDR and the NASA-developed Space Telecommunications Radio System (STRS) architecture standard are the basis for Harris' reusable digital signal processing space platform, trademarked as AppSTAR. As a result, two new space radio products are a synthetic aperture radar payload and an Automatic Dependent Surveillance-Broadcast (ADS-B) receiver. In addition, Harris is currently developing many new products similar to the Ka-Band software defined radio for other applications. For NASA's next-generation flight Ka-Band radio development, leveraging these advancements could lead to a more robust and more capable software defined radio. The space environment has special considerations, different from terrestrial applications, that must be taken into account for any system operated in space. Each space mission also has unique requirements that can make these systems one of a kind, which in turn can make products expensive and limited in reuse. Space systems put a premium on size, weight and power. A key trade is the amount of reconfigurability in a space system: the more reconfigurable the hardware platform, the easier it is to adapt the platform to the next mission, which reduces non-recurring engineering costs; however, more reconfigurable platforms often use more spacecraft resources. Software has similar considerations to hardware. Having an architecture standard promotes reuse of software and firmware. Space platforms have limited processor capability, which makes the trade on the amount of flexibility paramount.

  20. Towards a Conceptual Design of a Cross-Domain Integrative Information System for the Geosciences

    NASA Astrophysics Data System (ADS)

    Zaslavsky, I.; Richard, S. M.; Valentine, D. W.; Malik, T.; Gupta, A.

    2013-12-01

    As geoscientists increasingly focus on studying processes that span multiple research domains, there is an increased need for cross-domain interoperability solutions that can scale to the entire geosciences, bridging information and knowledge systems, models and software tools, as well as connecting researchers and organizations. Creating a community-driven cyberinfrastructure (CI) to address the grand challenges of integrative Earth science research and education is the focus of EarthCube, a new research initiative of the U.S. National Science Foundation. We are approaching EarthCube design as a complex socio-technical system of systems, in which communication between various domain subsystems, people and organizations enables more comprehensive, data-intensive research designs and knowledge sharing. In particular, we focus on integrating 'traditional' layered CI components - including information sources, catalogs, vocabularies, services, analysis and modeling tools - with CI components supporting scholarly communication, self-organization and social networking (e.g. research profiles, Q&A systems, annotations), in a manner that follows and enhances existing patterns of data, information and knowledge exchange within and across geoscience domains.
We describe an initial architecture design focused on enabling the CI to (a) provide an environment for scientifically sound information and software discovery and reuse; (b) evolve by factoring in the impact of maturing movements like linked data, 'big data', and social collaborations, as well as experience from work on large information systems in other domains; (c) handle the ever increasing volume, complexity and diversity of geoscience information; (d) incorporate new information and analytical requirements, tools, and techniques, and emerging types of earth observations and models; (e) accommodate different ideas and approaches to research and data stewardship; (f) be responsive to the existing and anticipated needs of researchers and organizations representing both established and emerging CI users; and (g) make best use of NSF's current investment in the geoscience CI. The presentation will focus on the challenges and methodology of EarthCube CI design, in particular on supporting social engagement and interaction between geoscientists and computer scientists as a core function of EarthCube architecture. This capability must include mechanisms to not only locate and integrate available geoscience resources, but also engage individuals and projects, research products and publications, and enable efficient communication across many EarthCube stakeholders leading to long-term institutional alignment and trusted collaborations.

  1. Toward a Literature-Driven Definition of Big Data in Healthcare

    PubMed Central

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. Results. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Conclusion. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data. PMID:26137488

  2. Pathogens Assessment in Reclaimed Effluent Used for Industrial Crops Irrigation

    PubMed Central

    Al-Sa’ed, R.

    2007-01-01

    Treated effluent is a highly valued water source in Palestine, but its reuse has had limited success due to public health concerns. This paper assesses the potential pathogens in raw, treated and reclaimed wastewater at the Albireh urban wastewater treatment facility, and provides scientific knowledge to update the Palestinian reuse guidelines. Laboratory analyses of samples collected over a period of 4 months indicated that the raw wastewater from Albireh city contained high numbers of fecal coliforms and worm eggs, while 31% of the samples were Salmonella positive. Treated effluent suitable for restricted irrigation demonstrated that the plant was efficient in removing indicator bacteria: fecal coliform and fecal streptococci removal averaged 99.64% and 93.44%, respectively. Although not disinfected, the treated effluent was free of Salmonella and parasites, and hence safe for restricted agricultural purposes. All samples of the reclaimed effluent and three samples of irrigated grass were devoid of microbial pathogens, indicating safe use in unrestricted agricultural utilization. Adequate operation of wastewater treatment facilities, scientific updating of reuse guidelines and launching of public awareness campaigns are core factors for successful and sustainable large-scale wastewater reuse schemes in Palestine. PMID:17431318

  3. Toward a Literature-Driven Definition of Big Data in Healthcare.

    PubMed

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    The aim of this study was to provide a definition of big data in healthcare. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data.
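
    The volume criterion above is easy to apply mechanically. A minimal sketch, assuming the base-10 logarithm implied by the paper's definition (the `is_big_data` helper is illustrative, not from the paper):

```python
import math

def is_big_data(n: int, p: int, threshold: float = 7.0) -> bool:
    """Classify a dataset by the paper's volume criterion: log10(n * p) >= 7."""
    return math.log10(n * p) >= threshold

# A hypothetical omics dataset: 1,000 individuals x 500,000 variables
print(is_big_data(1_000, 500_000))  # True: log10(5e8) is about 8.7

# A hypothetical small clinical study: 200 patients x 30 variables
print(is_big_data(200, 30))         # False: log10(6e3) is about 3.8
```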

  4. Generic Software Architecture for Launchers

    NASA Astrophysics Data System (ADS)

    Carre, Emilien; Gast, Philippe; Hiron, Emmanuel; Leblanc, Alain; Lesens, David; Mescam, Emmanuelle; Moro, Pierre

    2015-09-01

    The definition and reuse of a generic software architecture for launchers is not so usual, for several reasons: the number of European launcher families is very small (Ariane 5 and Vega for the last decades); the real-time constraints (reactivity and determinism needs) are very hard; and low levels of versatility are required (often implying an ad hoc development of the launcher mission). In comparison, satellites are often built on a generic platform made up of reusable hardware building blocks (processors, star-trackers, gyroscopes, etc.) and reusable software building blocks (middleware, TM/TC, On Board Control Procedure, etc.). Even if some of these reasons are still valid (e.g. the limited number of developments), the increase in available CPU power today makes achievable an approach based on a generic time-triggered middleware (ensuring the full determinism of the system) and a centralised mission and vehicle management (offering more flexibility in the design and facilitating long-term maintenance). This paper presents an example of a generic software architecture which could be envisaged for future launchers, based on the previously described principles and supported by model-driven engineering and automatic code generation.
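
    A time-triggered middleware of the kind mentioned above executes tasks at fixed slots of a static schedule, which is what makes the system fully deterministic. A minimal illustrative sketch (the class and task names are our own, not from the paper):

```python
# Minimal time-triggered executive: a fixed table of (slot, task) entries
# is executed cyclically, so the sequence of activations is fully
# deterministic -- the property highlighted for launcher software.
from typing import Callable, List, Tuple

class TimeTriggeredScheduler:
    def __init__(self, schedule: List[Tuple[int, Callable[[], None]]], slots_per_cycle: int):
        self.schedule = schedule
        self.slots_per_cycle = slots_per_cycle

    def run(self, cycles: int) -> List[str]:
        trace = []  # record of activations, to show determinism
        for cycle in range(cycles):
            for slot in range(self.slots_per_cycle):
                for s, task in self.schedule:
                    if s == slot:
                        trace.append(f"{cycle}:{slot}:{task.__name__}")
                        task()
        return trace

# Hypothetical GNC tasks, each pinned to its own slot:
def navigation(): pass
def guidance(): pass
def control(): pass

sched = TimeTriggeredScheduler([(0, navigation), (1, guidance), (2, control)],
                               slots_per_cycle=4)
print(sched.run(cycles=2))
```

Running the same schedule twice always yields the same activation trace, which is the determinism property the abstract emphasises.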

  5. Design of a decentralized reusable research database architecture to support data acquisition in large research projects.

    PubMed

    Iavindrasana, Jimison; Depeursinge, Adrien; Ruch, Patrick; Spahni, Stéphane; Geissbuhler, Antoine; Müller, Henning

    2007-01-01

    The diagnostic and therapeutic processes, as well as the development of new treatments, are hindered by the fragmentation of the information which underlies them. In a multi-institutional research study database, the clinical information system (CIS) contains the primary data input. A large part of the budget of large-scale clinical studies is often spent on data creation and maintenance. The objective of this work is to design a decentralized, scalable, reusable database architecture with lower maintenance costs for managing and integrating the distributed heterogeneous data required as the basis for a large-scale research project. Technical and legal aspects are taken into account based on various use case scenarios. The architecture contains four layers: data storage and access, decentralized at their production source; a connector acting as a proxy between the CIS and the external world; an information mediator as a data access point; and the client side. The proposed design will be implemented inside six clinical centers participating in the @neurIST project as part of a larger system on data integration and reuse for aneurysm treatment.
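
    The layering can be sketched as minimal interfaces: data stays at its production source behind a connector, and a mediator federates queries across sites for clients. All names are illustrative assumptions, not the @neurIST API:

```python
# Sketch of the layered design: each clinical site keeps its data locally,
# a Connector proxies access to it, and a Mediator offers clients a single
# query point over all sites.
from typing import Dict, List

class Connector:
    """Proxy between a site's clinical information system and the outside."""
    def __init__(self, site: str, records: List[Dict]):
        self.site = site
        self._records = records  # data stays at the production source

    def query(self, field: str, value) -> List[Dict]:
        return [r for r in self._records if r.get(field) == value]

class Mediator:
    """Single data-access point federating all site connectors."""
    def __init__(self, connectors: List[Connector]):
        self.connectors = connectors

    def query(self, field: str, value) -> List[Dict]:
        results = []
        for c in self.connectors:
            for r in c.query(field, value):
                results.append({**r, "site": c.site})  # annotate provenance
        return results

site_a = Connector("hospital_a", [{"patient": 1, "diagnosis": "aneurysm"}])
site_b = Connector("hospital_b", [{"patient": 2, "diagnosis": "aneurysm"}])
print(Mediator([site_a, site_b]).query("diagnosis", "aneurysm"))
```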

  6. Critical Technology Determination for Future Human Space Flight

    NASA Technical Reports Server (NTRS)

    Mercer, Carolyn R.; Vangen, Scott D.; Williams-Byrd, Julie A.; Stecklein, Jonette M.; Alexander, Leslie; Rahman, Shamim A.; Rosenthal, Matthew; Wiley, Dianne S.; Davison, Stephan C.; Korsmeyer, David J.

    2012-01-01

    As the National Aeronautics and Space Administration (NASA) prepares to extend human presence throughout the solar system, technical capabilities must be developed to enable long duration flights to destinations such as near Earth asteroids, Mars, and extended stays on the Moon. As part of the NASA Human Spaceflight Architecture Team, a Technology Development Assessment Team has identified a suite of critical technologies needed to support this broad range of missions. Dialog between mission planners, vehicle developers, and technologists was used to identify a minimum but sufficient set of technologies, noting that needs are created by specific mission architecture requirements, yet specific designs are enabled by technologies. Further consideration was given to the re-use of underlying technologies to cover multiple missions to effectively use scarce resources. This suite of critical technologies is expected to provide the needed base capability to enable a variety of possible destinations and missions. This paper describes the methodology used to provide an architecture driven technology development assessment (technology pull), including technology advancement needs identified by trade studies encompassing a spectrum of flight elements and destination design reference missions.

  7. Critical Technology Determination for Future Human Space Flight

    NASA Technical Reports Server (NTRS)

    Mercer, Carolyn R.; Vangen, Scott D.; Williams-Byrd, Julie A.; Stecklein, Jonette M.; Rahman, Shamim A.; Rosenthal, Matthew E.; Hornyak, David M.; Alexander, Leslie; Korsmeyer, David J.; Tu, Eugene L.

    2012-01-01

    As the National Aeronautics and Space Administration (NASA) prepares to extend human presence throughout the solar system, technical capabilities must be developed to enable long duration flights to destinations such as near Earth asteroids, Mars, and extended stays on the Moon. As part of the NASA Human Spaceflight Architecture Team, a Technology Development Assessment Team has identified a suite of critical technologies needed to support this broad range of missions. Dialog between mission planners, vehicle developers, and technologists was used to identify a minimum but sufficient set of technologies, noting that needs are created by specific mission architecture requirements, yet specific designs are enabled by technologies. Further consideration was given to the re-use of underlying technologies to cover multiple missions to effectively use scarce resources. This suite of critical technologies is expected to provide the needed base capability to enable a variety of possible destinations and missions. This paper describes the methodology used to provide an architecture-driven technology development assessment ("technology pull"), including technology advancement needs identified by trade studies encompassing a spectrum of flight elements and destination design reference missions.

  8. The Present of Architectural Psychology Researches in China- Based on the Bibliometric Analysis and Knowledge Mapping

    NASA Astrophysics Data System (ADS)

    Zhu, LeiYe; Wang, Qi; Xu, JunHua; Wu, Qing; Jin, MeiDong; Liao, RongJun; Wang, HaiBin

    2018-03-01

    Architectural Psychology is an interdisciplinary subject of psychology and architecture that focuses on architectural design by using Gestalt psychology, cognitive psychology and other related psychological principles. Researchers from China have achieved fruitful results in the field of architectural psychology during the past thirty-three years. To reveal the current situation of the field in China, 129 related papers from the China National Knowledge Infrastructure (CNKI) were analyzed with the CiteSpace II software. The results show that: (1) studies in the field in China have been carried out since 1984; the annual number of papers increased dramatically from 2008 and reached a historical peak in 2016; Shanxi Architecture tops the list of contributing journals, and Wuhan University, Southwest Jiaotong University and Chongqing University are the best performers among the contributing organizations; (2) “Environmental Psychology”, “Architectural Design” and “Architectural Psychology” are the most frequent keywords; the frontiers of the field in China are “architectural creation” and “environmental psychology”, while the popular research topics are “residential environment”, “spatial environment”, “environmental psychology”, “architectural theory” and “architectural psychology”.

  9. Molecular basis of angiosperm tree architecture

    USDA-ARS?s Scientific Manuscript database

    The shoot architecture of trees greatly impacts orchard and forest management methods. Amassing greater knowledge of the molecular genetics behind tree form can benefit these industries as well as contribute to basic knowledge of plant developmental biology. This review covers basic components of ...

  10. Argumentation Text Construction by Japanese as a Foreign Language Writers: A Dynamic View of Transfer

    ERIC Educational Resources Information Center

    Rinnert, Carol; Kobayashi, Hiroe; Katayama, Akemi

    2015-01-01

    This study takes a dynamic view of transfer as reusing and reshaping previous knowledge in new writing contexts to investigate how novice Japanese as a foreign language (JFL) writers draw on knowledge across languages to construct L1 and L2 texts. We analyzed L1 English and L2 Japanese argumentation essays by the same JFL writers (N = 19) and L1…

  11. Enhancing Reuse of Data and Biological Material in Medical Research: From FAIR to FAIR-Health

    PubMed Central

    Holub, Petr; Kohlmayer, Florian; Prasser, Fabian; Mayrhofer, Michaela Th.; Schlünder, Irene; Martin, Gillian M.; Casati, Sara; Koumakis, Lefteris; Wutte, Andrea; Kozera, Łukasz; Strapagiel, Dominik; Anton, Gabriele; Zanetti, Gianluigi; Sezerman, Osman Ugur; Mendy, Maimuna; Valík, Dalibor; Lavitrano, Marialuisa; Dagher, Georges; Zatloukal, Kurt; van Ommen, GertJan B.; Litton, Jan-Eric

    2018-01-01

    The known challenge of underutilization of data and biological material from biorepositories as potential resources for medical research has been the focus of discussion for over a decade. Recently developed guidelines for improved data availability and reusability—entitled FAIR Principles (Findability, Accessibility, Interoperability, and Reusability)—are likely to address only parts of the problem. In this article, we argue that biological material and data should be viewed as a unified resource. This approach would facilitate access to complete provenance information, which is a prerequisite for reproducibility and meaningful integration of the data. A unified view also allows for optimization of long-term storage strategies, as demonstrated in the case of biobanks. We propose an extension of the FAIR Principles to include the following additional components: (1) quality aspects related to research reproducibility and meaningful reuse of the data, (2) incentives to stimulate effective enrichment of data sets and biological material collections and its reuse on all levels, and (3) privacy-respecting approaches for working with the human material and data. These FAIR-Health principles should then be applied to both the biological material and data. We also propose the development of common guidelines for cloud architectures, due to the unprecedented growth of volume and breadth of medical data generation, as well as the associated need to process the data efficiently. PMID:29359962

  12. Adaptation of Control Center Software to Commercial Real-Time Display Applications

    NASA Technical Reports Server (NTRS)

    Collier, Mark D.

    1994-01-01

    NASA-Marshall Space Flight Center (MSFC) is currently developing an enhanced Huntsville Operations Support Center (HOSC) system designed to support multiple spacecraft missions. The Enhanced HOSC is based upon a distributed computing architecture using graphic workstation hardware and industry-standard software including POSIX, X Windows, Motif, TCP/IP, and ANSI C. Southwest Research Institute (SwRI) is currently developing a prototype of the Display Services application for this system. Display Services provides the capability to generate and operate real-time data-driven graphic displays. This prototype is a highly functional application designed to allow system end users to easily generate complex data-driven displays. The prototype is easy to use, flexible, highly functional, and portable. Although this prototype is being developed for NASA-MSFC, the general-purpose real-time display capability can be reused in similar mission and process control environments, including any environment depending heavily upon real-time data acquisition and display. Reuse of the prototype will be a straightforward transition because the prototype is portable, is designed to add new display types easily, has a user interface which is separated from the application code, and is largely independent of the specifics of NASA-MSFC's system. Reuse of this prototype in other environments is an excellent alternative to the creation of a new custom application or, for environments with a large number of users, to purchasing a COTS package.

  13. Enhancing Reuse of Data and Biological Material in Medical Research: From FAIR to FAIR-Health.

    PubMed

    Holub, Petr; Kohlmayer, Florian; Prasser, Fabian; Mayrhofer, Michaela Th; Schlünder, Irene; Martin, Gillian M; Casati, Sara; Koumakis, Lefteris; Wutte, Andrea; Kozera, Łukasz; Strapagiel, Dominik; Anton, Gabriele; Zanetti, Gianluigi; Sezerman, Osman Ugur; Mendy, Maimuna; Valík, Dalibor; Lavitrano, Marialuisa; Dagher, Georges; Zatloukal, Kurt; van Ommen, GertJan B; Litton, Jan-Eric

    2018-04-01

    The known challenge of underutilization of data and biological material from biorepositories as potential resources for medical research has been the focus of discussion for over a decade. Recently developed guidelines for improved data availability and reusability-entitled FAIR Principles (Findability, Accessibility, Interoperability, and Reusability)-are likely to address only parts of the problem. In this article, we argue that biological material and data should be viewed as a unified resource. This approach would facilitate access to complete provenance information, which is a prerequisite for reproducibility and meaningful integration of the data. A unified view also allows for optimization of long-term storage strategies, as demonstrated in the case of biobanks. We propose an extension of the FAIR Principles to include the following additional components: (1) quality aspects related to research reproducibility and meaningful reuse of the data, (2) incentives to stimulate effective enrichment of data sets and biological material collections and its reuse on all levels, and (3) privacy-respecting approaches for working with the human material and data. These FAIR-Health principles should then be applied to both the biological material and data. We also propose the development of common guidelines for cloud architectures, due to the unprecedented growth of volume and breadth of medical data generation, as well as the associated need to process the data efficiently.

  14. Using Selection Pressure as an Asset to Develop Reusable, Adaptable Software Systems

    NASA Technical Reports Server (NTRS)

    Berrick, Stephen; Lynnes, Christopher

    2007-01-01

    The Goddard Earth Sciences Data and Information Services Center (GES DISC) at NASA has over the years developed and honed several reusable architectural components for supporting large-scale data centers with a large customer base. These include a processing system (S4PM) and an archive system (S4PA) based upon a workflow engine called the Simple Scalable Script-based Science Processor (S4P), and an online data visualization and analysis system (Giovanni). These subsystems are currently reused internally in a variety of combinations to implement customized data management on behalf of instrument science teams and other science investigators. Some of these subsystems (S4P and S4PM) have also been reused by other data centers for operational science processing. Our experience has been that the development and utilization of robust, interoperable, and reusable software systems can actually flourish in environments defined by heterogeneous commodity hardware systems, an emphasis on value-added customer service, and the continual goal of achieving higher cost efficiencies. The repeated internal reuse that is fostered by such an environment encourages, and even forces, changes to the software that make it more reusable and adaptable. Allowing and even encouraging such selective pressures on software development has been a key factor in the success of S4P and S4PM, which are now available to the open source community under the NASA Open Source Agreement.

  15. A knowledge creation info-structure to acquire and crystallize the tacit knowledge of health-care experts.

    PubMed

    Abidi, Syed Sibte Raza; Cheah, Yu-N; Curran, Janet

    2005-06-01

    Tacit knowledge of health-care experts is an important source of experiential know-how, yet due to various operational and technical reasons, such health-care knowledge is not entirely harnessed and put into professional practice. Emerging knowledge-management (KM) solutions suggest strategies to acquire the seemingly intractable and nonarticulated tacit knowledge of health-care experts. This paper presents a KM methodology, together with its computational implementation, to 1) acquire the tacit knowledge possessed by health-care experts; 2) represent the acquired tacit health-care knowledge in a computational formalism--i.e., clinical scenarios--that allows the reuse of stored knowledge to acquire tacit knowledge; and 3) crystallize the acquired tacit knowledge so that it is validated for health-care decision-support and medical education systems.
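
    The "clinical scenario" formalism could be represented, in spirit, as a structured record with a validation flag for the crystallization step. This is an illustrative structure of our own, not the authors' actual formalism:

```python
# A minimal "clinical scenario" record for captured tacit knowledge.
# Field names and the example content are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class ClinicalScenario:
    situation: str       # context in which the expert acted
    actions: List[str]   # what the expert did
    rationale: str       # the experiential know-how behind it
    validated: bool = False  # crystallization step: validated for reuse

scenario = ClinicalScenario(
    situation="pediatric patient, mild wheeze, O2 sat 94%",
    actions=["administer bronchodilator", "reassess in 20 min"],
    rationale="avoid escalation when saturation is borderline but stable",
)
scenario.validated = True  # approved for decision support / education reuse
print(scenario.validated)
```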

  16. Design, Analysis and User Acceptance of Architectural Design Education in Learning System Based on Knowledge Management Theory

    ERIC Educational Resources Information Center

    Wu, Yun-Wu; Lin, Yu-An; Wen, Ming-Hui; Perng, Yeng-Hong; Hsu, I-Ting

    2016-01-01

    The major purpose of this study is to develop an architectural design knowledge management learning system with corresponding learning activities to help the students have meaningful learning and improve their design capability in their learning process. Firstly, the system can help the students to obtain and share useful knowledge. Secondly,…

  17. Software development: A paradigm for the future

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.

    1989-01-01

    A new paradigm for software development that treats software development as an experimental activity is presented. It provides built-in mechanisms for learning how to develop software better and reusing previous experience in the forms of knowledge, processes, and products. It uses models and measures to aid in the tasks of characterization, evaluation and motivation. An organization scheme is proposed for separating the project-specific focus from the organization's learning and reuse focuses of software development. The implications of this approach for corporations, research and education are discussed and some research activities currently underway at the University of Maryland that support this approach are presented.

  18. Applying Knowledge Management to an Organization's Transformation

    NASA Technical Reports Server (NTRS)

    Potter, Shannon; Gill, Tracy; Fritsche, Ralph

    2008-01-01

    Although workers in the information age have more information at their fingertips than ever before, the ability to effectively capture and reuse actual knowledge remains a mounting challenge for many organizations. As high-tech organizations transform from providing complex products and services in an established domain to providing them in new domains, knowledge remains an increasingly valuable commodity. This paper explores the supply and demand elements of the "knowledge market" within the International Space Station and Spacecraft Processing Directorate (ISSSPD) of NASA's Kennedy Space Center (KSC). It examines how knowledge supply and knowledge demand determine the success of an organization's knowledge management (KM) activities, and how the elements of a KM infrastructure (tools, culture, and training) can be used to create and sustain knowledge supply and demand.

  19. Software-engineering challenges of building and deploying reusable problem solvers.

    PubMed

    O'Connor, Martin J; Nyulas, Csongor; Tu, Samson; Buckeridge, David L; Okhmatovskaia, Anna; Musen, Mark A

    2009-11-01

    Problem solving methods (PSMs) are software components that represent and encode reusable algorithms. They can be combined with representations of domain knowledge to produce intelligent application systems. A goal of research on PSMs is to provide principled methods and tools for composing and reusing algorithms in knowledge-based systems. The ultimate objective is to produce libraries of methods that can be easily adapted for use in these systems. Despite the intuitive appeal of PSMs as conceptual building blocks, in practice, these goals are largely unmet. There are no widely available tools for building applications using PSMs and no public libraries of PSMs available for reuse. This paper analyzes some of the reasons for the lack of widespread adoption of PSM techniques and illustrates our analysis by describing our experiences developing a complex, high-throughput software system based on PSM principles. We conclude that many fundamental principles in PSM research are useful for building knowledge-based systems. In particular, the task-method decomposition process, which provides a means for structuring knowledge-based tasks, is a powerful abstraction for building systems of analytic methods. However, despite the power of PSMs in the conceptual modeling of knowledge-based systems, software engineering challenges have been seriously underestimated. The complexity of integrating control knowledge modeled by developers using PSMs with the domain knowledge that they model using ontologies creates a barrier to widespread use of PSM-based systems. Nevertheless, the surge of recent interest in ontologies has led to the production of comprehensive domain ontologies and of robust ontology-authoring tools. These developments present new opportunities to leverage the PSM approach.
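
    The task-method decomposition the abstract highlights can be sketched as a recursive solver: a task is either answered directly by domain knowledge or decomposed by a method into subtasks. All names here are illustrative, not from the authors' system:

```python
# Sketch of PSM-style task-method decomposition: a Task is solved by a
# Method, which may decompose it into subtasks, each solved recursively.
from typing import Callable, Dict, List

class Task:
    def __init__(self, name: str):
        self.name = name

class Method:
    """A reusable problem-solving method for one task."""
    def __init__(self, solves: str, subtasks: List[Task],
                 combine: Callable[[List], object]):
        self.solves = solves
        self.subtasks = subtasks
        self.combine = combine

def solve(task: Task, library: Dict[str, Method], primitives: Dict[str, object]):
    if task.name in primitives:   # leaf task: domain knowledge answers directly
        return primitives[task.name]
    method = library[task.name]   # select the method that solves this task
    results = [solve(t, library, primitives) for t in method.subtasks]
    return method.combine(results)

# "diagnose" decomposes into "collect-findings" then "match-disorder"
library = {"diagnose": Method("diagnose",
                              [Task("collect-findings"), Task("match-disorder")],
                              lambda rs: rs[-1])}
primitives = {"collect-findings": ["fever", "cough"],
              "match-disorder": "influenza"}
print(solve(Task("diagnose"), library, primitives))  # influenza
```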

  20. Software-engineering challenges of building and deploying reusable problem solvers

    PubMed Central

    O’CONNOR, MARTIN J.; NYULAS, CSONGOR; TU, SAMSON; BUCKERIDGE, DAVID L.; OKHMATOVSKAIA, ANNA; MUSEN, MARK A.

    2012-01-01

    Problem solving methods (PSMs) are software components that represent and encode reusable algorithms. They can be combined with representations of domain knowledge to produce intelligent application systems. A goal of research on PSMs is to provide principled methods and tools for composing and reusing algorithms in knowledge-based systems. The ultimate objective is to produce libraries of methods that can be easily adapted for use in these systems. Despite the intuitive appeal of PSMs as conceptual building blocks, in practice, these goals are largely unmet. There are no widely available tools for building applications using PSMs and no public libraries of PSMs available for reuse. This paper analyzes some of the reasons for the lack of widespread adoption of PSM techniques and illustrates our analysis by describing our experiences developing a complex, high-throughput software system based on PSM principles. We conclude that many fundamental principles in PSM research are useful for building knowledge-based systems. In particular, the task–method decomposition process, which provides a means for structuring knowledge-based tasks, is a powerful abstraction for building systems of analytic methods. However, despite the power of PSMs in the conceptual modeling of knowledge-based systems, software engineering challenges have been seriously underestimated. The complexity of integrating control knowledge modeled by developers using PSMs with the domain knowledge that they model using ontologies creates a barrier to widespread use of PSM-based systems. Nevertheless, the surge of recent interest in ontologies has led to the production of comprehensive domain ontologies and of robust ontology-authoring tools. These developments present new opportunities to leverage the PSM approach. PMID:23565031

  1. A review on alum sludge reuse with special reference to agricultural applications and future challenges.

    PubMed

    Dassanayake, K B; Jayasinghe, G Y; Surapaneni, A; Hetherington, C

    2015-04-01

    Alum salts are commonly used in the water industry to promote coagulation in the production of clean drinking water, which results in the generation and accumulation of the 'waste' by-product 'alum sludge' in large volumes. Effective and efficient management of alum sludge in an economically and environmentally sustainable manner remains a significant social and environmental concern, with ever-increasing demand for potable water as a result of a rapidly escalating world population and urban expansion. Over the years, various intensive practices have been employed to reuse alum sludge in an attempt to bridge the gap between a successful drinking water treatment process and environmentally friendly alum sludge management. This paper is primarily aimed at a comprehensive review of the existing literature on alum sludge characteristics, its environmental concerns and its potential utilization, especially in the agricultural and horticultural sectors, updating our current state of knowledge and formulating a compendium of present and past developments. Different types of alum sludge utilization in various fields were recognized and examined. The strengths, weaknesses, opportunities and potential risks of alum sludge reuse options, with particular reference to agriculture, were highlighted and knowledge gaps were identified. Research priorities and future challenges that will support the development of effective alum sludge management practices in agriculture with multi-prong strategies were discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Expediting analog design retargeting by design knowledge re-use and circuit synthesis: a practical example on a Delta-Sigma modulator

    NASA Astrophysics Data System (ADS)

    Webb, Matthew; Tang, Hua

    2016-08-01

    In the past decade or two, due to constant and rapid technology changes, analog design re-use, or design retargeting to newer technologies, has been brought to the table in order to expedite the design process and improve time-to-market. If properly conducted, analog design retargeting can significantly cut down the design cycle compared to designs started from scratch. In this article, we present an empirical and general method for efficient analog design retargeting by design knowledge re-use and circuit synthesis (CS). The method first identifies the circuit blocks that compose the source system and extracts the performance parameter specifications of each circuit block. Then, for each circuit block, it scales the values of the design variables (DVs) from the source design to derive an initial design in the target technology. Depending on the performance of this initial target design, a design space is defined for synthesis. Subsequently, each circuit block is automatically synthesised using state-of-the-art analog synthesis tools, based on a combination of global and local optimisation techniques, to achieve performance specifications comparable to those extracted from the source system. Finally, the overall system is composed of the synthesised circuit blocks in the target technology. We illustrate the method using a practical example of a complex Delta-Sigma modulator (DSM) circuit.
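
    The initial-design step, scaling design variables from the source to the target technology, can be sketched as below. The simple proportional scaling rule is an assumption for illustration; the article derives its initial design empirically before synthesis refines it:

```python
# Scale transistor geometries from the source technology to the target one
# by the feature-size ratio; other variables are kept as starting points.
# The variable names and the scaling rule are illustrative assumptions.
def scale_design(dvs: dict, l_source: float, l_target: float) -> dict:
    k = l_target / l_source  # e.g. 180 nm -> 90 nm gives k = 0.5
    scaled = {}
    for name, value in dvs.items():
        if name.startswith(("W", "L")):  # transistor widths/lengths scale with k
            scaled[name] = value * k
        else:                            # biases etc. carried over unchanged
            scaled[name] = value
    return scaled

source_dvs = {"W1": 10e-6, "L1": 0.18e-6, "Ibias": 50e-6}
print(scale_design(source_dvs, l_source=0.18e-6, l_target=0.09e-6))
```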

  3. Using SOA Patterns to promote understanding across disciplines

    NASA Astrophysics Data System (ADS)

    Patterson, A.

    2012-04-01

    The NETMAR consortium is building an open service network for marine environmental data by combining expertise from Ireland, France, the UK and Norway in disciplines such as semantics, software engineering, UI programming and service orchestration. Through the International Coastal Atlas Network, it engages user groups from Europe, Africa, Asia and the Americas. In doing so, it faces challenges in bringing these disciplines and groups together in a way that makes them greater than the sum of their parts. Service Oriented Architecture (SOA) has been successfully applied in many cases to help build useful systems across organisational and geographic boundaries in order to expose diverse capabilities which can function together through a mutual exchange of value. This should make it ideally suited to a distributed decision-making environment without centralised command and control. In theory, SOA should facilitate the building of global and complex infrastructures and the integration of information systems characterized by diverse protocols and interfaces, and with different data policies and security levels. The presentation will discuss a number of approaches used by NETMAR to bring the theory of SOA to bear in a useful way while keeping multi-disciplinary domain expertise as the primary driver of the project. It will discuss three approaches: populating one or more standard reference models; trade-off analysis based on business drivers and quality attributes; and documenting design reuse in the form of patterns. The three approaches will be compared in terms of how they succeed in bringing 'just enough' service architecture knowledge into the project. We discuss how the approaches can interact and complement each other. Finally, we present a number of SOA patterns identified as being relevant to NETMAR and explain why they are felt to be particularly effective in gaining consensus on how to build the NETMAR system of systems.

  4. An architecture for automated fault diagnosis. [Space Station Module/Power Management And Distribution

    NASA Technical Reports Server (NTRS)

    Ashworth, Barry R.

    1989-01-01

    A description is given of the SSM/PMAD power system automation testbed, which was developed using a systems engineering approach. The architecture includes a knowledge-based system and has been successfully used in power system management and fault diagnosis. Architectural issues which affect overall system activities and performance are examined. The knowledge-based system is discussed along with its associated automation implications, and interfaces throughout the system are presented.

  5. A Multi-Agent System Architecture for Sensor Networks

    PubMed Central

    Fuentes-Fernández, Rubén; Guijarro, María; Pajares, Gonzalo

    2009-01-01

    The design of the control systems for sensor networks presents important challenges. Besides the traditional problems about how to process the sensor data to obtain the target information, engineers need to consider additional aspects such as the heterogeneity and high number of sensors, and the flexibility of these networks regarding topologies and the sensors in them. Although there are partial approaches for resolving these issues, their integration relies on ad hoc solutions requiring important development efforts. In order to provide an effective approach for this integration, this paper proposes an architecture based on the multi-agent system paradigm with a clear separation of concerns. The architecture considers sensors as devices used by an upper layer of manager agents. These agents are able to communicate and negotiate services to achieve the required functionality. Activities are organized according to roles related with the different aspects to integrate, mainly sensor management, data processing, communication and adaptation to changes in the available devices and their capabilities. This organization largely isolates and decouples the data management from the changing network, while encouraging reuse of solutions. The use of the architecture is facilitated by a specific modelling language developed through metamodelling. A case study concerning a generic distributed system for fire fighting illustrates the approach and the comparison with related work. PMID:22303172
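
    The manager-agent layer described above can be sketched as agents that fuse readings from heterogeneous devices and adapt when a device drops out. The names and the simple averaging policy are illustrative assumptions, not the paper's design:

```python
# Sketch of the architecture's separation of concerns: sensors are devices
# used by an upper layer of manager agents, which keeps data processing
# decoupled from changes in the available devices.
from typing import Dict, Optional

class SensorDevice:
    def __init__(self, name: str):
        self.name = name
        self.online = True
        self._value: Optional[float] = None

    def push(self, value: float):
        self._value = value

    def read(self) -> Optional[float]:
        return self._value if self.online else None

class ManagerAgent:
    """Fuses device readings; tolerates devices leaving the network."""
    def __init__(self, devices: Dict[str, SensorDevice]):
        self.devices = devices

    def fused_reading(self) -> float:
        values = [d.read() for d in self.devices.values()]
        values = [v for v in values if v is not None]  # adapt to missing devices
        return sum(values) / len(values)

t1, t2 = SensorDevice("temp-1"), SensorDevice("temp-2")
t1.push(20.0); t2.push(22.0)
agent = ManagerAgent({"temp-1": t1, "temp-2": t2})
print(agent.fused_reading())  # 21.0
t2.online = False             # topology change: one sensor fails
print(agent.fused_reading())  # 20.0
```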

  7. The Architecture and Application of RAMSES, a CCSDS and ECSS PUS Compliant Test and Control System

    NASA Astrophysics Data System (ADS)

    Battelino, Milan; Svard, Christian; Carlsson, Anna; Carlstedt-Duke, Theresa; Tornqvist, Marcus

    2010-08-01

    SSC, the Swedish Space Corporation, has more than 30 years of experience in developing test and control systems for sounding rockets, experimental test modules and satellites. The increasing number of ongoing projects led SSC to consider developing a test and control system conformant to CCSDS (Consultative Committee for Space Data Systems) and ECSS (European Cooperation for Space Standardization) standards that, with little effort and cost, could be reused across separate projects and products. The foreseen reduction in cost and development time for future space-related projects made such a reusable control system desirable. This paper describes the ideas behind the RAMSES (Rocket and Multi-Satellite EMCS Software) system, its architecture, and how it has been and is being used in a variety of applications at SSC, such as the multi-satellite mission PRISMA and the sounding rocket project MAXUS-8.

  8. Implementation of a Space Communications Cognitive Engine

    NASA Technical Reports Server (NTRS)

    Hackett, Timothy M.; Bilen, Sven G.; Ferreira, Paulo Victor R.; Wyglinski, Alexander M.; Reinhart, Richard C.

    2017-01-01

    Although communications-based cognitive engines have been proposed, very few have been implemented in a full system, especially in a space communications system. In this paper, we detail the implementation of a multi-objective reinforcement-learning algorithm and deep artificial neural networks for use as a radio-resource-allocation controller. The modular software architecture presented encourages reuse and easy modification for trying different algorithms. Various trade studies involved in the system implementation and integration are discussed. These include the choice of software libraries that provide platform flexibility and promote reusability, choices regarding the deployment of this cognitive engine within a system architecture using the DVB-S2 standard and commercial hardware, and constraints placed on the cognitive engine by real-world radio limitations. The implemented radio-resource-allocation controller was then integrated with the larger space-ground system developed by NASA Glenn Research Center (GRC).
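
    The controller described above combines learning with multi-objective trade-offs. As a loose illustration only, not NASA GRC's engine, an epsilon-greedy bandit over invented DVB-S2-like operating modes might look like this; the mode table, weights and reward model are all made up:

```python
# Toy multi-objective learner: pick an operating mode, learn a reward
# that trades throughput against power. Values are invented.
import random

random.seed(0)
MODES = {                  # hypothetical (throughput, power) per mode
    "qpsk_1/2":   (1.0, 0.4),
    "8psk_3/4":   (2.2, 0.7),
    "16apsk_5/6": (3.0, 1.0),
}
W_THR, W_PWR = 1.0, 0.8    # multi-objective weights

def reward(mode):
    thr, pwr = MODES[mode]
    return W_THR * thr - W_PWR * pwr + random.gauss(0, 0.05)

q, n = {}, {}
for mode in MODES:         # sample every mode once to initialize estimates
    q[mode] = reward(mode)
    n[mode] = 1

for _ in range(300):       # epsilon-greedy resource allocation
    if random.random() < 0.1:
        mode = random.choice(list(MODES))   # explore
    else:
        mode = max(q, key=q.get)            # exploit best estimate
    n[mode] += 1
    q[mode] += (reward(mode) - q[mode]) / n[mode]  # incremental mean

print(max(q, key=q.get))   # mode with the best learned trade-off
```

A real engine would react to channel conditions rather than a static reward table; this sketch only shows the estimate-and-exploit loop shape.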

  9. Achieving Supportability on Exploration Missions with In-Space Servicing

    NASA Technical Reports Server (NTRS)

    Bacon, Charles; Pellegrino, Joseph F.; McGuire, Jill; Henry, Ross; DeWeese, Keith; Reed, Benjamin; Aranyos, Thomas

    2015-01-01

    One of NASA's long-term goals is manned exploration missions to Mars along with deep space robotic exploration. These missions would include sending astronauts and scientific equipment to the surface of Mars for an extended stay and returning the crew, science data and surface samples to Earth. In order to achieve this goal, multiple precursor missions are required that would launch the crew, crew habitats, return vehicles and destination systems into space. Some of these payloads would then rendezvous in space for the trip to Mars, while others would be sent directly to the Martian surface. To support such an ambitious mission architecture, NASA must reduce cost, simplify logistics, reuse and/or repurpose flight hardware, and minimize the resources needed for refurbishment. In-space servicing is a means to achieving these goals. By designing a mission architecture that utilizes the concept of in-space servicing (robotic and manned), maximum supportability can be achieved.

  10. Delivering a lifelong integrated electronic health record based on a service oriented architecture.

    PubMed

    Katehakis, Dimitrios G; Sfakianakis, Stelios G; Kavlentakis, Georgios; Anthoulakis, Dimitrios N; Tsiknakis, Manolis

    2007-11-01

    Efficient access to a citizen's Integrated Electronic Health Record (I-EHR) is considered to be the cornerstone for the support of continuity of care, the reduction of avoidable mistakes, and the provision of tools and methods to support evidence-based medicine. For the past several years, a number of applications and services (including a lifelong I-EHR) have been installed, and enterprise and regional infrastructure has been developed, in HYGEIAnet, the Regional Health Information Network (RHIN) of the island of Crete, Greece. This paper presents the technological effort toward the delivery of a lifelong I-EHR by means of World Wide Web Consortium (W3C) technologies, on top of a service-oriented architecture that reuses already existing middleware components, and discusses critical issues. Certain design and development decisions are exposed and explained, thereby laying the ground for coordinated, dynamic navigation toward personalized healthcare delivery.

  11. A Design for Composing and Extending Vehicle Models

    NASA Technical Reports Server (NTRS)

    Madden, Michael M.; Neuhaus, Jason R.

    2003-01-01

    The Systems Development Branch (SDB) at NASA Langley Research Center (LaRC) creates simulation software products for research. Each product consists of an aircraft model with experiment extensions. SDB treats its aircraft models as reusable components, upon which experiments can be built. SDB has evolved aircraft model design with the following goals: 1. Avoid polluting the aircraft model with experiment code. 2. Discourage the copy and tailor method of reuse. The current evolution of that architecture accomplishes these goals by reducing experiment creation to extend and compose. The architecture mechanizes the operational concerns of the model's subsystems and encapsulates them in an interface inherited by all subsystems. Generic operational code exercises the subsystems through the shared interface. An experiment is thus defined by the collection of subsystems that it creates ("compose"). Teams can modify the aircraft subsystems for the experiment using inheritance and polymorphism to create variants ("extend").
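
    The "extend and compose" pattern described above can be sketched in a few lines. The classes below are hypothetical stand-ins, not SDB's code: subsystems inherit a shared operational interface, generic code exercises them, an experiment composes subsystems, and a variant extends one without touching the baseline model.

```python
# Hypothetical sketch of "extend and compose" (illustrative names only).

class Subsystem:
    """Operational interface shared by all aircraft subsystems."""
    def update(self, dt):
        raise NotImplementedError

class Engine(Subsystem):
    """Baseline aircraft-model subsystem."""
    def __init__(self):
        self.thrust = 0.0

    def update(self, dt):
        self.thrust += 10.0 * dt   # toy thrust build-up

class ResearchEngine(Engine):
    """'Extend': an experiment-specific variant; Engine stays unpolluted."""
    def update(self, dt):
        super().update(dt)
        self.thrust *= 1.1         # invented experiment effect

def run(subsystems, dt, steps):
    """Generic operational code: drives any composition via the interface."""
    for _ in range(steps):
        for s in subsystems:
            s.update(dt)

# 'Compose': the experiment is defined by the subsystems it creates.
experiment = [ResearchEngine()]
run(experiment, dt=0.1, steps=10)
print(round(experiment[0].thrust, 2))
```

The key design point mirrored here is that experiment code never edits the baseline subsystem; it only subclasses and composes, which avoids the copy-and-tailor reuse the branch set out to discourage.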

  12. A Fly-Inspired Mushroom Bodies Model for Sensory-Motor Control Through Sequence and Subsequence Learning.

    PubMed

    Arena, Paolo; Calí, Marco; Patané, Luca; Portera, Agnese; Strauss, Roland

    2016-09-01

    Classification and sequence learning are relevant capabilities used by living beings to extract complex information from the environment for behavioral control. The insect world is full of examples where the presentation time of specific stimuli shapes the behavioral response. On the basis of previously developed neural models, inspired by Drosophila melanogaster, a new architecture for classification and sequence learning is here presented under the perspective of the Neural Reuse theory. Classification of relevant input stimuli is performed through resonant neurons, activated by the complex dynamics generated in a lattice of recurrent spiking neurons modeling the insect Mushroom Bodies neuropile. The network devoted to context formation is able to reconstruct the learned sequence and also to trace the subsequences present in the provided input. A sensitivity analysis to parameter variation and noise is reported. Experiments on a roving robot are reported to show the capabilities of the architecture used as a neural controller.

  13. Gregarious Data Re-structuring in a Many Core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, Sunil; Manzano Franco, Joseph B.; Marquez, Andres

    In this paper, we have developed a new methodology that takes into consideration the access patterns of a single parallel actor (e.g. a thread), as well as the access patterns of “grouped” parallel actors that share a resource (e.g. a distributed Level 3 cache). We start with a hierarchically tiled code for our target machine and apply a series of transformations at the tile level to improve data residence in a given memory hierarchy level. The contributions of this paper include (a) collaborative data restructuring for group reuse and (b) a low-overhead transformation technique to improve access patterns and bring closely connected data elements together. Preliminary results on a many-core architecture, the Tilera TileGX, show promising improvements over optimized OpenMP code (up to a 31% increase in GFLOPS) and over our own previous work on fine-grained runtimes (up to 16%) for selected kernels.
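
    The tile-level restructuring described above rests on the classic idea of blocking loops so that data reused together is touched close together in time. A minimal, language-neutral illustration (in Python, far removed from the paper's TileGX setting) of visiting a matrix tile by tile:

```python
# Loop tiling in miniature: visit a matrix in B x B blocks so that
# elements reused together stay close together in the access stream.

def tiled_transpose(a, n, B):
    out = [[0] * n for _ in range(n)]
    for ii in range(0, n, B):                      # tile row
        for jj in range(0, n, B):                  # tile column
            for i in range(ii, min(ii + B, n)):    # inside the tile
                for j in range(jj, min(jj + B, n)):
                    out[j][i] = a[i][j]
    return out

n = 4
a = [[i * n + j for j in range(n)] for i in range(n)]
t = tiled_transpose(a, n, B=2)
print(t[0])  # → [0, 4, 8, 12]
```

In Python the blocking buys nothing, but in a compiled kernel the same reordering is what keeps a tile resident in a given level of the memory hierarchy, which is the effect the paper's transformations target.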

  14. A single-stage optical load-balanced switch for data centers.

    PubMed

    Huang, Qirui; Yeo, Yong-Kee; Zhou, Luying

    2012-10-22

    Load balancing is an attractive technique to achieve maximum throughput and optimal resource utilization in large-scale switching systems. However, current electronic load-balanced switches suffer from severe problems in implementation cost, power consumption and scaling. To overcome these problems, in this paper we propose a single-stage optical load-balanced switch architecture based on an arrayed waveguide grating router (AWGR) in conjunction with fast tunable lasers. By reusing the fast tunable lasers, the switch achieves both load balancing and switching through the AWGR. With this architecture, proof-of-concept experiments have been conducted to investigate the feasibility of the optical load-balanced switch and to examine its physical performance. Compared to three-stage load-balanced switches, the reported switch needs only half the optical devices, such as tunable lasers and AWGRs, and can provide a cost-effective solution for future data centers.

  15. A Core Knowledge Architecture of Visual Working Memory

    ERIC Educational Resources Information Center

    Wood, Justin N.

    2011-01-01

    Visual working memory (VWM) is widely thought to contain specialized buffers for retaining spatial and object information: a "spatial-object architecture." However, studies of adults, infants, and nonhuman animals show that visual cognition builds on core knowledge systems that retain more specialized representations: (1) spatiotemporal…

  16. Development of an Integrated Wastewater Treatment System/water reuse/agriculture model

    NASA Astrophysics Data System (ADS)

    Fox, C. H.; Schuler, A.

    2017-12-01

    Factors like increasing population, urbanization, and climate change have made the management of water resources a challenge for municipalities. By understanding wastewater recycling for agriculture in arid regions, we can expand the supply of water to agriculture and reduce energy use at wastewater treatment plants (WWTPs). This can improve management decisions between WWTPs and water managers. The objective of this research is to develop a prototype integrated model of the wastewater treatment system and nearby agricultural areas linked by water and nutrients, using the Albuquerque Southside Water Reclamation Facility (SWRF) and downstream agricultural system as a case study. Little work has been done to understand how such treatment technology decisions affect the potential for water reuse, nutrient recovery in agriculture, overall energy consumption, agricultural production, and water quality. A holistic approach to understanding synergies and tradeoffs between treatment, reuse, and agriculture is needed. For example, critical wastewater treatment process decisions include whether to nitrify (oxidize ammonia), which requires large amounts of energy; whether to operate at low dissolved oxygen concentrations, which requires much less energy; whether to recover nitrogen and phosphorus chemically, in biosolids, or in reuse water for agriculture; whether to generate energy from anaerobic digestion; and whether to develop infrastructure for agricultural reuse. The research first includes quantifying existing and feasible agricultural sites suitable for irrigation with reuse wastewater, as well as existing infrastructure such as irrigation canals and piping, by using GIS databases. Second, nutrient and water requirements for common New Mexico crops are being determined. Third, a wastewater treatment model will be utilized to quantify energy usage and nutrient removal under various scenarios. Different agricultural reuse scenarios and treatment technologies will be explored. The research will provide scientific knowledge to support the transformation of traditionally `linear' into `recycling' societies capable of making productive gains in water use and reuse while minimizing environmental pollution.

  17. Architecture and Initial Development of a Digital Library Platform for Computable Knowledge Objects for Health.

    PubMed

    Flynn, Allen J; Bahulekar, Namita; Boisvert, Peter; Lagoze, Carl; Meng, George; Rampton, James; Friedman, Charles P

    2017-01-01

    Throughout the world, biomedical knowledge is routinely generated and shared through primary and secondary scientific publications. However, there is too much latency between publication of knowledge and its routine use in practice. To address this latency, what is actionable in scientific publications can be encoded to make it computable. We have created a purpose-built digital library platform to hold, manage, and share actionable, computable knowledge for health called the Knowledge Grid Library. Here we present it with its system architecture.

  18. The SeaView EarthCube project: Lessons Learned from Integrating Across Repositories

    NASA Astrophysics Data System (ADS)

    Diggs, S. C.; Stocks, K. I.; Arko, R. A.; Kinkade, D.; Shepherd, A.; Olson, C. J.; Pham, A.

    2017-12-01

    SeaView is an NSF-funded EarthCube Integrative Activity Project working with 5 existing data repositories* to provide oceanographers with highly integrated thematic data collections in user-requested formats. The project has three complementary goals: Supporting Scientists: SeaView targets scientists' need for easy access to data of interest that are ready to import into their preferred tool. Strengthening Repositories: By integrating data from multiple repositories for science use, SeaView is helping the ocean data repositories align their data and processes and make ocean data more accessible and easily integrated. Informing EarthCube (earthcube.org): SeaView's experience as an integration demonstration can inform the larger NSF EarthCube architecture and design effort. The challenges faced in this small-scale effort are informative to geosciences cyberinfrastructure more generally. Here we focus on the lessons learned that may inform other data facilities and integrative architecture projects. (The SeaView data collections will be presented at the Ocean Sciences 2018 meeting.) One example is the importance of shared semantics, with persistent identifiers, for key integration elements across the data sets (e.g., cruise, parameter, and project/program). These must allow for revision through time and should have an agreed authority or process for resolving conflicts: aligning identifiers and correcting errors were time-consuming and often required both deep domain knowledge and "back end" knowledge of the data facilities. Another example is the need for robust provenance, and for tools that support automated or semi-automated data transform pipelines that capture provenance. Multiple copies and versions of data are now flowing into repositories, and onward to long-term archives such as NOAA NCEI and umbrella portals such as DataONE. Exact copies can be identified with hashes (for those who have the skills), but it can be painfully difficult to understand the processing or format changes that differentiate versions. As more sensors are deployed and data reuse increases, this will only become more challenging. We will discuss these and additional lessons learned, as well as invite discussion and solutions from others doing similar work. * BCO-DMO, CCHDO, OBIS, OOI, R2R
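
    The "exact copies" check mentioned above is straightforward with cryptographic hashes; the harder provenance problem begins exactly where the digests diverge. A minimal sketch (file contents invented):

```python
# Duplicate detection by digest: identical bytes hash identically; any
# processing or format change yields a different digest, which flags,
# but does not explain, a new version.
import hashlib

def file_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original    = b"CTD cast 42, station A: 12.1, 12.3, 12.2\n"
mirror      = b"CTD cast 42, station A: 12.1, 12.3, 12.2\n"
reformatted = b"CTD cast 42, station A:\n12.1\n12.3\n12.2\n"

print(file_digest(original) == file_digest(mirror))       # exact copy
print(file_digest(original) == file_digest(reformatted))  # changed bytes
```

This is why the authors call for captured provenance: the second comparison only says the files differ, not that the difference is a harmless reformatting.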

  19. Generic domain models in software engineering

    NASA Technical Reports Server (NTRS)

    Maiden, Neil

    1992-01-01

    This paper outlines three research directions related to domain-specific software development: (1) reuse of generic models for domain-specific software development; (2) empirical evidence to determine these generic models, namely elicitation of mental knowledge schema possessed by expert software developers; and (3) exploitation of generic domain models to assist modelling of specific applications. It focuses on knowledge acquisition for domain-specific software development, with emphasis on tool support for the most important phases of software development.

  20. WASTE-TO-RESOURCE: NOVEL MEMBRANE SYSTEMS FOR SAFE AND SUSTAINABLE BRINE MANAGEMENT

    EPA Science Inventory

    Decentralized waste-to-reuse systems will be optimized to maximize resource and energy recovery and minimize chemicals and energy use. This research will enhance fundamental knowledge on simultaneous heat and mass transport through membranes, lower process costs, and furthe...

  1. Reprint of "Abstraction for data integration: Fusing mammalian molecular, cellular and phenotype big datasets for better knowledge extraction".

    PubMed

    Rouillard, Andrew D; Wang, Zichen; Ma'ayan, Avi

    2015-12-01

    With advances in genomics, transcriptomics, metabolomics and proteomics, and more expansive electronic clinical record monitoring, as well as advances in computation, we have entered the Big Data era in biomedical research. Data gathering is growing rapidly while only a small fraction of this data is converted to useful knowledge or reused in future studies. To improve this, an important concept that is often overlooked is data abstraction. To fuse and reuse biomedical datasets from diverse resources, data abstraction is frequently required. Here we summarize some of the major Big Data biomedical research resources for genomics, proteomics and phenotype data, collected from mammalian cells, tissues and organisms. We then suggest simple data abstraction methods for fusing this diverse but related data. Finally, we demonstrate examples of the potential utility of such data integration efforts, while warning about the inherent biases that exist within such data. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Towards a Ubiquitous User Model for Profile Sharing and Reuse

    PubMed Central

    de Lourdes Martinez-Villaseñor, Maria; Gonzalez-Mendoza, Miguel; Hernandez-Gress, Neil

    2012-01-01

    People interact with systems and applications through several devices and are willing to share information about preferences, interests and characteristics. Social networking profiles, data from advanced sensors attached to personal gadgets, and semantic web technologies such as FOAF and microformats are valuable sources of personal information that could provide a fair understanding of the user, but profile information is scattered over different user models. Some researchers in the ubiquitous user modeling community envision the need to share user-model information from heterogeneous sources. In this paper, we address the syntactic and semantic heterogeneity of user models in order to enable user modeling interoperability. We present a dynamic user profile structure based on the Simple Knowledge Organization System (SKOS) to provide knowledge representation for a ubiquitous user model. We propose a two-tier matching strategy for concept schema alignment to enable user modeling interoperability. Our proposal is validated in the application scenario of sharing and reusing data in order to deal with overweight and obesity. PMID:23201995

  3. AKM in Open Source Communities

    NASA Astrophysics Data System (ADS)

    Stamelos, Ioannis; Kakarontzas, George

    Previous chapters in this book have dealt with Architecture Knowledge Management in traditional Closed Source Software (CSS) projects. This chapter will attempt to examine the ways that knowledge is shared among participants in Free Libre Open Source Software (FLOSS) projects and how architectural knowledge is managed in comparison with CSS. FLOSS projects are organized and developed in a fundamentally different way than CSS projects; they simply do not develop code as CSS projects do. As a consequence, their knowledge management mechanisms are also based on different concepts and tools.

  4. A Case Study on Sustainable Reuse of Abandoned Infrastructure at Seoul Station Overpass as Urban Park for the Design Strategies in Korea

    NASA Astrophysics Data System (ADS)

    Boo, Yeeun; Kwon, Young-Sang

    2018-04-01

    In the 21st century, known as the knowledge-information era, many industrial infrastructures built as part of 20th-century urban development have become functionally obsolete, and new alternatives for them are now in demand. This study discusses the strategies used in the design proposals of the International Competition for the ‘Seoullo 7017 Project’, which was completed in May 2017, based on the sustainability of deteriorated infrastructure as urban park. Each proposal is analysed against the competition brief, and more generic approaches to the adaptive reuse of infrastructure are proposed. By examining this Korean case, the study explores the possibilities for sustaining abandoned infrastructure through adaptive reuse as urban parks in Korea, proposes design strategies that can be applied to the future adaptive reuse of deteriorated infrastructure in Korea, and provides a broader academic base for related works.

  5. A Priori Knowledge and Heuristic Reasoning in Architectural Design.

    ERIC Educational Resources Information Center

    Rowe, Peter G.

    1982-01-01

    It is proposed that the various classes of a priori knowledge incorporated in heuristic reasoning processes exert a strong influence over architectural design activity. Some design problems require exercise of some provisional set of rules, inference, or plausible strategy which requires heuristic reasoning. A case study illustrates this concept.…

  6. Information Architecture and the Comic Arts: Knowledge Structure and Access

    ERIC Educational Resources Information Center

    Farmer, Lesley S. J.

    2015-01-01

    This article explains information architecture, focusing on comic arts' features for representing and structuring knowledge. Then it details information design theory and information behaviors relative to this format, also noting visual literacy. Next, applications of comic arts in education are listed. With this background, several research…

  7. The Relationship between User Expertise and Structural Ontology Characteristics

    ERIC Educational Resources Information Center

    Waldstein, Ilya Michael

    2014-01-01

    Ontologies are commonly used to support application tasks such as natural language processing, knowledge management, learning, browsing, and search. Literature recommends considering specific context during ontology design, and highlights that a different context is responsible for problems in ontology reuse. However, there is still no clear…

  8. Tellurium notebooks-An environment for reproducible dynamical modeling in systems biology.

    PubMed

    Medley, J Kyle; Choi, Kiri; König, Matthias; Smith, Lucian; Gu, Stanley; Hellerstein, Joseph; Sealfon, Stuart C; Sauro, Herbert M

    2018-06-01

    The considerable difficulty encountered in reproducing the results of published dynamical models limits validation, exploration and reuse of this increasingly large biomedical research resource. To address this problem, we have developed Tellurium Notebook, a software system for model authoring, simulation, and teaching that facilitates building reproducible dynamical models and reusing models by 1) providing a notebook environment which allows models, Python code, and narrative to be intermixed, 2) supporting the COMBINE archive format during model development for capturing model information in an exchangeable format and 3) enabling users to easily simulate and edit public COMBINE-compliant models from public repositories to facilitate studying model dynamics, variants and test cases. Tellurium Notebook, a Python-based Jupyter-like environment, is designed to seamlessly inter-operate with these community standards by automating conversion between COMBINE standards formulations and corresponding in-line, human-readable representations. Thus, Tellurium brings to systems biology the strategy used by other literate notebook systems such as Mathematica. These capabilities allow users to edit every aspect of the standards-compliant models and simulations, run the simulations in-line, and re-export to standard formats. We provide several use cases illustrating the advantages of our approach and how it allows development and reuse of models without requiring technical knowledge of standards. Adoption of Tellurium should accelerate model development, reproducibility and reuse.

  9. IMPACT: a generic tool for modelling and simulating public health policy.

    PubMed

    Ainsworth, J D; Carruthers, E; Couch, P; Green, N; O'Flaherty, M; Sperrin, M; Williams, R; Asghar, Z; Capewell, S; Buchan, I E

    2011-01-01

    Populations are under-served by local health policies and management of resources. This partly reflects a lack of realistically complex models to enable appraisal of a wide range of potential options. Rising computing power coupled with advances in machine learning and healthcare information now enables such models to be constructed and executed. However, such models are not generally accessible to public health practitioners who often lack the requisite technical knowledge or skills. To design and develop a system for creating, executing and analysing the results of simulated public health and healthcare policy interventions, in ways that are accessible and usable by modellers and policy-makers. The system requirements were captured and analysed in parallel with the statistical method development for the simulation engine. From the resulting software requirement specification the system architecture was designed, implemented and tested. A model for Coronary Heart Disease (CHD) was created and validated against empirical data. The system was successfully used to create and validate the CHD model. The initial validation results show concordance between the simulation results and the empirical data. We have demonstrated the ability to connect health policy-modellers and policy-makers in a unified system, thereby making population health models easier to share, maintain, reuse and deploy.

  10. Improving Reliability of Spectrum Analysis for Software Quality Requirements Using TCM

    NASA Astrophysics Data System (ADS)

    Kaiya, Haruhiko; Tanigawa, Masaaki; Suzuki, Shunichi; Sato, Tomonori; Osada, Akira; Kaijiri, Kenji

    Quality requirements are scattered over a requirements specification; thus, it is hard to measure and trace such quality requirements to validate the specification against stakeholders' needs. We proposed a technique called “spectrum analysis for quality requirements” which enabled analysts to sort a requirements specification to measure and track quality requirements in the specification. In the same way as a spectrum in optics, a quality spectrum of a specification shows a quantitative feature of the specification with respect to quality. Therefore, we can compare a specification of a system to another one with respect to quality. As a result, we can validate such a specification because we can check whether the specification has common quality features and identify its specific features against specifications of existing similar systems. However, our first spectrum analysis for quality requirements required a lot of effort and knowledge of a problem domain, and it was hard to reuse such knowledge to reduce the effort. We thus introduce domain knowledge called a term-characteristic map (TCM) to reuse the knowledge for our quality spectrum analysis. Through several experiments, we evaluate our spectrum analysis, and the main findings are as follows. First, we confirmed that specifications of similar systems have similar quality spectra. Second, results of spectrum analysis using TCM are objective, i.e., different analysts can generate almost the same spectra when they analyze the same specification.
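
    The spectrum idea above can be made concrete with a toy term-characteristic map. The TCM entries, specification sentences, and similarity measure below are invented for illustration: each specification is reduced to per-characteristic term counts, and two specifications are compared by the cosine similarity of their spectra.

```python
# Toy quality-spectrum analysis with an invented TCM.
import math
import re

# Invented term-characteristic map: term -> quality characteristic.
TCM = {
    "password": "security", "encrypt": "security",
    "response": "efficiency", "latency": "efficiency",
    "backup": "reliability", "failover": "reliability",
}

def spectrum(spec_text):
    """Count matched terms per quality characteristic."""
    counts = {c: 0 for c in set(TCM.values())}
    for word in re.findall(r"[a-z]+", spec_text.lower()):
        if word in TCM:
            counts[TCM[word]] += 1
    return counts

def similarity(s1, s2):
    """Cosine similarity between two quality spectra."""
    keys = sorted(s1)
    dot = sum(s1[k] * s2[k] for k in keys)
    norm = math.sqrt(sum(v * v for v in s1.values())) * \
           math.sqrt(sum(v * v for v in s2.values()))
    return dot / norm if norm else 0.0

a = spectrum("The system shall encrypt each password and log the latency.")
b = spectrum("Encrypt all stored data; response time and latency must be bounded.")
print(similarity(a, b))  # similar specs yield a high similarity
```

A spectrum of counts is exactly the kind of quantitative feature the abstract describes: two specifications of similar systems should produce similar vectors, and a spec missing an expected peak (e.g. no reliability terms) stands out on inspection.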

  11. Knowledge management and informatics considerations for comparative effectiveness research: a case-driven exploration.

    PubMed

    Embi, Peter J; Hebert, Courtney; Gordillo, Gayle; Kelleher, Kelly; Payne, Philip R O

    2013-08-01

    As clinical data are increasingly collected and stored electronically, their potential use for comparative effectiveness research (CER) grows. Despite this promise, challenges face those wishing to leverage such data. In this paper we aim to enumerate some of the knowledge management and informatics issues common to such data reuse. After reviewing the current state of knowledge regarding biomedical informatics challenges and best practices related to CER, we then present 2 research projects at our institution. We analyze these and highlight several common themes and challenges related to the conduct of CER studies. Finally, we formally represent these emergent themes. The informatics challenges commonly encountered by those conducting CER studies include issues related to data, information and knowledge management (eg, data reuse, data preparation) as well as those related to people and organizational issues (eg, sociotechnical factors and organizational factors). Examples of these are described in further detail and a formal framework for describing these findings is presented. Significant challenges face researchers attempting to use often diverse and heterogeneous datasets for CER. These challenges must be understood in order to be dealt with successfully and can often be overcome with the appropriate use of informatics best practices. Many research and policy questions remain to be answered in order to realize the full potential of the increasingly electronic clinical data available for such research.

  12. Mayo clinic NLP system for patient smoking status identification.

    PubMed

    Savova, Guergana K; Ogren, Philip V; Duffy, Patrick H; Buntrock, James D; Chute, Christopher G

    2008-01-01

    This article describes our system entry for the 2006 i2b2 contest "Challenges in Natural Language Processing for Clinical Data" for the task of identifying the smoking status of patients. Our system makes the simplifying assumption that patient-level smoking status determination can be achieved by accurately classifying individual sentences from a patient's record. We created our system with reusable text analysis components built on the Unstructured Information Management Architecture and Weka. This reuse of code minimized the development effort related specifically to our smoking status classifier. We report precision, recall, F-score, and 95% exact confidence intervals for each metric. Recasting the classification task at the sentence level and reusing code from other text analysis projects allowed us to quickly build a classification system that performs with a system F-score of 92.64 on held-out data tests and of 85.57 on the formal evaluation data. Our general medical natural language engine is easily adaptable to a real-world medical informatics application. Limitations in this use case include negation detection and temporal resolution.
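    A heavily simplified stand-in for this two-stage idea (the real system used UIMA components and Weka classifiers; the keyword rules and precedence order here are invented):

```python
import re

def classify_sentence(sentence):
    """Label one sentence from a clinical record; rules are illustrative only."""
    s = sentence.lower()
    if re.search(r"\b(quit|former|ex-smoker|stopped smoking)\b", s):
        return "PAST_SMOKER"
    if re.search(r"\b(denies smoking|non-?smoker|never smoked)\b", s):
        return "NON_SMOKER"
    if re.search(r"\b(smokes|smoker|packs? per day)\b", s):
        return "CURRENT_SMOKER"
    return "UNKNOWN"

# Precedence when sentences disagree: explicit current use wins, then past use.
PRECEDENCE = ("CURRENT_SMOKER", "PAST_SMOKER", "NON_SMOKER", "UNKNOWN")

def patient_status(record_sentences):
    """Roll sentence-level labels up to a single patient-level status."""
    labels = {classify_sentence(s) for s in record_sentences}
    for status in PRECEDENCE:
        if status in labels:
            return status
    return "UNKNOWN"
```

    The roll-up step is what lets a sentence-level classifier answer the patient-level question the contest posed.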

  13. Using Selection Pressure as an Asset to Develop Reusable, Adaptable Software Systems

    NASA Astrophysics Data System (ADS)

    Berrick, S. W.; Lynnes, C.

    2007-12-01

    The Goddard Earth Sciences Data and Information Services Center (GES DISC) at NASA has over the years developed and honed a number of reusable architectural components for supporting large-scale data centers with a large customer base. These include a processing system (S4PM) and an archive system (S4PA) based upon a workflow engine called the Simple, Scalable, Script-based Science Processor (S4P); an online data visualization and analysis system (Giovanni); and the radically simple and fast data search tool, Mirador. These subsystems are currently reused internally in a variety of combinations to implement customized data management on behalf of instrument science teams and other science investigators. Some of these subsystems (S4P and S4PM) have also been reused by other data centers for operational science processing. Our experience has been that development and utilization of robust, interoperable, and reusable software systems can actually flourish in environments defined by heterogeneous commodity hardware systems, the emphasis on value-added customer service, and continual cost reduction pressures. The repeated internal reuse that is fostered by such an environment encourages and even forces changes to the software that make it more reusable and adaptable. Allowing and even encouraging such selective pressures to software development has been a key factor in the success of S4P and S4PM, which are now available to the open source community under the NASA Open Source Agreement.
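    The S4P work-order/station pattern can be hinted at with a small sketch (the real S4P is a separate, script-based engine; names and payloads here are illustrative): each station wraps one processing script and emits a new work order for the next station.

```python
def make_station(name, process):
    """A station wraps one processing step and tags its provenance."""
    def run(work_order):
        result = process(work_order["data"])
        return {"station": name, "data": result,
                "history": work_order.get("history", []) + [name]}
    return run

# Chain stations the way a data center would wire directories together.
ingest = make_station("ingest", lambda d: d.strip())
produce = make_station("produce", lambda d: d.upper())

def run_pipeline(work_order, stations):
    """Pass a work order through each station in turn."""
    for station in stations:
        work_order = station(work_order)
    return work_order
```

    Because every station presents the same interface, stations can be recombined for a new mission, which is the kind of internal reuse the abstract credits for S4P's robustness.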

  14. A Low Cost Bluetooth Low Energy Transceiver for Wireless Sensor Network Applications with a Front-end Receiver-Matching Network-Reusing Power Amplifier Load Inductor.

    PubMed

    Liang, Zhen; Li, Bin; Huang, Mo; Zheng, Yanqi; Ye, Hui; Xu, Ken; Deng, Fangming

    2017-04-19

    In this work, a low cost Bluetooth Low Energy (BLE) transceiver for wireless sensor network (WSN) applications, with a receiver (RX)-matching network-reusing power amplifier (PA) load inductor, is presented. In order to decrease the die area, only two inductors were used in this work. Besides the one used in the voltage-controlled oscillator (VCO), the PA load inductor was reused as the RX impedance matching component in the front-end. Proper controls have been applied to achieve high transmitter (TX) input impedance when the transceiver is in the receiving mode, and vice versa. This allows the TRX-switch/matching network integration without significant performance degradation. The RX adopted a low-IF structure and integrated a single-ended low noise amplifier (LNA), a current bleeding mixer, a 4th-order complex filter and a continuous-time (CT) delta-sigma analog-to-digital converter (ADC). The TX employed a two-point PLL-based architecture with a non-linear PA. The RX achieved a sensitivity of -93 dBm and consumed 9.7 mW, while the TX achieved a 2.97% error vector magnitude (EVM) with 9.4 mW at 0 dBm output power. This design was fabricated in a 0.11 μm complementary metal oxide semiconductor (CMOS) technology and the front-end circuit only occupies 0.24 mm². The measurement results verify the effectiveness and applicability of the proposed BLE transceiver for WSN applications.

  15. Architectural Large Constructed Environment. Modeling and Interaction Using Dynamic Simulations

    NASA Astrophysics Data System (ADS)

    Fiamma, P.

    2011-09-01

    How can a simulation derived from a large data model be used for architectural design? The topic concerns the phase that usually follows data acquisition, during construction of the model and especially afterwards, when designers must interact with the simulation in order to develop and verify their ideas. In this case study, the concept of interaction includes the concept of real-time "flows". The work develops content and results that can be part of the larger debate about the current connection between "architecture" and "movement". The focus of the work is to realize a collaborative and participative virtual environment in which different specialist actors, the client and the final users can share knowledge, targets and constraints to better achieve the intended result. The goal was to use a dynamic micro-simulation digital resource that allows all the actors to explore the model in a powerful and realistic way and to have a new type of interaction in a complex architectural scenario. On the one hand, the work represents a base of knowledge that can be extended further; on the other hand, it represents an attempt to understand large constructed-architecture simulation as a way of life, a way of being in time and space. The architectural design first, and the architectural fact afterwards, both happen in a sort of "Spatial Analysis System". The way is open to offer to this "system" knowledge and theories that can support architectural design work at every application and scale. Architecture is a spatial configuration, one that can also be reconfigured through design.

  16. Service-Learning and Interior Design: A Case Study

    ERIC Educational Resources Information Center

    Sterling, Mary

    2007-01-01

    The case study approach was used to analyze experiential learning through its three components: knowledge, action, and reflection. Two interior design courses were integrated through a university service-learning project. The restoration/adaptive reuse of a 95-year-old library building was to serve as a prototype for future off-campus…

  17. Designing Learning Object Repositories as Systems for Managing Educational Communities Knowledge

    ERIC Educational Resources Information Center

    Sampson, Demetrios G.; Zervas, Panagiotis

    2013-01-01

    Over the past years, a number of international initiatives that recognize the importance of sharing and reusing digital educational resources among educational communities through the use of Learning Object Repositories (LORs) have emerged. Typically, these initiatives focus on collecting digital educational resources that are offered by their…

  18. Impacts of Residential Demolition and the Sustainable Reuse of Vacant Lots (Cleveland, Ohio)

    EPA Science Inventory

    The summarized research takes a comprehensive look at the nature of urban soils by measuring how fast water moves into the soil, taking deep soil cores, and using soil taxonomy and the cores to understand how water moves through various depths. The research expands our knowledge ...

  19. Extending the ARIADNE Web-Based Learning Environment.

    ERIC Educational Resources Information Center

    Van Durm, Rafael; Duval, Erik; Verhoeven, Bart; Cardinaels, Kris; Olivie, Henk

    One of the central notions of the ARIADNE learning platform is a share-and-reuse approach toward the development of digital course material. The ARIADNE infrastructure includes a distributed database called the Knowledge Pool System (KPS), which acts as a repository of pedagogical material, described with standardized IEEE LTSC Learning Object…

  20. A Mission Concept: Re-Entry Hopper-Aero-Space-Craft System on-Mars (REARM-Mars)

    NASA Technical Reports Server (NTRS)

    Davoodi, Faranak

    2013-01-01

    Future missions to Mars that would need a sophisticated lander, hopper, or rover could benefit from the REARM Architecture. The REARM Architecture mission concept is designed to provide unprecedented capabilities for future Mars exploration missions, including human exploration and possible sample-return missions, as a reusable lander, ascent/descent vehicle, refuelable hopper, multiple-location sample-return collector, laboratory, and cargo system for assets and humans. All of this could be made possible by adding just a single customized Re-Entry-Hopper-Aero-Space-Craft System, called the REARM-spacecraft, and a docking station in Martian orbit, called the REARM-dock. REARM could dramatically decrease the time and expense required to launch new exploratory missions on Mars by making them less dependent on Earth and by reusing the assets already designed, built, and sent to Mars. REARM would introduce a new class of Mars exploration missions, which could explore much larger expanses of Mars much faster and with much more sophisticated lab instruments. The proposed REARM architecture consists of the following subsystems: REARM-dock, REARM-spacecraft, sky-crane, secure attached compartment, sample-return container, agile rover, scalable orbital lab, and on-the-road robotic handymen.

  1. Modular Rocket Engine Control Software (MRECS)

    NASA Technical Reports Server (NTRS)

    Tarrant, Charlie; Crook, Jerry

    1997-01-01

    The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state of the art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for a generic, advanced engine control system that will result in lower software maintenance (operations) costs. It effectively accommodates software requirements changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives and benefits of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, software reuse, and reduced software reverification time following software changes. Currently, the program is focused on supporting MSFC in accomplishing a Space Shuttle Main Engine (SSME) hot-fire test at Stennis Space Center and the Low Cost Boost Technology (LCBT) Program.

  2. A SOA-Based Solution to Monitor Vaccination Coverage Among HIV-Infected Patients in Liguria.

    PubMed

    Giannini, Barbara; Gazzarata, Roberta; Sticchi, Laura; Giacomini, Mauro

    2016-01-01

    Vaccination in HIV-infected patients constitutes an essential tool in the prevention of the most common infectious diseases. The Ligurian Vaccination in HIV Program is a proposed vaccination schedule specifically dedicated to this risk group. Selective strategies are proposed within this program, employing ICT (Information and Communication Technology) tools to identify this susceptible target group, to monitor immunization coverage over time and to manage failures and defaulting. The proposal is to connect an immunization registry system to an existing regional platform that allows clinical data reuse among several medical structures, in order to manage the vaccination process completely. This architecture will adopt a Service Oriented Architecture (SOA) approach and standard HSSP (Health Services Specification Program) interfaces to support interoperability. According to the presented solution, vaccination administration information retrieved from the immunization registry will be structured according to the specifications of the immunization section of the HL7 (Health Level 7) CCD (Continuity of Care Document) document. Immunization coverage will be evaluated through continuous monitoring of serology and antibody titers gathered from the hospital LIS (Laboratory Information System), structured as an HL7 Version 3 (v3) Clinical Document Architecture Release 2 (CDA R2) document.
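    The coverage-monitoring logic, stripped of the HL7/HSSP plumbing the paper actually specifies, might look like the following sketch (patient identifiers, the titer threshold and the failure rules are hypothetical):

```python
# Assumed protective threshold, units arbitrary; for illustration only.
PROTECTIVE_TITER = 10.0

def vaccination_failures(registry, lab_titers):
    """Flag patients with no recorded dose or a sub-threshold antibody titer.

    registry: patient_id -> list of administered vaccine codes
    lab_titers: patient_id -> latest antibody titer from the LIS
    """
    flagged = []
    for patient, doses in registry.items():
        if not doses:
            flagged.append((patient, "no dose recorded"))
        elif lab_titers.get(patient, 0.0) < PROTECTIVE_TITER:
            flagged.append((patient, "titer below threshold"))
    return flagged
```

    In the described architecture, the registry side would come from CCD immunization entries and the titers from CDA R2 lab documents; the sketch only shows the decision step.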

  3. Populating a Library of Reusable H-BOMs: Assessment of a Feasible Image-Based Modeling Workflow

    NASA Astrophysics Data System (ADS)

    Santagati, C.; Lo Turco, M.; D'Agostino, G.

    2017-08-01

    The paper shows the intermediate results of a research activity aimed at populating a library of reusable Historical Building Object Models (H-BOMs) by testing a fully digital workflow that takes advantage of Structure from Motion (SfM) models and is centered on the geometrical/stylistic/material analysis of the architectural element (portal, window, altar). The aim is to find common (invariant) and uncommon (variant) features in terms of the identification of architectural parts and their relationships, geometrical rules, dimensions and proportions, construction materials and units of measure, in order to model archetypal shapes from which it is possible to derive all the style variations. In this regard, a set of 14th-16th century Gothic portals of the Catalan-Aragonese architecture in the Etnean area of Eastern Sicily has been studied and used to assess the feasibility of the identified workflow. This approach tries to answer the increasing demand for guidelines and standards in the field of Cultural Heritage conservation to create and manage semantic-aware 3D models able to include all the information (both geometrical and alphanumerical) concerning historical buildings and able to be reused in several projects.
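    The archetype-plus-variants idea can be sketched as follows (the proportional rules below are invented, not measured from the Etnean portals): the archetype stores the invariant ratios, and each portal variant is derived by fixing the variable features.

```python
# Archetypal portal: invariant proportions plus variant feature slots.
PORTAL_ARCHETYPE = {
    "height_to_width": 1.8,   # invariant proportion (hypothetical)
    "jamb_to_width": 0.15,    # invariant proportion (hypothetical)
    "material": None,         # variant feature, fixed per portal
}

def derive_variant(width, material):
    """Instantiate the archetype at a given width and material."""
    return {
        "width": width,
        "height": round(width * PORTAL_ARCHETYPE["height_to_width"], 3),
        "jamb_width": round(width * PORTAL_ARCHETYPE["jamb_to_width"], 3),
        "material": material,
    }
```

    A library populated this way stores one archetype per element family and regenerates each surveyed variant from a handful of measured parameters.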

  4. Roles and applications of biomedical ontologies in experimental animal science.

    PubMed

    Masuya, Hiroshi

    2012-01-01

    A huge amount of experimental data from past studies has played a vital role in the development of new knowledge and technologies in biomedical science. The importance of computational technologies for the reuse of data, data integration, and knowledge discoveries has also increased, providing means of processing large amounts of data. In recent years, information technologies related to "ontologies" have played more significant roles in the standardization, integration, and knowledge representation of biomedical information. This review paper outlines the history of data integration in biomedical science and its recent trends in relation to the field of experimental animal science.

  5. Home medication injection among Latina women in Los Angeles: implications for health education and prevention.

    PubMed

    Flaskerud, J H; Nyamathi, A M

    1996-02-01

    Reuse of needles and syringes after home injection of medications and vitamins may be a risk for transmission of HIV. An exploratory study was done to determine (1) how commonly injectable medications were used in the home; (2) whether needles and syringes were reused; and (3) common practices for cleaning needles and syringes. A survey was conducted of low-income Latina women (n = 216) who were attending a Public Health Foundation nutrition programme for women, infants and children (WIC) in Los Angeles. To clarify and expand on the survey findings, focus group interviews were done with an additional 55 women attending WIC. Quantitative data were analysed using descriptive and comparative statistics. Qualitative data were subjected to content analysis. The use of injectable medications purchased in Mexico was fairly common (43.5%); reuse of disposable needles and syringes (48%) and sharing (36%) among injectors were also common. Methods of cleaning needles and syringes were inadequate relative to CDC-recommended guidelines. Injectors and non-injectors differed significantly in ethnicity, religion, and marital status. The only significant predictor of medication injection was educational level. Analysis of qualitative data revealed the reasons that Latina subjects were injecting medication, how they were transporting medicines from Mexico, and how they were cleaning their equipment. Health education and prevention programmes should include an awareness that home use and reuse of needles for injection of medications may be common in some social groups and that knowledge of the potential dangers of reusing and sharing needles may not extend to home medication injection.

  6. Architectural Physics: Lighting.

    ERIC Educational Resources Information Center

    Hopkinson, R. G.

    The author coordinates the many diverse branches of knowledge which have dealt with the field of lighting--physiology, psychology, engineering, physics, and architectural design. Part I, "The Elements of Architectural Physics", discusses the physiological aspects of lighting, visual performance, lighting design, calculations and measurements of…

  7. Architectures Toward Reusable Science Data Systems

    NASA Astrophysics Data System (ADS)

    Moses, J. F.

    2014-12-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building ground systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research, NOAA's weather satellites and USGS's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience, the goal is to recognize architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.
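    One of the reusable elements mentioned, consistent product file naming, can be sketched as follows (the convention shown is invented for illustration, not a NASA standard): encode mission, product type and observation time so that files sort chronologically and parse back into metadata.

```python
from datetime import datetime

def product_filename(mission, product, start):
    """Build a sortable, parseable product file name (illustrative scheme)."""
    return "{}_{}_{}.nc".format(mission, product, start.strftime("%Y%m%dT%H%M%S"))

def parse_filename(name):
    """Recover the metadata encoded in a product file name."""
    stem = name.rsplit(".", 1)[0]
    mission, product, stamp = stem.split("_")
    return {"mission": mission, "product": product,
            "start": datetime.strptime(stamp, "%Y%m%dT%H%M%S")}
```

    Fixing such a convention once lets ingest, search and distribution components be reused unchanged across missions, since file names alone carry the metadata each stage needs.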

  8. Design of a terminal solution for integration of in-home health care devices and services towards the Internet-of-Things

    NASA Astrophysics Data System (ADS)

    Pang, Zhibo; Zheng, Lirong; Tian, Junzhe; Kao-Walter, Sharon; Dubrova, Elena; Chen, Qiang

    2015-01-01

    In-home health care services based on the Internet-of-Things are promising to resolve the challenges caused by the ageing of the population. But the existing research is rather scattered and shows a lack of interoperability. In this article, a business-technology co-design methodology is proposed for cross-boundary integration of in-home health care devices and services. In this framework, three key elements of a solution (business model, device and service integration architecture and information system integration architecture) are organically integrated and aligned. In particular, a cooperative Health-IoT ecosystem is formulated, and the information systems of all stakeholders are integrated in a cooperative health cloud as well as extended to patients' homes through the in-home health care station (IHHS). Design principles of the IHHS include reuse of the 3C platform, certification of the Health Extension, interoperability and extendibility, convenient and trusted software distribution, standardised and secure electronic health record handling, effective service composition and efficient data fusion. These principles are applied to the design of an IHHS solution called iMedBox. The detailed device and service integration architecture and hardware and software architecture are presented and verified by an implemented prototype. Quantitative performance analysis and field trials have confirmed the feasibility of the proposed design methodology and solution.

  9. Architectures Toward Reusable Science Data Systems

    NASA Technical Reports Server (NTRS)

    Moses, John

    2015-01-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research and NOAA's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience, we expect to find architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  10. Evolutionary Local Search of Fuzzy Rules through a novel Neuro-Fuzzy encoding method.

    PubMed

    Carrascal, A; Manrique, D; Ríos, J; Rossi, C

    2003-01-01

    This paper proposes a new approach for constructing fuzzy knowledge bases using evolutionary methods. We have designed a genetic algorithm that automatically builds neuro-fuzzy architectures based on a new indirect encoding method. The neuro-fuzzy architecture represents the fuzzy knowledge base that solves a given problem; the search for this architecture takes advantage of a local search procedure that improves the chromosomes at each generation. Experiments conducted both on artificially generated and real world problems confirm the effectiveness of the proposed approach.
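    The overall strategy, a genetic algorithm whose chromosomes are refined by a local search at each generation, can be sketched on a toy real-valued problem (this illustrates the memetic loop only, not the paper's neuro-fuzzy encoding):

```python
import random

random.seed(1)
TARGET = 0.7  # toy problem: maximize -(x - TARGET)^2

def fitness(x):
    return -(x - TARGET) ** 2

def local_search(x, step=0.01, tries=5):
    """Hill-climb each chromosome a little before selection."""
    for _ in range(tries):
        for candidate in (x - step, x + step):
            if fitness(candidate) > fitness(x):
                x = candidate
    return x

def evolve(pop_size=20, generations=30):
    pop = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop = [local_search(x) for x in pop]   # Lamarckian improvement step
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [(random.choice(parents) + random.choice(parents)) / 2
                    + random.gauss(0.0, 0.02)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children               # elitist selection + crossover
    return max(pop, key=fitness)
```

    In the paper the chromosomes encode neuro-fuzzy architectures via an indirect encoding rather than a single real value, but the interleaving of evolution and local improvement is the same.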

  11. "Re-Casting Terra Nullius Design-Blindness": Better Teaching of Indigenous Knowledge and Protocols in Australian Architecture Education

    ERIC Educational Resources Information Center

    Tucker, Richard; Choy, Darryl Low; Heyes, Scott; Revell, Grant; Jones, David

    2018-01-01

    This paper reviews the current status and focus of Australian Architecture programs with respect to Indigenous Knowledge and the extent to which these tertiary programs currently address reconciliation and respect to Indigenous Australians in relation to their professional institutions and accreditation policies. The paper draws upon the findings…

  12. Joint Composable Object Model and LVC Methodology

    NASA Technical Reports Server (NTRS)

    Rheinsmith, Richard; Wallace, Jeffrey; Bizub, Warren; Ceranowicz, Andy; Cutts, Dannie; Powell, Edward T.; Gustavson, Paul; Lutz, Robert; McCloud, Terrell

    2010-01-01

    Within the Department of Defense, multiple architectures are created to serve one or several specific service- or mission-related LVC training goals. Multiple object models exist across and within those architectures, and those disparate object models are a major source of interoperability problems when developing and constructing training scenarios. The two most commonly used architectures are HLA and TENA, with DIS and CTIA following close behind in terms of the number of users. Although these architectures can share and exchange data, the underlying meta-models for runtime data exchange are quite different, requiring gateways/translators to bridge between the different object model representations. While the Department of Defense's use of gateways is generally effective in performing these functions, as the LVC environment grows, so too do the cost and complexity of these gateways. Coupled with the wide range of different object models across the various user communities, this increases the propensity for run-time errors, programmer stop-gap measures during coordinated exercises, or failure of the system as a whole due to unknown or unforeseen incompatibilities. The Joint Composable Object Model (JCOM) project was established under an M&S Steering Committee (MSSC)-sponsored effort with oversight and control placed under the Joint Forces Command J7 Advanced Concepts Program Directorate. The purpose of this paper is to address initial and current progress in the following areas: the Conceptual Model Development Format, the Common Object Model, the Architecture Neutral Data Exchange Model (ANDEM), and the association methodology that allows the reuse of multiple architectures' object models and the development of a prototype persistent reusable library.
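    The payoff of an architecture-neutral exchange model can be sketched in miniature (attribute names below are hypothetical; the real ANDEM metamodel is far richer): each architecture maps to one shared neutral form, so any pair interoperates through a single hub instead of pairwise gateways.

```python
# Hypothetical attribute mappings from two architectures to a neutral model.
HLA_TO_NEUTRAL = {"EntityId": "id", "WorldLocation": "position"}
TENA_TO_NEUTRAL = {"tspi_id": "id", "tspi_location": "position"}

def to_neutral(record, mapping):
    """Project an architecture-specific record onto the neutral model."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

def from_neutral(neutral, mapping):
    """Rebuild an architecture-specific record from the neutral form."""
    inverse = {v: k for k, v in mapping.items()}
    return {inverse[k]: v for k, v in neutral.items() if k in inverse}

def translate(record, src_mapping, dst_mapping):
    """Bridge any two object models via the single neutral representation."""
    return from_neutral(to_neutral(record, src_mapping), dst_mapping)
```

    With N architectures this needs N mappings to the neutral model rather than N(N-1) pairwise gateways, which is the cost argument the abstract makes against gateway proliferation.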

  13. Modernism in Belgrade: Classification of Modernist Housing Buildings 1919-1980

    NASA Astrophysics Data System (ADS)

    Dragutinovic, Anica; Pottgiesser, Uta; De Vos, Els; Melenhorst, Michel

    2017-10-01

    Yugoslavian Modernist Architecture, although part of a larger cultural phenomenon, has received hardly any international attention; only a few studies about it have been published internationally. Nevertheless, the Modernist Architecture of Inter-war Yugoslavia (the Kingdom of Yugoslavia), and especially the Modernist Architecture of Post-war Yugoslavia (the Socialist Federal Republic of Yugoslavia under the “reign” of Tito), represents the most important architectural heritage of the 20th century in the former Yugoslavian countries. Belgrade, as the capital city of both newly founded Yugoslavia(s), experienced an immediate economic, political and cultural expansion after both wars, as well as a large population increase. The construction of sufficient and appropriate new housing was a major undertaking in both periods (1919-1940 and 1948-1980), however conceived and realized with deeply diverging views. The transition from villas and modest apartment buildings, the main housing typologies of the Inter-war period, to the mass housing of the Post-war period was a result not only of the different socio-political contexts of the two Yugoslavia(s), but also of the country’s industrialization, modernization and technological development. Through a classification of Modernist housing buildings in Belgrade, this paper investigates the relations between the transformations of the main housing typologies executed under different socio-political contexts on the one side, and the development of the building technologies, construction systems and materials applied to those buildings on the other. The paper aims to shed light on Yugoslavian Modernist Architecture in order to increase international awareness of its architectural and heritage values. The aim is an integrated re-evaluation of the buildings, a presentation of their current condition and their potential for future (re)use, with a specific focus on building envelopes and construction.

  14. Exploration Space Suit Architecture and Destination Environmental-Based Technology Development

    NASA Technical Reports Server (NTRS)

    Hill, Terry R.; Korona, F. Adam; McFarland, Shane

    2012-01-01

    This paper continues forward where EVA Space Suit Architecture: Low Earth Orbit Vs. Moon Vs. Mars [1] left off in the development of a space suit architecture that is modular in design and could be reconfigured prior to launch or during any given mission depending on the tasks or destination. This paper will address the space suit system architecture and technologies required based upon human exploration extravehicular activity (EVA) destinations, and describe how they should evolve to meet the future exploration EVA needs of the US human space flight program [1, 2, 3]. In looking forward to future US space exploration with a space suit architecture that maximizes reuse of technology and functionality across a range of mission profiles and destinations, a series of exercises and analyses have provided a strong indication that the Constellation Program (CxP) space suit architecture is postured to provide a viable solution for future exploration missions [4]. The destination environmental analysis presented in this paper demonstrates that the modular architecture approach could provide the lowest mass and mission cost for the protection of the crew on any human mission outside of low-Earth orbit (LEO). Additionally, some of the high-level trades presented here review the environmental and non-environmental design drivers that will become increasingly important the farther away from Earth humans venture. This paper demonstrates a logical clustering of destination design environments that allows a focused approach to technology prioritization, development, and design that will maximize the return on investment, independent of any particular program, and provide architecture and design solutions for space suit systems in time or ahead of need dates for any particular crewed flight program in the future.
The discussion of the approach to space suit design and interface definition will show how the architecture is very adaptable to programmatic and funding changes with minimal redesign effort, such that the modular architecture can be quickly and efficiently honed into a specific mission point solution if required. Additionally, the modular system will allow for specific technology incorporation and upgrades as required with minimal redesign of the system.

  15. Exploration Space Suit Architecture: Destination Environmental-Based Technology Development

    NASA Technical Reports Server (NTRS)

    Hill, Terry R.

    2010-01-01

    This paper picks up where EVA Space Suit Architecture: Low Earth Orbit Vs. Moon Vs. Mars (Hill, Johnson, IEEEAC paper #1209) left off in the development of a space suit architecture that is modular in design and interfaces and could be reconfigured prior to launch or during any given mission depending on the tasks or destination. This paper will walk through the continued development of a space suit system architecture and how it should evolve to meet the future exploration EVA needs of the United States space program. In looking forward to future US space exploration and determining how the work performed to date in the CxP would map to a future space suit architecture with maximum reuse of technology and functionality, a series of thought exercises and analyses have provided a strong indication that the CxP space suit architecture is well postured to provide a viable solution for future exploration missions. Through the destination environmental analysis presented in this paper, the modular architecture approach provides the lowest-mass, lowest-mission-cost protection of the crew for any human mission outside of low Earth orbit. Some of the studies presented here provide a review and validation of the non-environmental design drivers that will become ever more important the farther away from Earth humans venture and the longer they are away. Additionally, the analysis demonstrates a logical clustering of design environments that allows a very focused approach to technology prioritization, development and design that will maximize the return on investment independent of any particular program and provide architecture and design solutions for space suit systems in time or ahead of being required for any particular crewed flight program in the future.
The discussion of the new approach to space suit design and interface definition will show how the architecture is very adaptable to programmatic and funding changes with minimal redesign effort required, such that the modular architecture can be quickly and efficiently honed into a specific mission point solution if required.

  16. The ASTRI SST-2M telescope prototype for the Cherenkov Telescope Array: camera DAQ software architecture

    NASA Astrophysics Data System (ADS)

    Conforti, Vito; Trifoglio, Massimo; Bulgarelli, Andrea; Gianotti, Fulvio; Fioretti, Valentina; Tacchini, Alessandro; Zoli, Andrea; Malaguti, Giuseppe; Capalbi, Milvia; Catalano, Osvaldo

    2014-07-01

ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a Flagship Project financed by the Italian Ministry of Education, University and Research, and led by INAF, the Italian National Institute of Astrophysics. Within this framework, INAF is currently developing an end-to-end prototype of a Small Size dual-mirror Telescope. In a second phase, the ASTRI project foresees the installation of the first elements of the array at the CTA southern site: a mini-array of 7 telescopes. The ASTRI Camera DAQ Software handles Camera data acquisition, storage and display during Camera development as well as during commissioning and operations on the ASTRI SST-2M telescope prototype, which will operate at the INAF observing station located at Serra La Nave on Mount Etna (Sicily). Camera DAQ configuration and operations will be sequenced either through local operator commands or through remote commands received from the Instrument Controller System that commands and controls the Camera. The Camera DAQ software will acquire data packets through a direct one-way socket connection with the Camera Back End Electronics. In near real time, the data will be stored in both raw and FITS format. The DAQ Quick Look component will allow the operator to display the Camera data packets in near real time. We are developing the DAQ software following an iterative and incremental model in order to maximize software reuse and to implement a system that is easily adaptable to changes. This contribution presents the Camera DAQ Software architecture, with particular emphasis on its potential reuse for the ASTRI/CTA mini-array.

  17. An interoperable research data infrastructure to support climate service development

    NASA Astrophysics Data System (ADS)

    De Filippis, Tiziana; Rocchi, Leandro; Rapisardi, Elena

    2018-02-01

    Accessibility, availability, re-use and re-distribution of scientific data are prerequisites for building climate services across Europe. From this perspective, the Institute of Biometeorology of the National Research Council (IBIMET-CNR), aiming to contribute to the sharing and integration of research data, has developed a research data infrastructure to support the scientific activities conducted in several national and international research projects. The proposed architecture uses open-source tools to ensure sustainability in the development and deployment of Web applications with geographic features and data analysis functionalities. The spatial data infrastructure components are organized in a typical client-server architecture and interact at every stage, from the data provider's download and ingestion processes to the representation of results to end users. The availability of structured raw data as customized information paves the way for building climate service purveyors to support adaptation, mitigation and risk management at different scales.

    This work is a bottom-up collaborative initiative between different IBIMET-CNR research units (e.g. geomatics and information and communication technology - ICT; agricultural sustainability; international cooperation in least developed countries - LDCs) that embrace the same approach for sharing and re-use of research data and informatics solutions based on co-design, co-development and co-evaluation among different actors to support the production and application of climate services. During the development phase of Web applications, different users (internal and external) were involved in the whole process so as to better define user needs and suggest the implementation of specific custom functionalities. Indeed, the services are addressed to researchers, academics, public institutions and agencies - practitioners who can access data and findings from recent research in the field of applied meteorology and climatology.

  18. Modular System to Enable Extravehicular Activity

    NASA Technical Reports Server (NTRS)

    Sargusingh, Miriam J.

    2011-01-01

    The ability to perform extravehicular activity (EVA), both human and robotic, has been identified as a key component of space missions, supporting operations such as assembly and maintenance of space systems (e.g., construction and maintenance of the International Space Station) and unscheduled activities to repair elements of the transportation and habitation systems that can only be accessed externally, via unpressurized areas. In order to make human transportation beyond low Earth orbit (BLEO) practical, efficiencies must be incorporated into the integrated transportation systems to reduce system mass and operational complexity. Affordability is also a key aspect to be considered in space system development; it could be achieved through commonality, modularity and component reuse. Another key aspect identified for the EVA system was the ability to produce flight-worthy hardware quickly to support early missions and near-Earth technology demonstrations. This paper details a conceptual architecture for a modular extravehicular activity system (MEVAS) that would meet these stated needs for an EVA capability that is affordable and could be produced relatively quickly. Operational concepts were developed to elaborate on the defined needs and to define the key capabilities, operational and design constraints, and general timelines. The operational concept led to a high-level design concept for a module that interfaces with various space transportation elements and contains the hardware and systems required to support human and telerobotic EVA; the module would not be self-propelled and would rely on an interfacing element for consumable resources. 
The conceptual architecture was then compared to the EVA systems used on the Space Shuttle Orbiter and on the International Space Station to develop high-level design concepts that incorporate opportunities for cost savings through hardware reuse and for quick production through the use of existing technologies and hardware designs. An upgrade option was included to make use of developing suitport technologies.

  19. Using multiple-accumulator CMACs to improve efficiency of the X part of an input-buffered FX correlator

    NASA Astrophysics Data System (ADS)

    Lapshev, Stepan; Hasan, S. M. Rezaul

    2017-04-01

    This paper presents the approach of using complex multiplier-accumulators (CMACs) with multiple accumulators to reduce the total number of memory operations in an input-buffered architecture for the X part of an FX correlator. A processing unit of this architecture uses an array of CMACs that are reused for different groups of baselines. The disadvantage of processing correlations in this way is that each input data sample has to be read multiple times from the memory because each input signal is used in many of these baseline groups. While a one-accumulator CMAC cannot switch to a different baseline until it is finished integrating the current one, a multiple-accumulator CMAC can. Thus, the array of multiple-accumulator CMACs can switch between processing different baselines that share some input signals at any moment to reuse the current data in the processing buffers. In this way significant reductions in the number of memory read operations are achieved with only a few accumulators per CMAC. For example, for a large number of input signals three-accumulator CMACs reduce the total number of memory operations by more than a third. Simulated energy measurements of four VLSI designs in a high-performance 28 nm CMOS technology are presented in this paper to demonstrate that using multiple accumulators can also lead to reduced power dissipation of the processing array. Using three accumulators as opposed to one has been found to reduce the overall energy of 8-bit CMACs by 1.4% through the reduction of the switching activity within their circuits, which is in addition to a more than 30% reduction in the memory.
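The read-sharing argument above can be made concrete with a deliberately simplified accounting (an illustration of the idea only, not the paper's architecture or its measured figures; the function name and the cost model are assumptions): if samples from a tile of `tile_size` input signals are held in the processing buffers, every baseline within that tile can be advanced without re-reading memory, so the read cost per sample is amortized over tile_size*(tile_size-1)/2 baselines.

```python
def reads_per_baseline(tile_size: int) -> float:
    """Memory reads per baseline per input sample, assuming samples from
    `tile_size` signals are buffered and all tile_size*(tile_size-1)/2
    baselines among them are integrated concurrently (toy model)."""
    n_baselines = tile_size * (tile_size - 1) // 2
    return tile_size / n_baselines  # equals 2 / (tile_size - 1)

# one open baseline at a time: both inputs are re-read for every baseline
assert reads_per_baseline(2) == 2.0
# buffering three signals serves three baselines with only three reads
assert reads_per_baseline(3) == 1.0
```

In this toy model, going from one open baseline at a time to three concurrent ones halves the reads per baseline; the paper's own accounting, which models the full CMAC array and its input buffers, reports a reduction of more than a third in total memory operations for three-accumulator CMACs.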

  20. A novel wavelength reused bidirectional RoF-WDM-PON architecture to mitigate reflection and Rayleigh backscattered noise in multi-Gb/s m-QAM OFDM SSB upstream and downstream transmission over a single fiber

    NASA Astrophysics Data System (ADS)

    Patel, Dhananjay; Dalal, U. D.

    2017-05-01

    A novel m-QAM Orthogonal Frequency Division Multiplexing (OFDM) Single Sideband (SSB) architecture is proposed for a centralized light source (CLS) bidirectional Radio over Fiber (RoF) Wavelength Division Multiplexing (WDM) Passive Optical Network (PON). In bidirectional transmission with carrier reuse over a single fiber, Rayleigh Backscattering (RB) noise and reflection (RE) interference from optical components can seriously deteriorate the transmission performance of fiber optic systems. These interferometric noises can be mitigated by choosing the optical modulation schemes at the Optical Line Terminal (OLT) and Optical Network Unit (ONU) such that the spectral overlap between the optical data spectrum and the RB and RE noise is minimal. A mathematical model is developed for the proposed architecture to accurately measure the performance of the transmission system and to analyze the effect of the interferometric noise caused by RB and RE. The model takes into account the different modulation schemes employed at the OLT and the ONU using a Mach Zehnder Modulator (MZM), the optical launch power and bit-rates of the downstream and upstream signals, the gain of the amplifiers at the OLT and the ONU, the RB-RE noise, the chromatic dispersion of the single mode fiber and the optical filter responses. In addition, the model analyzes all the components of the RB-RE noise, namely carrier RB, signal RB, carrier RE and signal RE, thus providing a complete representation of all the physical phenomena involved. An optical m-QAM OFDM SSB signal acts as a test signal to validate the model, which shows excellent agreement with simulation results. The SSB modulation applied with the MZM at the OLT and at the ONU differs in that data transmission takes place through the first-order upper and lower optical sidebands, respectively. 
This spectral gap between the downstream and upstream signals reduces the effect of Rayleigh backscattering and discrete reflections.

  1. Implementation of a metadata architecture and knowledge collection to support semantic interoperability in an enterprise data warehouse.

    PubMed

    Dhaval, Rakesh; Borlawsky, Tara; Ostrander, Michael; Santangelo, Jennifer; Kamal, Jyoti; Payne, Philip R O

    2008-11-06

    In order to enhance interoperability between enterprise systems, and improve data validity and reliability throughout The Ohio State University Medical Center (OSUMC), we have initiated the development of an ontology-anchored metadata architecture and knowledge collection for our enterprise data warehouse. The metadata and corresponding semantic relationships stored in the OSUMC knowledge collection are intended to promote consistency and interoperability across the heterogeneous clinical, research, business and education information managed within the data warehouse.

  2. Design of Ontology-Based Sharing Mechanism for Web Services Recommendation Learning Environment

    NASA Astrophysics Data System (ADS)

    Chen, Hong-Ren

    The number of digital learning websites is growing as a result of advances in computer technology and new techniques in web page creation. These sites contain a wide variety of information but may be a source of confusion to learners who fail to find the information they are seeking. This has led to the concept of recommendation services to help learners acquire information and learning resources that suit their requirements. Learning content of this kind, however, cannot be reused by other digital learning websites. A successful recommendation service that satisfies a particular learner must cooperate with many other digital learning objects in order to achieve the required relevance. This study proposes using the theory of knowledge construction in ontology to make the sharing and reuse of digital learning resources possible. The learning recommendation system is accompanied by recommendations of appropriate teaching materials to help learners enhance their learning abilities. A variety of diverse learning components scattered across the Internet can be organized through an ontological process so that learners can use the information by storing, sharing, and reusing it.

  3. Effect of formaldehyde/bleach reprocessing on in vivo performances of high-efficiency cellulose and high-flux polysulfone dialyzers.

    PubMed

    Murthy, B V; Sundaram, S; Jaber, B L; Perrella, C; Meyer, K B; Pereira, B J

    1998-03-01

    Among the several disadvantages of reprocessed dialyzers is the concern that reuse could decrease the clearance of uremic toxins, leading to a decrease in the delivered dose of dialysis. To examine this possibility in the clinical setting, the clearances of small molecular weight solutes (urea and creatinine) and middle molecular weight substances (beta 2 microglobulin) were compared during dialysis with "high-efficiency" cellulose (T220L) and "high-flux" polysulfone (F80B) dialyzers reprocessed with formaldehyde and bleach. In a crossover study, six chronic hemodialysis patients were alternately assigned to undergo 21 dialysis treatments with a single T220L dialyzer or F80B dialyzer. Each patient was studied during first use (0 reuse), 2nd reuse (3rd use), and 5th, 10th, 15th, and 20th reuse of each dialyzer. Urea, creatinine, and beta 2 microglobulin clearances were measured at blood flow rates of 300 ml/min (Qb 300) and 400 ml/min (Qb 400). Total albumin loss into the dialysate was measured during each treatment. Neither the urea nor the creatinine clearance of new T220L dialyzers was significantly different from that of new F80B dialyzers at either Qb. Urea clearance of F80B dialyzers decreased at Qb 300 from 241 +/- 2 ml/min for new dialyzers to 221 +/- 5 ml/min after 20 reuses (P < 0.001), and at Qb 400 from 280 +/- 4 ml/min for new dialyzers to 253 +/- 7 ml/min after 20 reuses (P = 0.001). Similarly, with reuse, creatinine clearance of F80B dialyzers also decreased at Qb 300 (P = 0.07) and Qb 400 (P = 0.03). In contrast, urea and creatinine clearances of T220L dialyzers did not decrease with reuse at either Qb. Urea clearance of T220L dialyzers was significantly higher than that of F80B at Qb 300 at the 5th, 10th, 15th, and 20th reuse (P < 0.001, = 0.005, = 0.004, and = 0.006, respectively), and at Qb 400 at the 2nd, 5th, 10th, 15th, and 20th reuse (P = 0.04, 0.008, 0.03, 0.02, and 0.008, respectively). 
Beta 2 microglobulin clearance of T220L dialyzers was < 5.0 ml/min across the reuses studied. Beta 2 microglobulin clearance of F80B was < 5.0 ml/min for new dialyzers, but increased to 21.2 +/- 5.3 ml/min (Qb 300) and 23.6 +/- 3.3 ml/min (Qb 400) after 20 reuses (P < 0.001). Throughout the study, albumin was undetectable in the dialysate with T220L dialyzers. With F80B dialyzers, albumin was detected in the dialysate in four instances (total loss during dialysis, 483 mg to 1.467 g). In summary, the results of this study emphasize the greater need for information on dialyzer clearances during clinical dialysis, especially with reprocessed dialyzers. A more accurate knowledge of dialyzer performance in vivo would help to ensure that the dose of dialysis prescribed is indeed delivered to the patients.

  4. Cross-Organizational Knowledge Sharing: Information Reuse in Small Organizations

    ERIC Educational Resources Information Center

    White, Kevin Forsyth

    2010-01-01

    Despite the potential value of leveraging organizational memory and expertise, small organizations have been unable to capitalize on its promised value. Existing solutions have largely side-stepped the unique needs of these organizations, which are relegated to systems designed to take advantage of large pools of experts or to use Internet sources…

  5. Semantic Social Network Portal for Collaborative Online Communities

    ERIC Educational Resources Information Center

    Neumann, Marco; O'Murchu, Ina; Breslin, John; Decker, Stefan; Hogan, Deirdre; MacDonaill, Ciaran

    2005-01-01

    Purpose: The motivation for this investigation is to apply social networking features to a semantic network portal, which supports the efforts in enterprise training units to up-skill the employee in the company, and facilitates the creation and reuse of knowledge in online communities. Design/methodology/approach: The paper provides an overview…

  6. Diy Geospatial Web Service Chains: Geochaining Make it Easy

    NASA Astrophysics Data System (ADS)

    Wu, H.; You, L.; Gui, Z.

    2011-08-01

    It is a great challenge for beginners to create, deploy and utilize a Geospatial Web Service Chain (GWSC). People in computer science are usually not familiar with geospatial domain knowledge; geospatial practitioners may lack knowledge about web services and service chains; end users may lack both. However, integrated visual editing interfaces, validation tools, and one-click deployment wizards can help lower the learning curve and improve modelling skills, so that beginners have a better experience. GeoChaining is a GWSC modelling tool designed and developed on these ideas. GeoChaining integrates visual editing, validation, deployment, execution, etc. into a unified platform. By employing a Virtual Globe, users can intuitively visualize the raw data and the results produced by GeoChaining. All of these features allow users to start using GWSC easily, regardless of their professional background and computer skills. Further, GeoChaining supports GWSC model reuse: an entire GWSC model, or even a specific part of one, can be directly reused in a new model. This greatly improves the efficiency of creating a new GWSC and also contributes to the sharing and interoperability of GWSC.

  7. Evaluating pictogram prediction in a location-aware augmentative and alternative communication system.

    PubMed

    Garcia, Luís Filipe; de Oliveira, Luís Caldas; de Matos, David Martins

    2016-01-01

    This study compared the performance of two statistical location-aware pictogram prediction mechanisms, with an all-purpose (All) pictogram prediction mechanism, having no location knowledge. The All approach had a unique language model under all locations. One of the location-aware alternatives, the location-specific (Spec) approach, made use of specific language models for pictogram prediction in each location of interest. The other location-aware approach resulted from combining the Spec and the All approaches, and was designated the mixed approach (Mix). In this approach, the language models acquired knowledge from all locations, but a higher relevance was assigned to the vocabulary from the associated location. Results from simulations showed that the Mix and Spec approaches could only outperform the baseline in a statistically significant way if pictogram users reuse more than 50% and 75% of their sentences, respectively. Under low sentence reuse conditions there were no statistically significant differences between the location-aware approaches and the All approach. Under these conditions, the Mix approach performed better than the Spec approach in a statistically significant way.
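The Mix approach described above can be sketched as an interpolated unigram model (an illustrative reconstruction, not the authors' implementation; `mix_probability`, `local_weight` and the data layout are assumptions): every location's usage log contributes to the estimate, but the current location's counts receive a higher weight.

```python
from collections import Counter

def mix_probability(pictogram, location, logs_by_location, local_weight=0.75):
    """P(pictogram) under a 'Mix'-style model: counts from every location
    contribute, but the current location's counts are weighted more
    heavily (toy unigram version of a location-aware language model)."""
    local = Counter(logs_by_location.get(location, []))
    global_counts = Counter()
    for log in logs_by_location.values():
        global_counts.update(log)
    p_local = local[pictogram] / (sum(local.values()) or 1)
    p_global = global_counts[pictogram] / (sum(global_counts.values()) or 1)
    return local_weight * p_local + (1 - local_weight) * p_global

# pictograms used often in this location rank above out-of-location ones
logs = {"kitchen": ["eat", "drink", "eat"], "school": ["read", "write"]}
assert mix_probability("eat", "kitchen", logs) > mix_probability("read", "kitchen", logs)
```

Setting `local_weight` to 0 recovers the all-purpose (All) behaviour and setting it to 1 recovers the location-specific (Spec) behaviour, which is why the Mix approach sits between the two in the simulations.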

  8. A prototype knowledge-based decision support system for industrial waste management. Part 2: Application to a Trinidadian industrial estate case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyle, C.A.; Baetz, B.W.

    1998-09-01

    A knowledge-based decision support system (KBDSS) has been developed to examine the potential for reuse, co-treatment, recycling and disposal of wastes from different industrial facilities. Four plants on the Point Lisas Industrial Estate in Trinidad were selected to test this KBDSS: a gas processing plant, a methanol plant, a fertilizer/ammonia plant and a steel processing plant. A total of 77 wastes were produced by the plants (51,481,500 t year⁻¹), with the majority being released into the ocean or emitted into the air. Seventeen wastes were already being recycled off-site and so were not included in the database. Using a knowledge base of 25 possible treatment processes, the KBDSS generated over 4,600 treatment train options for managing the plant wastes. The developed system was able to determine treatment options for the wastes that would minimize the number of treatments and the amount of secondary wastes produced, and maximize the potential for reuse, recycling and co-treatment of wastes.

  9. The Need for Software Architecture Evaluation in the Acquisition of Software-Intensive Systems

    DTIC Science & Technology

    2014-01-01

    Function and Performance Specification; GIG, Global Information Grid; ISO, International Standard Organisation; MDA, Model Driven Architecture ... architecture and design, which is a key part of the knowledge-based economy (DSTO-TR-2936) ... allow Australian SMEs to

  10. Using ArchE in the Classroom: One Experience

    DTIC Science & Technology

    2007-09-01

    The Architecture Expert (ArchE) tool serves as a software architecture design assistant. It embodies knowledge of quality attributes and the relation...between the achievement of quality attribute requirements and architecture design . This technical note describes the use of a pre-alpha release of

  11. The AI Bus architecture for distributed knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Schultz, Roger D.; Stobie, Iain

    1991-01-01

    The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running on heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture are described, along with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes, or demons, provide an event-driven means of giving active objects shared access to resources, and to each other, without violating their security.
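The blackboard-with-knowledge-sources pattern described above can be illustrated with a minimal sketch (not the AI Bus API; all class and method names are assumptions): knowledge sources register a triggering condition and an action, and posting new data event-drives whichever sources become applicable.

```python
class Blackboard:
    """Toy shared blackboard: knowledge sources register a condition and an
    action; posting a fact triggers every source whose condition holds."""
    def __init__(self):
        self.facts = {}
        self.sources = []  # (condition, action) pairs

    def register(self, condition, action):
        self.sources.append((condition, action))

    def post(self, key, value):
        self.facts[key] = value
        # event-driven activation: re-check every knowledge source
        for condition, action in self.sources:
            if condition(self.facts):
                action(self)

bb = Blackboard()
# a knowledge source that derives a conclusion once both inputs appear
bb.register(lambda f: "a" in f and "b" in f and "sum" not in f,
            lambda board: board.post("sum", board.facts["a"] + board.facts["b"]))
bb.post("a", 2)
bb.post("b", 3)
assert bb.facts["sum"] == 5
```

The "sum not already posted" clause in the condition is what keeps the derived fact from re-triggering its own source, the same kind of guard a real blackboard scheduler enforces.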

  12. Current practices and barriers to the use of facemasks and respirators among hospital-based health care workers in Vietnam.

    PubMed

    Chughtai, Abrar Ahmad; Seale, Holly; Chi Dung, Tham; Maher, Lisa; Nga, Phan Thi; MacIntyre, C Raina

    2015-01-01

    This study aimed to examine the knowledge, attitudes, and practices towards the use of facemasks among hospital-based health care workers (HCWs) in Hanoi, Vietnam. A qualitative study incorporating 20 focus groups was conducted between August 2010 and May 2011. HCWs from 7 hospitals in Vietnam were invited to participate. Issues associated with the availability of facemasks (medical and cloth masks) and respirators were the strongest theme to emerge from the discussions. Participants reported that it is not unusual for some types of facemasks to be unavailable during nonemergency periods. It was highlighted that the use of facemasks and respirators is not continuous, but rather is limited to selected situations, locations, and patients. Reuse of facemasks and respirators is also common in some settings. Finally, some participants reported believing that the reuse of facemasks, particularly cloth masks, is safe, whereas others believed that the reuse of masks put staff at risk of infection. In low and middle-income countries, access to appropriate levels of personal protective equipment may be restricted owing to competing demands for funding in hospital settings. It is important that issues around reuse and extended use of medical masks/respirators and decontamination of cloth masks are addressed in policy documents to minimize the risk of infection. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  13. EXPECT: Explicit Representations for Flexible Acquisition

    NASA Technical Reports Server (NTRS)

    Swartout, Bill; Gil, Yolanda

    1995-01-01

    To create more powerful knowledge acquisition systems, we not only need better acquisition tools, but we need to change the architecture of the knowledge-based systems we create so that their structure will provide better support for acquisition. Current acquisition tools permit users to modify factual knowledge, but they provide limited support for modifying problem-solving knowledge. In this paper, the authors argue that this limitation (and others) stems from the use of incomplete models of problem-solving knowledge and inflexible specification of the interdependencies between problem-solving and factual knowledge. We describe the EXPECT architecture, which addresses these problems by providing an explicit representation of problem-solving knowledge and intent. Using this more explicit representation, EXPECT can automatically derive the interdependencies between problem-solving and factual knowledge. By deriving these interdependencies from the structure of the knowledge-based system itself, EXPECT supports more flexible and powerful knowledge acquisition.

  14. Molecular basis of angiosperm tree architecture.

    PubMed

    Hollender, Courtney A; Dardick, Chris

    2015-04-01

    The architecture of trees greatly impacts the productivity of orchards and forestry plantations. Amassing greater knowledge on the molecular genetics that underlie tree form can benefit these industries, as well as contribute to basic knowledge of plant developmental biology. This review describes the fundamental components of branch architecture, a prominent aspect of tree structure, as well as genetic and hormonal influences inferred from studies in model plant systems and from trees with non-standard architectures. The bulk of the molecular and genetic data described here is from studies of fruit trees and poplar, as these species have been the primary subjects of investigation in this field of science. No claim to original US Government works. New Phytologist © 2014 New Phytologist Trust.

  15. Experiencing the "SPIRIT of PLACE" as a Design Task: the Street of Hamra in the Heart of Beirut

    NASA Astrophysics Data System (ADS)

    El-Khoury, N.

    2013-07-01

    The aim of this paper is to introduce a subject related to the concept of reuse of historic places and their contemporary architectural settings. In this research, we examine the particular case study of Hamra Street, located in the heart of Beirut, Lebanon. Hamra Street has a particular character, with a specific heritage and local spirit, and it is important to preserve this heritage value as part of any new architectural design intervention. To test our intention of strengthening this strong relation between the specific heritage and any new architectural intervention, we used a method that relies on students' experiments in a design studio. In this experimental approach, the students of the architecture and design school of the Lebanese American University were asked, as a constraint on their concept, to integrate the part of the heritage that characterizes Hamra, the "spirit of place", into their design while using information and communication technologies (ICTs). As a first step, they had to become familiar with the concept of heritage in its general meaning; then they studied the history and heritage of Hamra and its urban fabric. They had to observe and collect data and then focus on a main idea that would define the concept of their architectural design. As a second step, results were presented and compared. After analyzing the results, this experimental research showed that it is important to consider the "spirit of place" while designing a space.

  16. A flexible software architecture for scalable real-time image and video processing applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in hardware platforms make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse, and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
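The topic-based publish/subscribe routing in the messaging layer can be sketched in a few lines (a minimal illustration only, not the authors' messaging layer; all names are assumptions): publishers push messages to a topic, and the broker delivers them solely to callbacks subscribed to that topic.

```python
from collections import defaultdict
from typing import Any, Callable

class TopicBroker:
    """Minimal topic-based publish/subscribe router: a message published
    to a topic reaches only the callbacks subscribed to that topic."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        for callback in self._subscribers.get(topic, []):
            callback(message)

broker = TopicBroker()
received = []
broker.subscribe("frames/raw", received.append)  # e.g. a visualization module
broker.publish("frames/raw", "frame-0001")
broker.publish("stats/fps", 30)                  # no subscriber: dropped
assert received == ["frame-0001"]
```

Because modules interact only through topics, an acquisition module and a visualization module never reference each other directly, which is what makes the modules in the application layer independently reusable.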

  17. Developing a New Framework for Integration and Teaching of Computer Aided Architectural Design (CAAD) in Nigerian Schools of Architecture

    ERIC Educational Resources Information Center

    Uwakonye, Obioha; Alagbe, Oluwole; Oluwatayo, Adedapo; Alagbe, Taiye; Alalade, Gbenga

    2015-01-01

    As a result of globalization of digital technology, intellectual discourse on what constitutes the basic body of architectural knowledge to be imparted to future professionals has been on the increase. This digital revolution has brought to the fore the need to review the already overloaded architectural education curriculum of Nigerian schools of…

  18. Integrating planning, execution, and learning

    NASA Technical Reports Server (NTRS)

    Kuokka, Daniel R.

    1989-01-01

    To achieve the goal of building an autonomous agent, the usually disjoint capabilities of planning, execution, and learning must be used together. An architecture, called MAX, within which cognitive capabilities can be purposefully and intelligently integrated is described. The architecture supports the codification of capabilities as explicit knowledge that can be reasoned about. In addition, specific problem solving, learning, and integration knowledge is developed.

  19. The architecture of personality.

    PubMed

    Cervone, David

    2004-01-01

    This article presents a theoretical framework for analyzing psychological systems that contribute to the variability, consistency, and cross-situational coherence of personality functioning. In the proposed knowledge-and-appraisal personality architecture (KAPA), personality structures and processes are delineated by combining 2 principles: distinctions (a) between knowledge structures and appraisal processes and (b) among intentional cognitions with varying directions of fit, with the latter distinction differentiating among beliefs, evaluative standards, and aims. Basic principles of knowledge activation and use illuminate relations between knowledge and appraisal, yielding a synthetic account of personality structures and processes. Novel empirical data illustrate the heuristic value of the knowledge/appraisal distinction by showing how self-referent and situational knowledge combine to foster cross-situational coherence in appraisals of self-efficacy.

  20. Component-based integration of chemistry and optimization software.

    PubMed

    Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L

    2004-11-15

    Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.
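
    The key idea of component interchangeability through abstract interfaces can be illustrated with a toy sketch. This is not the Common Component Architecture API; `ModelEvaluator`, `QuadraticToy`, and `steepest_descent` are hypothetical stand-ins showing how an optimizer written against an abstract interface can use different chemistry backends interchangeably.

```python
from abc import ABC, abstractmethod

class ModelEvaluator(ABC):
    """Illustrative abstract interface: any component that can evaluate
    an energy and its gradient conforms to it (a real NWChem or MPQC
    component would sit behind the same interface)."""

    @abstractmethod
    def energy(self, coords): ...

    @abstractmethod
    def gradient(self, coords): ...

class QuadraticToy(ModelEvaluator):
    """Toy stand-in for a chemistry component: E(x) = sum(x_i^2)."""

    def energy(self, coords):
        return sum(x * x for x in coords)

    def gradient(self, coords):
        return [2 * x for x in coords]

def steepest_descent(model: ModelEvaluator, coords, step=0.1, iters=50):
    """A generic optimizer that only talks to the abstract interface,
    so the backend package is interchangeable."""
    for _ in range(iters):
        g = model.gradient(coords)
        coords = [x - step * gx for x, gx in zip(coords, g)]
    return coords

opt = steepest_descent(QuadraticToy(), [1.0, -2.0])  # converges toward the origin
```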

  1. Integration of robotic resources into FORCEnet

    NASA Astrophysics Data System (ADS)

    Nguyen, Chinh; Carroll, Daniel; Nguyen, Hoa

    2006-05-01

    The Networked Intelligence, Surveillance, and Reconnaissance (NISR) project integrates robotic resources into Composeable FORCEnet to control and exploit unmanned systems over extremely long distances. The foundations are built upon FORCEnet-the U.S. Navy's process to define C4ISR for net-centric operations-and the Navy Unmanned Systems Common Control Roadmap to develop technologies and standards for interoperability, data sharing, publish-and-subscribe methodology, and software reuse. The paper defines the goals and boundaries for NISR with focus on the system architecture, including the design tradeoffs necessary for unmanned systems in a net-centric model. Special attention is given to two specific scenarios demonstrating the integration of unmanned ground and water surface vehicles into the open-architecture web-based command-and-control information-management system of Composeable FORCEnet. Planned spiral development for NISR will improve collaborative control, expand robotic sensor capabilities, address multiple domains including underwater and aerial platforms, and extend distributive communications infrastructure for battlespace optimization for unmanned systems in net-centric operations.

  2. Application of service oriented architecture for sensors and actuators in district heating substations.

    PubMed

    Gustafsson, Jonas; Kyusakov, Rumen; Mäkitaavola, Henrik; Delsing, Jerker

    2014-08-21

    Hardwired sensor installations using proprietary protocols found in today's district heating substations limit the potential usability of the sensors in and around the substations. If sensor resources can be shared and re-used in a variety of applications, the cost of sensors and installation can be reduced, and their functionality and operability can be increased. In this paper, we present a new concept of district heating substation control and monitoring, where a service oriented architecture (SOA) is deployed in a wireless sensor network (WSN), which is integrated with the substation. IP-networking is exclusively used from sensor to server; hence, no middleware is needed for Internet integration. Further, by enabling thousands of sensors with SOA capabilities, a System of Systems approach can be applied. The results of this paper show that it is possible to utilize SOA solutions with heavily resource-constrained embedded devices in contexts where the real-time constraints are limited, such as in a district heating substation.

  3. NELS 2.0 - A general system for enterprise wide information management

    NASA Technical Reports Server (NTRS)

    Smith, Stephanie L.

    1993-01-01

    NELS, the NASA Electronic Library System, is an information management tool for creating distributed repositories of documents, drawings, and code for use and reuse by the aerospace community. The NELS retrieval engine can load metadata and source files of full-text objects, perform natural language queries to retrieve ranked objects, and create links to connect user interfaces. For flexibility, the NELS architecture has layered interfaces between the application program and the stored library information. The session manager provides the interface functions for the development of NELS applications. The data manager is an interface between the session manager and the structured data system. At the center of the structured data system is the Wide Area Information Server. This system architecture provides access to information across heterogeneous platforms in a distributed environment. There are presently three user interfaces that connect to the NELS engine: an X-Windows interface, an ASCII interface, and the Spatial Data Management System. This paper describes the design and operation of NELS as an information management tool and repository.

  4. Application of Service Oriented Architecture for Sensors and Actuators in District Heating Substations

    PubMed Central

    Gustafsson, Jonas; Kyusakov, Rumen; Mäkitaavola, Henrik; Delsing, Jerker

    2014-01-01

    Hardwired sensor installations using proprietary protocols found in today's district heating substations limit the potential usability of the sensors in and around the substations. If sensor resources can be shared and re-used in a variety of applications, the cost of sensors and installation can be reduced, and their functionality and operability can be increased. In this paper, we present a new concept of district heating substation control and monitoring, where a service oriented architecture (SOA) is deployed in a wireless sensor network (WSN), which is integrated with the substation. IP-networking is exclusively used from sensor to server; hence, no middleware is needed for Internet integration. Further, by enabling thousands of sensors with SOA capabilities, a System of Systems approach can be applied. The results of this paper show that it is possible to utilize SOA solutions with heavily resource-constrained embedded devices in contexts where the real-time constraints are limited, such as in a district heating substation. PMID:25196165

  5. Integrating reasoning and clinical archetypes using OWL ontologies and SWRL rules.

    PubMed

    Lezcano, Leonardo; Sicilia, Miguel-Angel; Rodríguez-Solano, Carlos

    2011-04-01

    Semantic interoperability is essential to facilitate computerized support for alerts, workflow management and evidence-based healthcare across heterogeneous electronic health record (EHR) systems. Clinical archetypes, which are formal definitions of specific clinical concepts defined as specializations of a generic reference (information) model, provide a mechanism to express data structures in a shared and interoperable way. However, currently available archetype languages do not provide direct support for mapping to formal ontologies and then exploiting reasoning on clinical knowledge, which are key ingredients of full semantic interoperability, as stated in the SemanticHEALTH report [1]. This paper reports on an approach to translate definitions expressed in the openEHR Archetype Definition Language (ADL) to a formal representation expressed using the Ontology Web Language (OWL). The formal representations are then integrated with rules expressed as Semantic Web Rule Language (SWRL) expressions, providing an approach to apply the SWRL rules to concrete instances of clinical data. Sharing the knowledge expressed in the form of rules is consistent with the philosophy of open sharing encouraged by archetypes. Our approach also allows the reuse of formal knowledge, expressed through ontologies, and extends reuse to propositions of declarative knowledge, such as those encoded in clinical guidelines. This paper presents the ADL-to-OWL translation approach, describes the techniques to map archetypes to formal ontologies, and demonstrates how rules can be applied to the resulting representation. We provide examples taken from a patient safety alerting system to illustrate our approach.

  6. A digital protection system incorporating knowledge based learning

    NASA Astrophysics Data System (ADS)

    Watson, Karan; Russell, B. Don; McCall, Kurt

    A digital system architecture used to diagnose the operating state and health of electric distribution lines and to generate actions for line protection is presented. The architecture is described functionally and, to a limited extent, at the hardware level. This architecture incorporates multiple analysis and fault-detection techniques utilizing a variety of parameters. In addition, a knowledge-based decision maker, a long-term memory retention and recall scheme, and a learning environment are described. Preliminary laboratory implementations of the system elements have been completed. The system provides enhanced protection for electric distribution feeders. Advantages of the system are enumerated.

  7. Harnessing the Risk-Related Data Supply Chain: An Information Architecture Approach to Enriching Human System Research and Operations Knowledge

    NASA Technical Reports Server (NTRS)

    Buquo, Lynn E.; Johnson-Throop, Kathy A.

    2011-01-01

    An Information Architecture facilitates the understanding, and hence the harnessing, of the human system risk-related data supply chain, enhancing the ability to securely collect, integrate, and share data assets that improve human system research and operations. By mapping the risk-related data flow from raw data to usable information and knowledge (think of it as a data supply chain), the Human Research Program (HRP) and Space Life Science Directorate (SLSD) are building an information architecture plan to leverage their existing, and often shared, IT infrastructure.

  8. Advancing indigent healthcare services through adaptive reuse: repurposing abandoned buildings as medical clinics for disadvantaged populations.

    PubMed

    Elrod, James K; Fortenberry, John L

    2017-12-13

    Challenges abound for healthcare providers engaged in initiatives directed toward disadvantaged populations, with financial constraints representing one of the most prominent hardships. Society's less fortunate typically lack the means to pay for healthcare services and even when they are covered by government health insurance programs, reimbursement shortcomings often occur, placing funding burdens on the shoulders of establishments dedicated to serving those of limited means. For such charitably-minded organizations, efficiencies are required on all fronts, including one which involves significant operational costs: the physical space required for care provision. Newly constructed buildings, whether owned or leased, are expensive, consuming a significant percentage of funds that otherwise could be directed toward patient care. Such costs can even prohibit the delivery of services to indigent populations altogether. But through adaptive reuse-the practice of repurposing existing, abandoned buildings, placing them back into service in pursuit of new missions-opportunities exist to economize on this front, allowing healthcare providers to acquire operational space at a discount. In an effort to shore up related knowledge, this article profiles Willis-Knighton Health System's development of Project NeighborHealth, an indigent clinic network which was significantly bolstered by the economies associated with adaptive reuse. Despite its potential to bolster healthcare initiatives directed toward the medically underserved by presenting more affordable options for acquiring operational space, adaptive reuse remains relatively obscure, diminishing opportunities for providers to take advantage of its many benefits. By shedding light on this repurposing approach, healthcare providers will have a better understanding of adaptive reuse, enabling them to make use of the practice to improve the depth and breadth of healthcare services available to disadvantaged populations.

  9. Arranging ISO 13606 archetypes into a knowledge base.

    PubMed

    Kopanitsa, Georgy

    2014-01-01

    To enable the efficient reuse of standards-based medical data, we propose to develop a higher-level information model that will complement the archetype model of ISO 13606. This model will make use of the relationships that are specified in UML to connect medical archetypes into a knowledge base within a repository. UML connectors were analyzed for their ability to be applied in the implementation of a higher-level model that establishes relationships between archetypes. An information model was developed using XML Schema notation. The model allows linking different archetypes of one repository into a knowledge base. Presently it supports several relationships and will be extended in the future.

  10. A conceptual cognitive architecture for robots to learn behaviors from demonstrations in robotic aid area.

    PubMed

    Tan, Huan; Liang, Chen

    2011-01-01

    This paper proposes a conceptual hybrid cognitive architecture for cognitive robots to learn behaviors from demonstrations in robotic aid situations. Unlike current cognitive architectures, this architecture concentrates on the requirements of safety, interaction, and non-centralized processing in robotic aid situations. Imitation learning technologies for cognitive robots have been integrated into this architecture for rapidly transferring knowledge and skills between human teachers and robots.

  11. Waste Management Using Request-Based Virtual Organizations

    NASA Astrophysics Data System (ADS)

    Katriou, Stamatia Ann; Fragidis, Garyfallos; Ignatiadis, Ioannis; Tolias, Evangelos; Koumpis, Adamantios

    Waste management is at the top of the political agenda globally as a high-priority environmental issue, with billions spent on it each year. This paper proposes an approach for the disposal, transportation, recycling and reuse of waste. The approach incorporates the notion of Request-Based Virtual Organizations (RBVOs), using a Service Oriented Architecture (SOA) and an ontology that serves the definition of waste management requirements. The populated ontology is utilized by a Multi-Agent System which performs negotiations and forms RBVOs. The proposed approach could be used by governments and companies searching for a means to perform such activities effectively and efficiently.
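
    The abstract does not publish its negotiation protocol, so the following is a heavily simplified, hypothetical sketch of RBVO formation: provider agents bid on each service in a waste-management request, and the lowest bidder per service joins the virtual organization. `ProviderAgent`, `form_rbvo`, and lowest-bid selection are illustrative assumptions only.

```python
class ProviderAgent:
    """Toy service-provider agent with a single service and a fixed price."""

    def __init__(self, name, service, price):
        self.name, self.service, self.price = name, service, price

    def bid(self, required_service):
        # An agent only bids for services it can actually provide.
        return self.price if self.service == required_service else None

def form_rbvo(request, agents):
    """For each service in the request, pick the lowest bidder
    (a stand-in for a richer multi-agent negotiation)."""
    rbvo = {}
    for service in request:
        bids = [(a.bid(service), a.name)
                for a in agents if a.bid(service) is not None]
        if bids:
            rbvo[service] = min(bids)[1]  # cheapest bid wins
    return rbvo

agents = [
    ProviderAgent("HaulCo", "transport", 120),
    ProviderAgent("EcoMove", "transport", 95),
    ProviderAgent("ReCyc", "recycling", 200),
]
org = form_rbvo(["transport", "recycling"], agents)
# org == {"transport": "EcoMove", "recycling": "ReCyc"}
```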

  12. Developing a New Thesaurus for Art and Architecture.

    ERIC Educational Resources Information Center

    Petersen, Toni

    1990-01-01

    This description of the development of the Art and Architecture Thesaurus from 1979 to the present explains the processes and policies that were used to construct a language designed to represent knowledge in art and architecture, as well as to be a surrogate for the image and object being described. (EAM)

  13. The Architecture of "Educare": Motion and Emotion in Postwar Educational Spaces

    ERIC Educational Resources Information Center

    Kozlovsky, Roy

    2010-01-01

    This essay explores the interplay between educational and architectural methodologies for analysing the school environment. It historicises the affinity between architectural and educational practices and modes of knowledge pertaining to the child's body during the period of postwar reconstruction in England to argue that educational spaces were…

  14. Frances: A Tool for Understanding Computer Architecture and Assembly Language

    ERIC Educational Resources Information Center

    Sondag, Tyler; Pokorny, Kian L.; Rajan, Hridesh

    2012-01-01

    Students in all areas of computing require knowledge of the computing device including software implementation at the machine level. Several courses in computer science curricula address these low-level details such as computer architecture and assembly languages. For such courses, there are advantages to studying real architectures instead of…

  15. Knowledge management in the engineering design environment

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2006-01-01

    The Aerospace and Defense industry is experiencing an increasing loss of knowledge through workforce reductions associated with business consolidation and retirement of senior personnel. Significant effort is being placed on process definition as part of ISO certification and, more recently, CMMI certification. The process knowledge in these efforts represents the simplest of engineering knowledge and many organizations are trying to get senior engineers to write more significant guidelines, best practices and design manuals. A new generation of design software, known as Product Lifecycle Management systems, has many mechanisms for capturing and deploying a wider variety of engineering knowledge than simple process definitions. These hold the promise of significant improvements through reuse of prior designs, codification of practices in workflows, and placement of detailed how-tos at the point of application.

  16. Design and Multicentric Implementation of a Generic Software Architecture for Patient Recruitment Systems Re-Using Existing HIS Tools and Routine Patient Data

    PubMed Central

    Trinczek, B.; Köpcke, F.; Leusch, T.; Majeed, R.W.; Schreiweis, B.; Wenk, J.; Bergh, B.; Ohmann, C.; Röhrig, R.; Prokosch, H.U.; Dugas, M.

    2014-01-01

    Objective: (1) To define features and data items of a Patient Recruitment System (PRS); (2) to design a generic software architecture of such a system covering the requirements; (3) to identify implementation options available within different Hospital Information System (HIS) environments; (4) to implement five PRS following the architecture and utilizing the implementation options as proof of concept. Methods: Existing PRS were reviewed and interviews with users and developers conducted. All reported PRS features were collected and prioritized according to their published success and users' requests. Common feature sets were combined into software modules of a generic software architecture. Data items to process and transfer were identified for each of the modules. Each site collected implementation options available within their respective HIS environment for each module, provided a prototypical implementation based on available implementation possibilities, and supported the patient recruitment of a clinical trial as a proof of concept. Results: 24 commonly reported and requested features of a PRS were identified, 13 of them prioritized as mandatory. A UML version 2 based software architecture containing 5 software modules covering these features was developed. 13 data item groups processed by the modules, and thus required to be available electronically, have been identified. Several implementation options could be identified for each module, most of them available at multiple sites. Utilizing available tools, a PRS could be implemented in each of the five participating German university hospitals. Conclusion: A set of required features and data items of a PRS has been described for the first time. The software architecture covers all features in a clear, well-defined way. The variety of implementation options and the prototypes show that it is possible to implement the given architecture in different HIS environments, thus enabling more sites to successfully support patient recruitment in clinical trials. PMID:24734138

  17. Design and multicentric implementation of a generic software architecture for patient recruitment systems re-using existing HIS tools and routine patient data.

    PubMed

    Trinczek, B; Köpcke, F; Leusch, T; Majeed, R W; Schreiweis, B; Wenk, J; Bergh, B; Ohmann, C; Röhrig, R; Prokosch, H U; Dugas, M

    2014-01-01

    (1) To define features and data items of a Patient Recruitment System (PRS); (2) to design a generic software architecture of such a system covering the requirements; (3) to identify implementation options available within different Hospital Information System (HIS) environments; (4) to implement five PRS following the architecture and utilizing the implementation options as proof of concept. Existing PRS were reviewed and interviews with users and developers conducted. All reported PRS features were collected and prioritized according to their published success and user's request. Common feature sets were combined into software modules of a generic software architecture. Data items to process and transfer were identified for each of the modules. Each site collected implementation options available within their respective HIS environment for each module, provided a prototypical implementation based on available implementation possibilities and supported the patient recruitment of a clinical trial as a proof of concept. 24 commonly reported and requested features of a PRS were identified, 13 of them prioritized as being mandatory. A UML version 2 based software architecture containing 5 software modules covering these features was developed. 13 data item groups processed by the modules, thus required to be available electronically, have been identified. Several implementation options could be identified for each module, most of them being available at multiple sites. Utilizing available tools, a PRS could be implemented in each of the five participating German university hospitals. A set of required features and data items of a PRS has been described for the first time. The software architecture covers all features in a clear, well-defined way. 
The variety of implementation options and the prototypes show that it is possible to implement the given architecture in different HIS environments, thus enabling more sites to successfully support patient recruitment in clinical trials.

  18. Knowledge-Based Systems Research

    DTIC Science & Technology

    1990-08-24

    Rosenbloom, P. S., Laird, J. E., Newell, A., and McCarl, R. 1991. A Preliminary Analysis of the SOAR Architecture as a Basis for General Intelligence. Artificial Intelligence. Keywords: Artificial Intelligence, Blackboard Systems, Constraint Satisfaction, knowledge acquisition, symbolic simulation, logic-based systems with self-awareness, SOAR, an architecture for general intelligence and learning.

  19. GALEN: a third generation terminology tool to support a multipurpose national coding system for surgical procedures.

    PubMed

    Trombert-Paviot, B; Rodrigues, J M; Rogers, J E; Baud, R; van der Haring, E; Rassinoux, A M; Abrial, V; Clavel, L; Idir, H

    2000-09-01

    Generalised architecture for languages, encyclopedia and nomenclatures in medicine (GALEN) has developed a new generation of terminology tools based on a language-independent model describing the semantics, allowing computer processing and multiple reuse as well as natural language understanding applications, to facilitate the sharing and maintenance of consistent medical knowledge. During the European Union 4th Framework Programme project GALEN-IN-USE, and later within two contracts with the national health authorities, we applied the modelling and the tools to the development of a new multipurpose coding system for surgical procedures, named CCAM, in a minority-language country, France. On one hand, we contributed to a language-independent knowledge repository and multilingual semantic dictionaries for multicultural Europe. On the other hand, we support the traditional, highly labour-intensive process of creating a new medical coding system with artificial intelligence tools that use a medically oriented recursive ontology and natural language processing. We used an integrated software suite named CLAW (for classification workbench) to process French professional medical language rubrics, produced by the national colleges of surgeons as domain experts, into intermediate dissections and then into the GRAIL reference ontology model representation. From this language-independent concept model representation, on one hand, we generate controlled French natural language with the LNAT natural language generator to support the finalization of the linguistic labels (first generation) in relation to the meanings of the conceptual system structure. On the other hand, the CLAW classification manager proves very powerful for retrieving the initial domain experts' rubrics list with different categories of concepts (second generation) within a semantic structured representation (third generation), a bridge to the electronic patient record's detailed terminology.

  20. On knowledge transfer management as a learning process for ad hoc teams

    NASA Astrophysics Data System (ADS)

    Iliescu, D.

    2017-08-01

    Knowledge management represents an emerging domain of growing importance. Concepts like knowledge codification and personalisation, the knowledge life-cycle, social and technological dimensions, knowledge transfer and learning management are integral parts of it. The focus here is on the process of knowledge transfer in the case of ad hoc teams. The social dimension of knowledge transfer plays an important role: not a single individual actor is involved in the process, but a collective one, representing the organisation. It is critically important for knowledge to be managed from the life-cycle point of view. A complex communication network needs to be in place to support the process of knowledge transfer. Two particular concepts, the bridge tie and transactive memory, can eventually enhance this communication. The paper focuses on an informational communication platform supporting collaborative work on knowledge transfer. The platform facilitates the creation of a topic language to be used by the ad hoc teams in knowledge modelling, storage and reuse.

  1. A Meta-Cognitive Tool for Courseware Development, Maintenance, and Reuse

    ERIC Educational Resources Information Center

    Coffey, John W.

    2007-01-01

    Novak and Iuli [Novak, J. D. & Iuli, R. J. (1991). The use of meta-cognitive tools to facilitate knowledge production. In "A paper presented at the fourth Florida AI research symposium (FLAIRS '91)," Pensacola Beach, FL, May, 1991.] discuss the use of Concept Maps as meta-cognitive tools that help people to think about thinking. This work…

  2. Knowledge-based processing for aircraft flight control

    NASA Technical Reports Server (NTRS)

    Painter, John H.; Glass, Emily; Economides, Gregory; Russell, Paul

    1994-01-01

    This Contractor Report documents research in Intelligent Control using knowledge-based processing in a manner dual to methods found in the classic stochastic decision, estimation, and control discipline. Such knowledge-based control has also been called Declarative and Hybrid. Software architectures were sought, employing the parallelism inherent in modern object-oriented modeling and programming. The viewpoint adopted was that Intelligent Control employs a class of domain-specific software architectures having features common over a broad variety of implementations, such as management of aircraft flight, power distribution, etc. As much attention was paid to software engineering issues as to artificial intelligence and control issues. This research considered particular processing methods from the stochastic and knowledge-based worlds to be duals, that is, similar in a broad context. They provide architectural design concepts which serve as bridges between the disparate disciplines of decision, estimation, control, and artificial intelligence. This research was applied to the control of a subsonic transport aircraft in the airport terminal area.

  3. A mini review on the integration of resource recovery from wastewater into sustainability of the green building through phycoremediation

    NASA Astrophysics Data System (ADS)

    Yulistyorini, Anie

    2017-09-01

    Green building implementation is an important assessment area for sustainable development aimed at establishing a good quality environment. To advance future green building implementation, resource recovery from building wastewater is significantly important to consider as part of green building development. Discharge of urban wastewater into water bodies triggers eutrophication in the water catchment; accordingly, further treatment is needed to recover the nutrients before the water is reused or discharged into receiving water bodies. In this regard, the integration of microalgae cultivation in a closed photobioreactor as a building façade is critically important to consider in the implementation of the green building. Microalgae offer multiple functions: bioremediation (phycoremediation) of the wastewater, production of biofuels, and important algal bio-products. At the same time, an algae façade boosts the reduction of operating costs in the forms of light and thermal energy, and adds benefits to the building in terms of energy reduction and architectural function. It promises an environmental benefit that supports the green building spirit through nutrient recovery and wastewater reuse for algae cultivation, and enhances the aesthetics of the building façade.

  4. An approach to define semantics for BPM systems interoperability

    NASA Astrophysics Data System (ADS)

    Rico, Mariela; Caliusco, María Laura; Chiotti, Omar; Rosa Galli, María

    2015-04-01

    This article proposes defining semantics for Business Process Management systems interoperability through an ontology of the Electronic Business Documents (EBD) used to interchange the information required to perform cross-organizational processes. The semantic model generated allows aligning an enterprise's business processes to support cross-organizational processes by matching the business ontology of each business partner with the EBD ontology. The result is a flexible software architecture that allows dynamically defining cross-organizational business processes by reusing the EBD ontology. For developing the semantic model, a method is presented which is based on a strategy for discovering entity features whose interpretation depends on the context, and representing them to enrich the ontology. The proposed method complements ontology learning techniques that cannot infer semantic features not represented in the data sources. In order to improve the representation of these entity features, the method proposes using widely accepted ontologies for representing time entities and relations, physical quantities, measurement units, official country names, and currencies and funds, among others. When ontology reuse is not possible, the method proposes identifying whether the feature is simple or complex and defines a strategy to be followed. An empirical validation of the approach has been performed through a case study.

  5. A Coupled Simulation Architecture for Agent-Based/Geohydrological Modelling

    NASA Astrophysics Data System (ADS)

    Jaxa-Rozen, M.

    2016-12-01

    The quantitative modelling of social-ecological systems can provide useful insights into the interplay between social and environmental processes, and their impact on emergent system dynamics. However, such models should acknowledge the complexity and uncertainty of both of the underlying subsystems. For instance, the agent-based models which are increasingly popular for groundwater management studies can be made more useful by directly accounting for the hydrological processes which drive environmental outcomes. Conversely, conventional environmental models can benefit from an agent-based depiction of the feedbacks and heuristics which influence the decisions of groundwater users. From this perspective, this work describes a Python-based software architecture which couples the popular NetLogo agent-based platform with the MODFLOW/SEAWAT geohydrological modelling environment. This approach enables users to implement agent-based models in NetLogo's user-friendly platform, while benefiting from the full capabilities of MODFLOW/SEAWAT packages or reusing existing geohydrological models. The software architecture is based on the pyNetLogo connector, which provides an interface between the NetLogo agent-based modelling software and the Python programming language. This functionality is then extended and combined with Python's object-oriented features, to design a simulation architecture which couples NetLogo with MODFLOW/SEAWAT through the FloPy library (Bakker et al., 2016). The Python programming language also provides access to a range of external packages which can be used for testing and analysing the coupled models, which is illustrated for an application of Aquifer Thermal Energy Storage (ATES).
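    A minimal stand-in for such a coupled architecture can show the shape of the simulation loop: an agent layer decides pumping from the current head, and a groundwater layer advances one step with that pumping. All classes, parameters, and dynamics below are invented toy placeholders; the real system couples NetLogo (via pyNetLogo) with MODFLOW/SEAWAT (via FloPy).

```python
# Toy coupling loop: agent decisions and aquifer state are exchanged
# once per step. Every name and number here is a placeholder.

class AgentModel:
    """Stands in for a NetLogo model of groundwater users."""
    def __init__(self, demand):
        self.demand = demand
    def decide_pumping(self, head):
        # Agents cut back when the water table drops to a threshold.
        return self.demand if head > 5.0 else self.demand * 0.5

class GroundwaterModel:
    """Stands in for a MODFLOW/SEAWAT run over one stress period."""
    def __init__(self, head, recharge):
        self.head = head
        self.recharge = recharge
    def step(self, pumping):
        self.head += self.recharge - 0.1 * pumping
        return self.head

def couple(agents, aquifer, steps):
    """The coupler alternates between the two models each step."""
    heads = []
    for _ in range(steps):
        pumping = agents.decide_pumping(aquifer.head)
        heads.append(aquifer.step(pumping))
    return heads

heads = couple(AgentModel(demand=20.0),
               GroundwaterModel(head=10.0, recharge=1.0), steps=5)
print([round(h, 2) for h in heads])
```

The design point the abstract makes is visible even in the toy: each model keeps its own state and formalism, and only a narrow interface (pumping rates out, heads back) crosses the boundary.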

  6. New Methodologies for the Documentation of Fortified Architecture in the State of Ruins

    NASA Astrophysics Data System (ADS)

    Fallavollita, F.; Ugolini, A.

    2017-05-01

    Fortresses and castles are important symbols of social and cultural identity, providing tangible evidence of cultural unity in Europe. They are items for which it is always difficult to outline a credible prospect of reuse, their old raison d'être (namely the military, political and economic purposes for which they were built) having been lost. In recent years a Research Unit of the University of Bologna composed of architects from different disciplines has conducted a series of studies on fortified heritage in the Emilia Romagna region (and beyond), often characterized by buildings in ruins. The purpose of this study is mainly to document a legacy which has already been studied in depth by historians but previously lacked reliable architectural surveys for the definition of a credible as well as sustainable conservation project. Our contribution focuses on the different techniques and methods used for the survey of these architectures, whose characteristics have in the past made an effective survey of these buildings difficult, if not impossible. The survey of a ruin requires, much more than that of an intact building, the ability to read and interpret architectural spaces in order to better manage the stages of documentation and data processing. Through a series of case studies of fortified buildings in ruins, we describe the reasons that guided the choice of the methods and tools used, and highlight the potentials and the limits of these choices in financial terms.

  7. Systems biology driven software design for the research enterprise.

    PubMed

    Boyle, John; Cavnor, Christopher; Killcoyne, Sarah; Shmulevich, Ilya

    2008-06-25

    In systems biology, and many other areas of research, there is a need for the interoperability of tools and data sources that were not originally designed to be integrated. Due to the interdisciplinary nature of systems biology, and its association with high-throughput experimental platforms, there is an additional need to continually integrate new technologies. As scientists work in isolated groups, integration with other groups is rarely a consideration when building the required software tools. We illustrate an approach, through the discussion of a purpose-built software architecture, which allows disparate groups to reuse tools and access data sources in a common manner. The architecture allows for: the rapid development of distributed applications; interoperability, so it can be used by a wide variety of developers and computational biologists; development using standard tools, so that it is easy to maintain and does not require a large development effort; extensibility, so that new technologies and data types can be incorporated; and non-intrusive development, insofar as researchers need not adhere to a pre-existing object model. By using a relatively simple integration strategy, based upon a common identity system and dynamically discovered interoperable services, a lightweight software architecture can become the focal point through which scientists can both access and analyse the plethora of experimentally derived data.
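    The integration strategy named above (a common identity system plus dynamically discovered interoperable services) might be sketched as follows; the registry, identifier scheme, and services are hypothetical, not the project's actual API.

```python
# Illustrative sketch: services register against a data type and are
# discovered at run time; a shared identifier scheme lets independent
# tools refer to the same entity. All names are invented.

class ServiceRegistry:
    def __init__(self):
        self._services = {}
    def register(self, data_type, service):
        self._services.setdefault(data_type, []).append(service)
    def discover(self, data_type):
        return self._services.get(data_type, [])

def common_id(namespace, local_id):
    """A common identity system spanning isolated groups."""
    return f"{namespace}:{local_id}"

registry = ServiceRegistry()
registry.register("microarray", lambda uid: f"normalized({uid})")
registry.register("microarray", lambda uid: f"clustered({uid})")

uid = common_id("lab-A", "sample-42")
results = [svc(uid) for svc in registry.discover("microarray")]
print(results)
```

The point of such a design is that a new group's tool only needs to register itself and honour the identity scheme; it does not have to adopt anyone else's object model.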

  8. CUDA Optimization Strategies for Compute- and Memory-Bound Neuroimaging Algorithms

    PubMed Central

    Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W.

    2011-01-01

    As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, register pressure is reduced through variable reuse via shared memory, and data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved by reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance is optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. PMID:21159404
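    The memory-bound strategy (reorganize data into self-contained structures and process them in multiple passes over a limited fast memory) can be imitated on the CPU; in this sketch the tile size stands in for shared-memory capacity and the per-tile kernel is a trivial doubling placeholder, not one of the paper's neuroimaging kernels.

```python
# CPU analogue of the multi-pass tiling idea: data that exceeds the
# fast-memory budget is split into self-contained tiles, each processed
# in its own pass. Tile size and kernel are placeholders.

def multipass(data, tile, kernel):
    """Process `data` in tiles of at most `tile` elements."""
    out = []
    for start in range(0, len(data), tile):
        chunk = data[start:start + tile]   # self-contained structure
        out.extend(kernel(chunk))          # one pass over one tile
    return out

def scale(chunk):
    # Stand-in for the real per-element computation.
    return [2 * x for x in chunk]

print(multipass(list(range(10)), tile=4, kernel=scale))
```

On a GPU the payoff comes from each tile fitting in shared memory; the structure of the loop is the same.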

  9. Modular Rocket Engine Control Software (MRECS)

    NASA Technical Reports Server (NTRS)

    Tarrant, C.; Crook, J.

    1998-01-01

    The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state-of-the-art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for advanced engine control systems that will result in lower software maintenance (operations) costs. It effectively accommodates software requirement changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives, benefits, and status of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, software reuse, and reduced software reverification time after software changes. MRECS was recently modified to support a Space Shuttle Main Engine (SSME) hot-fire test. Cold Flow and Flight Readiness Testing were completed before the test was cancelled. Currently, the program is focused on supporting NASA MSFC in accomplishing development testing of the Fastrac Engine, part of NASA's Low Cost Technologies (LCT) Program. MRECS will be used for all engine development testing.

  10. CUDA optimization strategies for compute- and memory-bound neuroimaging algorithms.

    PubMed

    Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W

    2012-06-01

    As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, register pressure is reduced through variable reuse via shared memory, and data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved by reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance is optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  11. Integrating clinical research with the Healthcare Enterprise: from the RE-USE project to the EHR4CR platform.

    PubMed

    El Fadly, AbdenNaji; Rance, Bastien; Lucas, Noël; Mead, Charles; Chatellier, Gilles; Lastic, Pierre-Yves; Jaulent, Marie-Christine; Daniel, Christel

    2011-12-01

    There are different approaches for repurposing clinical data collected in the Electronic Healthcare Record (EHR) for use in clinical research. Semantic integration of "siloed" applications across domain boundaries is the raison d'être of the standards-based profiles developed by the Integrating the Healthcare Enterprise (IHE) initiative - an initiative by healthcare professionals and industry promoting the coordinated use of established standards such as DICOM and HL7 to address specific clinical needs in support of optimal patient care. In particular, the combination of two IHE profiles - the integration profile "Retrieve Form for Data Capture" (RFD), and the IHE content profile "Clinical Research Document" (CRD) - offers a straightforward approach to repurposing EHR data by enabling the pre-population of the case report forms (eCRF) used for clinical research data capture by Clinical Data Management Systems (CDMS) with previously collected EHR data. The objective was to implement an alternative solution to the RFD-CRD integration profile, centered on two approaches: (i) use of the EHR as the single-source data-entry and persistence point, in order to ensure that all the clinical data for a given patient can be found in a single source irrespective of the data collection context, i.e. patient care or clinical research; and (ii) maximization of the automatic pre-population process through the use of semantic interoperability services that identify duplicate or semantically equivalent eCRF/EHR data elements as they are collected in the EHR context. The RE-USE architecture and associated profiles are focused on defining a set of scalable, standards-based, IHE-compliant profiles that can enable single-source data collection/entry and cross-system data reuse through semantic integration.
    Specifically, data reuse is realized through the semantic mapping of data collection fields in electronic Case Report Forms (eCRFs) to data elements previously defined as part of patient care-centric templates in the EHR context. The approach was evaluated in the context of a multi-center clinical trial conducted in a large, multi-disciplinary hospital with an installed EHR. Data elements of seven eCRFs used in a multi-center clinical trial were mapped to data elements of patient care-centric templates in use in the EHR at the George Pompidou hospital. 13.4% of the data elements of the eCRFs were found to be represented in EHR templates and were therefore candidates for pre-population. During the execution phase of the clinical study, the semantic mapping architecture enabled data persisted in the EHR context as part of clinical care to be used to pre-populate eCRFs without secondary data entry. To ensure that the pre-populated data are viable for use in the clinical research context, all pre-populated eCRF data must first be approved by a trial investigator prior to being persisted in a research data store within a CDMS. Single-source data entry in the clinical care context for reuse in the clinical research context is enabled by using the EHR as the single point of data entry. If demonstrated to be a viable strategy, it can significantly reduce data collection effort while simultaneously increasing data collection accuracy, by eliminating transcription and double-entry errors between the two contexts, and it ensures that all the clinical data for a given patient, irrespective of the data collection context, are available in the EHR for decision support and treatment planning. The RE-USE approach used mapping algorithms to identify semantic coherence between clinical care and clinical research data elements and to pre-populate eCRFs.
The RE-USE project utilized SNOMED International v.3.5 as its "pivot reference terminology" to support EHR-to-eCRF mapping, a decision that likely enhanced the "recall" of the mapping algorithms. The RE-USE results demonstrate the difficult challenges involved in semantic integration between the clinical care and clinical research contexts. Copyright © 2011 Elsevier Inc. All rights reserved.
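    The pre-population step can be sketched as a lookup through a semantic mapping: eCRF fields whose data elements map to EHR template elements are filled from the record, and unmapped fields are left for manual entry and investigator review. Field names and values below are invented, not actual SNOMED or RE-USE content.

```python
# Hypothetical sketch of eCRF pre-population from an EHR record via a
# previously established semantic mapping. All identifiers are invented.

ecrf_to_ehr = {                          # mapping between data elements
    "crf_weight_kg": "ehr:body_weight",
    "crf_systolic_bp": "ehr:systolic_bp",
    "crf_smoking_status": None,          # no equivalent EHR element
}

ehr_record = {"ehr:body_weight": 72.5, "ehr:systolic_bp": 118}

def prepopulate(mapping, record):
    form, pending = {}, []
    for field, element in mapping.items():
        if element is not None and element in record:
            form[field] = record[element]   # candidate, needs approval
        else:
            pending.append(field)           # left for manual data entry
    return form, pending

form, pending = prepopulate(ecrf_to_ehr, ehr_record)
print(form, pending)
```

In the RE-USE workflow the pre-filled values would still pass through investigator approval before entering the research data store.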

  12. The organization and contribution of helicases to RNA splicing.

    PubMed

    De, Inessa; Schmitzová, Jana; Pena, Vladimir

    2016-01-01

    Splicing is an essential step of gene expression. It occurs in two consecutive chemical reactions catalyzed by a large protein-RNA complex named the spliceosome. Assembled on the pre-mRNA substrate from five small nuclear ribonucleoprotein particles (snRNPs), the spliceosome acts as a protein-controlled ribozyme to catalyze the two reactions and finally dissociates into its components, which are reused for a new round of splicing. As it follows this cyclic pathway, the spliceosome passes through numerous intermediate stages that differ in composition as well as in their internal RNA-RNA and RNA-protein contacts. The driving forces and control mechanisms of these remodeling processes are provided by specific molecular motors called RNA helicases. While eight spliceosomal helicases are present in all organisms, higher eukaryotes contain five additional ones potentially required to drive a more intricate splicing pathway and link it to an RNA metabolism of increasing complexity. Spliceosomal helicases exhibit a notable structural diversity in their accessory domains and overall architecture, in accordance with the diversity of their task-specific functions. This review summarizes structure-function knowledge about all spliceosomal helicases, including the latter five, which are traditionally treated separately from the conserved ones. The implications of the structural characteristics of helicases for their functions, as well as for their structural communication within the multi-subunit environment of the spliceosome, are pointed out. © 2016 Wiley Periodicals, Inc.

  13. A Tailored Ontology Supporting Sensor Implementation for the Maintenance of Industrial Machines.

    PubMed

    Maleki, Elaheh; Belkadi, Farouk; Ritou, Mathieu; Bernard, Alain

    2017-09-08

    The long-term productivity of an industrial machine is improved by condition-based maintenance strategies. To this end, the integration of sensors and other cyber-physical devices is necessary in order to capture and analyze a machine's condition through its lifespan. Thus, choosing the best sensor is a critical step to ensure the efficiency of the maintenance process. Indeed, considering the variety of sensors and their features and performance, a formal classification of sensor domain knowledge is crucial. This classification facilitates the search for and reuse of solutions during the design of a new maintenance service. Following a Knowledge Management methodology, the paper proposes and develops a new sensor ontology that structures the domain knowledge, covering both theoretical and experimental sensor attributes. An industrial case study is conducted to validate the proposed ontology and to demonstrate its utility as a guideline to ease the search for suitable sensors. Based on the ontology, the final solution will be implemented in a shared repository connected to legacy CAD (computer-aided design) systems. The selection of the best sensor is first obtained by matching application requirements with the sensor specifications proposed by this sensor repository, and then refined based on the experimental results. The achieved solution is recorded in the sensor repository for future reuse. As a result, the time and cost of designing new condition-based maintenance services are reduced.
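    The requirement-to-specification matching that the repository is described as performing could look roughly like this; the sensors, attributes, and thresholds are hypothetical examples, not entries from the proposed ontology.

```python
# Illustrative sketch: keep only the sensors whose specifications meet
# every application requirement. Catalogue contents are invented.

sensors = {
    "accelerometer-A": {"bandwidth_hz": 10000, "temp_max_c": 85, "cost": 40},
    "accelerometer-B": {"bandwidth_hz": 20000, "temp_max_c": 125, "cost": 90},
}

def candidates(requirements, catalogue):
    """Sensors whose every listed spec meets the required minimum."""
    return sorted(
        name for name, spec in catalogue.items()
        if all(spec[key] >= minimum for key, minimum in requirements.items())
    )

req = {"bandwidth_hz": 15000, "temp_max_c": 100}
print(candidates(req, sensors))
```

An ontology adds value over such a flat filter by normalizing units and relating attribute names, so that semantically equivalent specifications become comparable.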

  14. Mercury: Reusable software application for Metadata Management, Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.

    2009-12-01

    Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury is itself a reusable toolset for metadata, with current use in 12 different projects. Mercury also supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve software reusability across the projects which currently fund the continuing development of Mercury. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury’s architecture includes three major reusable components: a harvester engine, an indexing system, and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software was packaged in such a way that all the Mercury projects use the same harvester scripts, with each project driven by a set of configuration files.
    The harvested files are then passed to the indexing system, where each of the fields in these structured metadata records is indexed properly, so that the query engine can perform simple, keyword, spatial, and temporal searches across these metadata sources. The search user interface software has two API categories: a common core API, used by all the Mercury user interfaces for querying the index, and a customized API for project-specific user interfaces. For our work in producing a reusable, portable, robust, feature-rich application, Mercury received a 2008 NASA Earth Science Data Systems Software Reuse Working Group Peer-Recognition Software Reuse Award. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as the Thesaurus Service, Gazetteer Web Service, and UDDI Directory Services. The software also provides various search services, including RSS, Geo-RSS, OpenSearch, Web Services, and Portlets, an integrated shopping cart for ordering datasets from various data centers (ORNL DAAC, NSIDC), and integrated visualization tools. Other features include filtering and dynamic sorting of search results, bookmarkable search results, and the ability to save, retrieve, and modify search criteria.
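    The reusable-harvester design (one harvester engine, per-project configuration files) can be sketched as follows; the project names, URLs, and fields are invented, and the `harvest` stub stands in for fetching remote metadata records.

```python
# Illustrative sketch: the same harvester code serves every project,
# with behaviour set entirely by a configuration entry. All values here
# are placeholders, not Mercury's actual configuration format.

configs = {
    "project-land": {"sources": ["http://example.org/a"],
                     "fields": ["title", "bbox"]},
    "project-ocean": {"sources": ["http://example.org/b"],
                      "fields": ["title", "depth"]},
}

def harvest(source):
    # Stand-in for fetching metadata records from a remote server.
    return [{"title": f"record from {source}", "bbox": "...", "depth": "..."}]

def run_harvester(config):
    """Build index entries keeping only the project's configured fields."""
    index = []
    for source in config["sources"]:
        for record in harvest(source):
            index.append({k: record.get(k) for k in config["fields"]})
    return index

print(run_harvester(configs["project-land"]))
```

Keeping project differences in data rather than code is what lets one harvester serve a dozen projects.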

  15. Using Arden Syntax for the creation of a multi-patient surveillance dashboard.

    PubMed

    Kraus, Stefan; Drescher, Caroline; Sedlmayr, Martin; Castellanos, Ixchel; Prokosch, Hans-Ulrich; Toddenroth, Dennis

    2015-10-09

    Most practically deployed Arden-Syntax-based clinical decision support (CDS) modules process data from individual patients. The specification of Arden Syntax, however, would in principle also support multi-patient CDS. The patient data management system (PDMS) at our local intensive care units does not natively support patient overviews from customizable CDS routines, but local physicians indicated a demand for multi-patient tabular overviews of important clinical parameters such as key laboratory measurements. As our PDMS installation provides Arden Syntax support, we set out to explore the capability of Arden Syntax for multi-patient CDS by implementing a prototypical dashboard for visualizing laboratory findings from sets of patients. Our implementation leveraged the object data type, supported by later versions of Arden Syntax, which turned out to be serviceable for representing complex input data from several patients. For our prototype, we designed a modularized architecture that separates the definition of technical operations, in particular the control of the patient context, from the actual clinical knowledge. Individual Medical Logic Modules (MLMs) for processing single patient attributes could then be developed according to well-tried Arden Syntax conventions. We successfully implemented a working dashboard prototype entirely in Arden Syntax. The architecture consists of a controller MLM to handle the patient context, a presenter MLM to generate the dashboard view, and a set of traditional MLMs containing the clinical decision logic. Our prototype could be integrated into the graphical user interface of the local PDMS. We observed that, with realistic input data, an average execution time of about 200 ms for generating dashboard views constituted acceptable performance. Our study demonstrated the general feasibility of creating multi-patient CDS routines in Arden Syntax.
We believe that our prototypical dashboard also suggests that such implementations can be relatively easy, and may simultaneously hold promise for sharing dashboards between institutions and reusing elementary components for additional dashboards. Copyright © 2015 Elsevier B.V. All rights reserved.
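    The controller/presenter/logic split can be imitated outside Arden Syntax; in this rough Python analogue each "MLM" is a plain function, and the clinical rule and threshold are fabricated for illustration, not taken from the paper.

```python
# Sketch of the three-part dashboard architecture: a controller that
# sets the patient context, reusable clinical-logic "MLMs", and a
# presenter that renders the view. The rule below is invented.

def mlm_creatinine_flag(patient):          # clinical-logic "MLM"
    return "HIGH" if patient["creatinine"] > 1.2 else "ok"

def controller(patients, logic_mlms):      # controls the patient context
    return [{"id": p["id"],
             **{name: mlm(p) for name, mlm in logic_mlms.items()}}
            for p in patients]

def presenter(rows):                       # renders the dashboard view
    return "\n".join(f"{r['id']}: {r['creatinine']}" for r in rows)

patients = [{"id": "P1", "creatinine": 0.9},
            {"id": "P2", "creatinine": 1.8}]
rows = controller(patients, {"creatinine": mlm_creatinine_flag})
print(presenter(rows))
```

The appeal of the separation is the same as in the paper: the single-attribute logic modules stay reusable across dashboards, while only the controller knows about the multi-patient context.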

  16. Parallel heterogeneous architectures for efficient OMP compressive sensing reconstruction

    NASA Astrophysics Data System (ADS)

    Kulkarni, Amey; Stanislaus, Jerome L.; Mohsenin, Tinoosh

    2014-05-01

    Compressive Sensing (CS) is a novel scheme in which a signal that is sparse in a known transform domain can be reconstructed using fewer samples. The signal reconstruction techniques are computationally intensive and have sluggish performance, which makes them impractical for real-time processing applications. This paper presents novel architectures for the Orthogonal Matching Pursuit (OMP) algorithm, one of the popular CS reconstruction algorithms. We show the implementation results of the proposed architectures on FPGA, ASIC, and a custom many-core platform. For the FPGA and ASIC implementations, a novel thresholding method is used to reduce the processing time for the optimization problem by at least 25%. For the custom many-core platform, efficient parallelization techniques are applied to reconstruct signals with varying signal lengths N and sparsity m. The algorithm is divided into three kernels. Each kernel is parallelized to reduce execution time, while efficient reuse of the matrix operators allows us to reduce area. Matrix operations are efficiently parallelized by taking advantage of blocked algorithms. For demonstration purposes, all architectures reconstruct a 256-length signal with maximum sparsity of 8 using 64 measurements. The implementation on a Xilinx Virtex-5 FPGA requires 27.14 μs to reconstruct the signal using basic OMP, and 18 μs with the thresholding method. The ASIC implementation reconstructs the signal in 13 μs, while our custom many-core, operating at 1.18 GHz, takes 18.28 μs to complete. Our results show that, compared to previously published work on the same algorithm and matrix size, the proposed FPGA and ASIC architectures perform 1.3× and 1.8× faster, respectively. The proposed many-core implementation also performs 3000× faster than the CPU and 2000× faster than the GPU.
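    For reference, the OMP algorithm that these architectures accelerate can be written compactly in plain Python (the paper's contribution is the hardware architectures, not this code). This sketch solves the least-squares step via normal equations with a small Gaussian-elimination solver, adequate for the tiny example but not numerically robust at realistic sizes.

```python
# Plain-Python Orthogonal Matching Pursuit on a tiny example.

def solve(M, b):
    """Gauss-Jordan elimination with partial pivoting (small systems)."""
    n = len(M)
    M = [row[:] + [bi] for row, bi in zip(M, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def omp(A, y, sparsity):
    m, n = len(A), len(A[0])
    residual, support, x = y[:], [], [0.0] * n
    for _ in range(sparsity):
        # 1) Pick the column most correlated with the residual.
        best = max(range(n), key=lambda j: abs(
            sum(A[i][j] * residual[i] for i in range(m))))
        if best not in support:
            support.append(best)
        # 2) Least-squares fit on the support via normal equations.
        G = [[sum(A[i][p] * A[i][q] for i in range(m)) for q in support]
             for p in support]
        b = [sum(A[i][p] * y[i] for i in range(m)) for p in support]
        coef = solve(G, b)
        # 3) Update the residual.
        for i in range(m):
            residual[i] = y[i] - sum(A[i][p] * c
                                     for p, c in zip(support, coef))
    for p, c in zip(support, coef):
        x[p] = c
    return x

# y is a 2-sparse combination of columns 0 and 2 of A.
A = [[1.0, 0.0, 0.0, 0.5],
     [0.0, 1.0, 0.0, 0.5],
     [0.0, 0.0, 1.0, 0.0]]
y = [3.0, 0.0, 2.0]
print([round(v, 6) for v in omp(A, y, sparsity=2)])
```

The least-squares step in the loop is exactly the part the paper's thresholding method and kernel decomposition target.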

  17. Judicious use of custom development in an open source component architecture

    NASA Astrophysics Data System (ADS)

    Bristol, S.; Latysh, N.; Long, D.; Tekell, S.; Allen, J.

    2014-12-01

    Modern software engineering is not so much programming from scratch as innovative assembly of existing components. Seamlessly integrating disparate components into a scalable, performant architecture requires sound engineering craftsmanship and can often result in increased cost efficiency and accelerated capabilities if software teams focus their creativity on the edges of the problem space. ScienceBase is part of the U.S. Geological Survey scientific cyberinfrastructure, providing data and information management, distribution services, and analysis capabilities in a way that strives to follow this pattern. ScienceBase leverages open source NoSQL and relational databases, search indexing technology, spatial service engines, numerous libraries, and one proprietary but necessary software component in its architecture. The primary engineering focus is cohesive component interaction, including construction of a seamless Application Programming Interface (API) across all elements. The API allows researchers and software developers alike to leverage the infrastructure in unique, creative ways. Scaling the ScienceBase architecture and core API with increasing data volume (more databases) and complexity (integrated science problems) is a primary challenge addressed by judicious use of custom development in the component architecture. Other data management and informatics activities in the earth sciences have independently converged on a similar design of reusing and building upon established technology, and are working through similar issues in managing and developing information (e.g., U.S. Geoscience Information Network; NASA's Earth Observing System Clearing House; GSToRE at the University of New Mexico).
Recent discussions facilitated through the Earth Science Information Partners are exploring potential avenues to exploit the implicit relationships between similar projects for explicit gains in our ability to more rapidly advance global scientific cyberinfrastructure.

  18. Analyzing and designing object-oriented missile simulations with concurrency

    NASA Astrophysics Data System (ADS)

    Randorf, Jeffrey Allen

    2000-11-01

    A software object model for the six-degree-of-freedom missile modeling domain is presented. As a precursor, a domain analysis of the missile modeling domain was undertaken, based on the Feature-Oriented Domain Analysis (FODA) technique described by the Software Engineering Institute (SEI). It was subsequently determined that the FODA methodology is functionally equivalent to the Object Modeling Technique (OMT). The analysis used legacy software documentation and code from the ENDOSIM, KDEC, and TFrames 6-DOF modeling tools, as well as other technical literature. The SEI Object Connection Architecture (OCA) was the template for designing the object model. Three variants of the OCA were considered: a reference structure, a recursive structure, and a reference structure with augmentation for flight vehicle modeling. The reference OCA design option was chosen for maintaining simplicity while not compromising the expressive power of the OMT model. The missile architecture was then analyzed for potential areas of concurrent computing. It was shown how protected objects could be used for data passing between OCA object managers, allowing concurrent access without changing the OCA reference design intent or structure. The implementation language was the 1995 release of Ada, and it was shown how OCA software components can be expressed as Ada child packages. While acceleration of several low-level and higher-level operations is possible on suitable hardware, there was a 33% degradation in the performance of a 4th-order Runge-Kutta integrator on two simultaneous ordinary differential equations when using Ada tasking on a single-processor machine. The Defense Department's High Level Architecture (HLA) was introduced and explained in context with the OCA. It was shown that the HLA and OCA are not mutually exclusive architectures, but complementary: the HLA serves as an interoperability solution, with the OCA as an architectural vehicle for software reuse.
Further directions for implementing a 6-DOF missile modeling environment are discussed.
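    A rough Python analogue of the protected-object idea (serialized access to state shared between concurrent object managers) is a lock-guarded buffer; the manager name and payload below are illustrative only, not from the missile model.

```python
# Ada protected objects serialize access to shared state. A minimal
# Python counterpart for data passing between "object managers" wraps
# the state in a mutex. Names and data are invented examples.

import threading

class ProtectedBuffer:
    """Mutually exclusive access to state shared by concurrent managers."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None
    def put(self, value):
        with self._lock:
            self._value = value
    def get(self):
        with self._lock:
            return self._value

state = ProtectedBuffer()

def aero_manager():        # stands in for one OCA "object manager"
    state.put({"drag": 0.31})

t = threading.Thread(target=aero_manager)
t.start()
t.join()
print(state.get())
```

As in the OCA case, the point is that the managers' design and interfaces are unchanged; only the shared buffer knows about concurrency.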

  19. On Some Aspects of Study on Dimensions and Proportions of Church Architecture

    NASA Astrophysics Data System (ADS)

    Kolobaeva, T. V.

    2017-11-01

    Architecture forms and arranges the environment required for comfortable life and human activity. The modern principles of architectural space arrangement and form making are embodied in a reliable system of knowledge used in design. Architects apply these principles and this knowledge of space arrangement, together with the study of specialist and regulatory literature, when performing a particular creative task. This system of accumulated knowledge is perceived as an existing stereotype, without regard for the understanding of form making and the experience of the architects and thinkers of previous ages. We attempt to restore this connection, since the specific form-making regularities known to ancient architects should be taken into account. The paper gives an insight into some aspects of the traditional dimensions and proportions of church architecture.

  20. Knowledge Framework Implementation with Multiple Architectures - 13090

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhyay, H.; Lagos, L.; Quintero, W.

    2013-07-01

    Multiple kinds of knowledge management systems are operational in public and private enterprises, and in large and small organizations with a variety of business models, which makes the design, implementation and operation of integrated knowledge systems very difficult. In recent years there has been sweeping advancement in the information technology area, leading to the development of sophisticated frameworks and architectures. These platforms need to be used for the development of integrated knowledge management systems that provide a common platform for sharing knowledge across the enterprise, thereby reducing operational inefficiencies and delivering cost savings. This paper discusses the knowledge framework and architecture that can be used for system development, and its application to a real-life need of the nuclear industry. A case study of deactivation and decommissioning (D and D) is discussed with the Knowledge Management Information Tool platform and framework. D and D work is a high-priority activity across the Department of Energy (DOE) complex. Subject matter specialists (SMS) associated with DOE sites, the Energy Facility Contractors Group (EFCOG) and the D and D community have gained extensive knowledge and experience over the years in the cleanup of the legacy waste from the Manhattan Project. To prevent the D and D knowledge and expertise from being lost over time from the evolving and aging workforce, DOE and the Applied Research Center (ARC) at Florida International University (FIU) proposed to capture and maintain this valuable information in a universally available and easily usable system. (authors)

  1. Towards a standardised representation of a knowledge base for adverse drug event prevention.

    PubMed

    Koutkias, Vassilis; Lazou, Katerina; de Clercq, Paul; Maglaveras, Nicos

    2011-01-01

    Knowledge representation is an important part of knowledge engineering activities that is crucial for enabling knowledge sharing and reuse. In this regard, standardised formalisms and technologies play a significant role. Especially for the medical domain, where knowledge may be tacit, not articulated and highly diverse, the development and adoption of standardised knowledge representations is highly challenging and of utmost importance for achieving knowledge interoperability. To this end, this paper presents a research effort towards the standardised representation of a Knowledge Base (KB) encapsulating rule-based signals and procedures for Adverse Drug Event (ADE) prevention. The KB constitutes an integral part of Clinical Decision Support Systems (CDSSs) to be used at the point of care. The paper highlights the knowledge representation requirements of the domain of discourse, according to which GELLO (an HL7 and ANSI standard) has been adopted. Results of our prototype implementation are presented along with the advantages and limitations introduced by the employed approach.

  2. Proposal for Re-Usable TODO Knowledge Management System RESTER

    NASA Astrophysics Data System (ADS)

    Saga, Ryosuke; Kageyama, Akinori; Tsuji, Hiroshi

    This paper describes how to reuse a series of ad-hoc tasks such as special meeting arrangements and equipment procurement. Our RESTER (Reusable TODO Synthesizer) allows a group to reuse a series of tasks recorded in a case database. Given a specific event, RESTER repairs the retrieved similar case using an ontology that describes the relationships between concepts in the organization. The user has a chance to check the modified case and to update it if the repair is incorrect because of a deficient ontology. The user is also asked to judge whether the retrieved case works; if it is judged useful, the case comes to be reused more frequently. Thus, RESTER works under the premise of human-computer collaboration. Based on the presented framework, this paper identifies several desirable attributes: (1) RESTER allows a group to externalize its experience of jobs, (2) externalized experiences are connected in the case database, (3) a case is internalized by another group when it is retrieved and repaired for a new event, and (4) a new job generated from a previous similar job of one group is socialized by the other group.

  3. Numerical Modeling of the Lake Mary Road Bridge for Foundation Reuse Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitek, M. A.; Bojanowski, C.; Lottes, S. A.

    This project uses numerical techniques to assess the structural integrity and capacity of the bridge foundations and, as a result, reduces the risk associated with reusing the same foundation for a new superstructure. Nondestructive test methods of different types were used in combination with numerical modeling and analysis. The onsite tests included visual inspection, tomography, ground penetrating radar, and the drilling of boreholes and coreholes, together with laboratory tests on recovered samples. The results were used to identify the current geometry of the structure and foundation, including the hidden geometry of the abutments and piers, and the soil and foundation material properties. This data was used to build the numerical models and run computational analyses on a high-performance computing cluster to assess the structural integrity of the bridge and foundations, including the suitability of the foundation for reuse with a new superstructure and traffic that will increase the load on the foundations. Computational analysis is more cost-effective and gives the advantage of more detailed knowledge of the structural response. It also makes it possible to go beyond non-destructive testing and find the failure conditions without destroying the structure under consideration.

  4. Arranging ISO 13606 archetypes into a knowledge base using UML connectors.

    PubMed

    Kopanitsa, Georgy

    2014-01-01

    To enable the efficient reuse of standards-based medical data we propose to develop a higher-level information model that will complement the archetype model of ISO 13606. This model will make use of the relationships that are specified in UML to connect medical archetypes into a knowledge base within a repository. UML connectors were analysed for their ability to be applied in the implementation of a higher-level model that establishes relationships between archetypes. An information model was developed using XML Schema notation. The model allows different archetypes of one repository to be linked into a knowledge base. Presently it supports several relationships and will be extended in the future.
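
    The idea of linking archetypes through UML-style typed relationships can be illustrated as a small relationship registry. This Python sketch is for illustration only: the archetype names and relationship kinds below are hypothetical, and the paper's actual model is expressed in XML Schema notation rather than code.

    ```python
    # Registry of typed links between archetypes, loosely following UML connector kinds
    # such as "association" and "aggregation". All identifiers here are hypothetical.
    links = []

    def associate(src, dst, kind):
        """Record a typed relationship between two archetypes in the knowledge base."""
        links.append((src, dst, kind))

    def related(archetype):
        """Return the archetypes linked from the given one, with the relationship kind."""
        return [(dst, kind) for src, dst, kind in links if src == archetype]

    associate("BloodPressure", "Encounter", "aggregation")
    associate("BloodPressure", "Device", "association")

    print(related("BloodPressure"))  # [('Encounter', 'aggregation'), ('Device', 'association')]
    ```

    A real repository would persist such links alongside the archetypes and validate them against the schema, but the lookup pattern is the same.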

  5. Meta-Synthetic Support Frameworks for Reuse of Government Information Resources on City Travel and Traffic: The Case of Beijing

    ERIC Educational Resources Information Center

    An, Xiaomi; Xu, Shaotong; Mu, Yong; Wang, Wei; Bai, Xian Yang; Dawson, Andy; Han, Hongqi

    2012-01-01

    Purpose: The purpose of this paper is to propose meta-synthetic ideas and knowledge asset management approaches to build a comprehensive strategic framework for Beijing City in China. Design/methodology/approach: Methods include a review of relevant literature in both English and Chinese, case studies of different types of support frameworks in…

  6. Ethics Issues of Digital Contents for Pre-Service Primary Teachers: A Gamification Experience for Self-Assessment with Socrative

    ERIC Educational Resources Information Center

    Pérez Garcias, Adolfina; Marín, Victoria I.

    2016-01-01

    The knowledge society has brought many possibilities for open education practices and, simultaneously, deep ethical challenges related to the use, sharing and reuse of digital content. In fact, even at university level, many undergraduate students do not respect the licences of digital resources. As part of the contents of a third-year educational…

  7. Reusing Comprehensive Charts of Tense Forms to Teach EFL Students in a University of Science and Technology

    ERIC Educational Resources Information Center

    Tsai, Min-Hsiu

    2017-01-01

    This study investigated 228 English as foreign language freshmen at a university of science and technology in southern Taiwan to explore the participants' knowledge of English tense forms by recognizing 12 tense forms and translating Chinese sentences into English with specific tense forms. The results showed that the participants who were taught…

  8. Students' Knowledge Sources and Knowledge Sharing in the Design Studio--An Exploratory Study

    ERIC Educational Resources Information Center

    Chiu, Sheng-Hsiao

    2010-01-01

    Architectural design is a knowledge-intensive activity; however, students frequently lack sufficient knowledge when they practice design. Collaborative learning can supplement the students' insufficient expertise. Successful collaborative learning relies on knowledge sharing between students. This implies that the peers are a considerable design…

  9. Utilizing Expert Knowledge in Estimating Future STS Costs

    NASA Technical Reports Server (NTRS)

    Fortner, David B.; Ruiz-Torres, Alex J.

    2004-01-01

    A method of estimating the costs of future space transportation systems (STSs) involves classical activity-based cost (ABC) modeling combined with systematic utilization of the knowledge and opinions of experts to extend the process-flow knowledge of existing systems to systems that involve new materials and/or new architectures. The expert knowledge is particularly helpful in filling gaps that arise in computational models of processes because of inconsistencies in historical cost data. Heretofore, the costs of planned STSs have been estimated following a "top-down" approach that tends to force the architectures of new systems to incorporate process flows like those of the space shuttles. In this ABC-based method, one makes assumptions about the processes, but otherwise follows a "bottom-up" approach that does not force the new system architecture to incorporate a space-shuttle-like process flow. Prototype software has been developed to implement this method. Through further development of the software, it should be possible to extend the method beyond the space program to almost any setting in which there is a need to estimate the costs of a new system and to extend the applicable knowledge base in order to make the estimate.
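
    At its core, a bottom-up activity-based estimate sums driver quantities times cost rates over an assumed process flow. A minimal Python sketch, with entirely hypothetical activities, quantities, and rates (not figures from the NASA work):

    ```python
    def abc_estimate(activities):
        """Bottom-up activity-based cost: sum of (driver quantity x cost rate) per step."""
        return sum(qty * rate for _, qty, rate in activities)

    # Hypothetical process flow for a new vehicle architecture:
    # (activity name, driver quantity e.g. labor hours, cost rate per unit)
    steps = [
        ("inspection",  40, 120.0),
        ("integration", 200, 95.0),
        ("test",        60, 150.0),
    ]
    print(abc_estimate(steps))  # 32800.0
    ```

    Expert judgment enters by supplying the quantities and rates for process steps that have no historical analogue.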

  10. Knowledge Management in Role Based Agents

    NASA Astrophysics Data System (ADS)

    Kır, Hüseyin; Ekinci, Erdem Eser; Dikenelli, Oguz

    In the multi-agent system literature, the role concept is increasingly researched as an abstraction to scope the beliefs, norms and goals of agents and to shape the relationships of the agents in the organization. In this research, we propose a knowledgebase architecture to increase the applicability of roles in the MAS domain by drawing inspiration from the self concept in the role theory of sociology. The proposed knowledgebase architecture has a granulated structure that is dynamically organized according to the agent's identification in a social environment. Thanks to this dynamic structure, agents are enabled to work on consistent knowledge in spite of inevitable conflicts between roles and the agent. The knowledgebase architecture has also been implemented and incorporated into the SEAGENT multi-agent system development framework.

  11. An acceleration system for Laplacian image fusion based on SoC

    NASA Astrophysics Data System (ADS)

    Gao, Liwen; Zhao, Hongtu; Qu, Xiujie; Wei, Tianbo; Du, Peng

    2018-04-01

    Based on an analysis of the Laplacian image fusion algorithm, this paper proposes a partially pipelined and modular processing architecture, and an SoC-based acceleration system is implemented accordingly. Full pipelining is used in the design of each module, and modules in series form the partial pipeline with a unified data format, which is easy to manage and reuse. Integrated with an ARM processor, DMA and an embedded bare-metal program, the system computes a 4-layer Laplacian pyramid on the Zynq-7000 board. Experiments show that, with small resource consumption, a pair of 256×256 images can be fused within 1 ms while maintaining good fusion quality.
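
    The Laplacian-pyramid fusion that such hardware accelerates can be sketched in software. The following Python/NumPy sketch uses a simple average-pool downsample, nearest-neighbour upsample, and max-magnitude coefficient selection; the paper's FPGA pipeline will differ in its filtering and data-path details, so this is an illustration of the algorithm, not of the implementation.

    ```python
    import numpy as np

    def down(img):
        # 2x2 average-pool downsample (image dims assumed divisible by 2)
        return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

    def up(img):
        # nearest-neighbour 2x upsample
        return img.repeat(2, axis=0).repeat(2, axis=1)

    def laplacian_pyramid(img, levels):
        pyr, cur = [], img
        for _ in range(levels):
            low = down(cur)
            pyr.append(cur - up(low))   # detail (band-pass) layer
            cur = low
        pyr.append(cur)                  # residual low-pass layer
        return pyr

    def fuse(a, b, levels=4):
        pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
        # at each level, keep the coefficient with the larger magnitude
        fused = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(pa, pb)]
        out = fused[-1]
        for layer in reversed(fused[:-1]):
            out = up(out) + layer        # collapse the pyramid
        return out
    ```

    Fusing an image with itself reconstructs it exactly, which is a handy sanity check for the pyramid round trip.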

  12. Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS)

    NASA Technical Reports Server (NTRS)

    Masek, Jeffrey G.

    2006-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) project is creating a record of forest disturbance and regrowth for North America from the Landsat satellite record, in support of the carbon modeling activities. LEDAPS relies on the decadal Landsat GeoCover data set supplemented by dense image time series for selected locations. Imagery is first atmospherically corrected to surface reflectance, and then change detection algorithms are used to extract disturbance area, type, and frequency. Reuse of the MODIS Land processing system (MODAPS) architecture allows rapid throughput of over 2200 MSS, TM, and ETM+ scenes. Initial ("Beta") surface reflectance products are currently available for testing, and initial continental disturbance products will be available by the middle of 2006.

  13. Network flexibility of the IRIDIUM (R) Global Mobile Satellite System

    NASA Technical Reports Server (NTRS)

    Hutcheson, Jonathan; Laurin, Mala

    1995-01-01

    The IRIDIUM system is a global personal communications system supported by a constellation of 66 low earth orbit (LEO) satellites and a collection of earth-based 'gateway' switching installations. Like traditional wireless cellular systems, coverage is achieved by a grid of cells in which bandwidth is reused for spectral efficiency. Unlike any cellular system ever built, the moving cells can be shared by multiple switching facilities. Noteworthy features of the IRIDIUM system include inter-satellite links, a GSM-based telephony architecture, and a geographically controlled system access process. These features, working in concert, permit flexible and reliable administration of the worldwide service area by gateway operators. This paper will explore this unique concept.

  14. A Bridging Opportunities Work-frame to develop mobile applications for clinical decision making

    PubMed Central

    van Rooij, Tibor; Rix, Serena; Moore, James B; Marsh, Sharon

    2015-01-01

    Background: Mobile applications (apps) providing clinical decision support (CDS) may show the greatest promise when created by and for frontline clinicians. Our aim was to create a generic model enabling healthcare providers to direct the development of CDS apps. Methods: We combined Change Management with a three-tier information technology architecture to stimulate CDS app development. Results: A Bridging Opportunities Work-frame model was developed. A test case was used to successfully develop an app. Conclusion: Healthcare providers can re-use this globally applicable model to actively create and manage regional decision support applications to translate evidence-based medicine in the use of emerging medication or novel treatment regimens. PMID:28031883

  15. Open exchange of scientific knowledge and European copyright: The case of biodiversity information

    PubMed Central

    Egloff, Willi; Patterson, David J.; Agosti, Donat; Hagedorn, Gregor

    2014-01-01

    Abstract Background. The 7th Framework Programme for Research and Technological Development is helping the European Union to prepare for an integrative system for intelligent management of biodiversity knowledge. The infrastructure that is envisaged and that will be further developed within the Programme “Horizon 2020” aims to provide open and free access to taxonomic information to anyone with a requirement for biodiversity data, without the need for individual consent of other persons or institutions. Open and free access to information will foster the re-use and improve the quality of data, will accelerate research, and will promote new types of research. Progress towards the goal of free and open access to content is hampered by numerous technical, economic, sociological, legal, and other factors. The present article addresses barriers to the open exchange of biodiversity knowledge that arise from European laws, in particular European legislation on copyright and database protection rights. We present a legal point of view as to what will be needed to bring distributed information together and facilitate its re-use by data mining, integration into semantic knowledge systems, and similar techniques. We address exceptions and limitations of copyright or database protection within Europe, and we point to the importance of data use agreements. We illustrate how exceptions and limitations have been transformed into national legislations within some European states to create inconsistencies that impede access to biodiversity information. Conclusions. The legal situation within the EU is unsatisfactory because there are inconsistencies among states that hamper the deployment of an open biodiversity knowledge management system. Scientists within the EU who work with copyright protected works or with protected databases have to be aware of regulations that vary from country to country. 
This is a major stumbling block to international collaboration and is an impediment to the open exchange of biodiversity knowledge. Such differences should be removed by unifying exceptions and limitations for research purposes in a binding, Europe-wide regulation. PMID:25009418

  16. Knowledge Evolution in Distributed Geoscience Datasets and the Role of Semantic Technologies

    NASA Astrophysics Data System (ADS)

    Ma, X.

    2014-12-01

    Knowledge evolves in geoscience, and the evolution is reflected in datasets. In a context with distributed data sources, the evolution of knowledge may cause considerable challenges to data management and re-use. For example, a short news item published in 2009 (Mascarelli, 2009) revealed the geoscience community's concern that the International Commission on Stratigraphy's change to the definition of the Quaternary might require heavy reworking of geologic maps. Now we are in the era of the World Wide Web, and geoscience knowledge is increasingly modeled and encoded in the form of ontologies and vocabularies by using semantic technologies. Accordingly, knowledge evolution leads to a consequence called ontology dynamics. Flouris et al. (2008) summarized 10 topics of general ontology changes/dynamics, such as ontology mapping, morphism, evolution, debugging and versioning. Ontology dynamics makes an impact at several stages of a data life cycle and causes challenges such as the need to rework extant data in a data center, semantic mismatch among data sources, differing understandings of the same dataset between data providers and data users, and error propagation in cross-discipline data discovery and re-use (Ma et al., 2014). This presentation will analyze the best practices in the geoscience community so far and summarize a few recommendations to reduce the negative impacts of ontology dynamics in a data life cycle, including communities of practice and collaboration on ontology and vocabulary building, linking data records to standardized terms, and methods for (semi-)automatic reworking of datasets using semantic technologies. References: Flouris, G., Manakanatas, D., Kondylakis, H., Plexousakis, D., Antoniou, G., 2008. Ontology change: classification and survey. The Knowledge Engineering Review 23 (2), 117-152. Ma, X., Fox, P., Rozell, E., West, P., Zednik, S., 2014.
Ontology dynamics in a data life cycle: Challenges and recommendations from a Geoscience Perspective. Journal of Earth Science 25 (2), 407-412. Mascarelli, A.L., 2009. Quaternary geologists win timescale vote. Nature 459, 624.

  17. A Lovely Building for Difficult Knowledge: The Architecture of the Canadian Museum for Human Rights

    ERIC Educational Resources Information Center

    Wodtke, Larissa

    2015-01-01

    One only needs to look at the Canadian Museum for Human Rights (CMHR) logo, with its abstract outline of the CMHR building, to see the way in which the museum's architecture has come to stand for the CMHR's immaterial meanings and content. The CMHR's architecture becomes a material intersection of discourses of cosmopolitanism, human rights, and…

  18. Teaching History of Architecture--Moving from a Knowledge Transfer to a Multi-Participative Methodology Based on IT Tools

    ERIC Educational Resources Information Center

    Cimadomo, Guido

    2014-01-01

    The changes that the European Higher Education Area (EHEA) framework obliged the School of Architecture of Malaga, University of Malaga. to make to its "History of Architecture" course are discussed in this paper. It was taken up as an opportunity to modify the whole course, introducing creative teaching and "imaginative…

  19. An Architecture for Performance Optimization in a Collaborative Knowledge-Based Approach for Wireless Sensor Networks

    PubMed Central

    Gadeo-Martos, Manuel Angel; Fernandez-Prieto, Jose Angel; Canada-Bago, Joaquin; Velasco, Juan Ramon

    2011-01-01

    Over the past few years, Intelligent Spaces (ISs) have received the attention of many Wireless Sensor Network researchers. Recently, several studies have been devoted to identifying their common capacities and to setting up ISs over these networks. However, little attention has been paid to integrating Fuzzy Rule-Based Systems into collaborative Wireless Sensor Networks for the purpose of implementing ISs. This work presents a distributed architecture proposal for collaborative Fuzzy Rule-Based Systems embedded in Wireless Sensor Networks, which has been designed to optimize the implementation of ISs. This architecture includes the following: (a) an optimized design for the inference engine; (b) a visual interface; (c) a module to reduce the redundancy and complexity of the knowledge bases; (d) a module to evaluate the accuracy of the new knowledge base; (e) a module to adapt the format of the rules to the structure used by the inference engine; and (f) a communications protocol. As a real-world application of this architecture and the proposed methodologies, we show an application to the problem of modeling two plagues of the olive tree: prays (olive moth, Prays oleae Bern.) and repilo (caused by the fungus Spilocaea oleagina). The results show that the architecture presented in this paper significantly decreases the consumption of resources (memory, CPU and battery) without a substantial decrease in the accuracy of the inferred values. PMID:22163687

  20. An architecture for performance optimization in a collaborative knowledge-based approach for wireless sensor networks.

    PubMed

    Gadeo-Martos, Manuel Angel; Fernandez-Prieto, Jose Angel; Canada-Bago, Joaquin; Velasco, Juan Ramon

    2011-01-01

    Over the past few years, Intelligent Spaces (ISs) have received the attention of many Wireless Sensor Network researchers. Recently, several studies have been devoted to identifying their common capacities and to setting up ISs over these networks. However, little attention has been paid to integrating Fuzzy Rule-Based Systems into collaborative Wireless Sensor Networks for the purpose of implementing ISs. This work presents a distributed architecture proposal for collaborative Fuzzy Rule-Based Systems embedded in Wireless Sensor Networks, which has been designed to optimize the implementation of ISs. This architecture includes the following: (a) an optimized design for the inference engine; (b) a visual interface; (c) a module to reduce the redundancy and complexity of the knowledge bases; (d) a module to evaluate the accuracy of the new knowledge base; (e) a module to adapt the format of the rules to the structure used by the inference engine; and (f) a communications protocol. As a real-world application of this architecture and the proposed methodologies, we show an application to the problem of modeling two plagues of the olive tree: prays (olive moth, Prays oleae Bern.) and repilo (caused by the fungus Spilocaea oleagina). The results show that the architecture presented in this paper significantly decreases the consumption of resources (memory, CPU and battery) without a substantial decrease in the accuracy of the inferred values.

  1. Machine Learning-based Intelligent Formal Reasoning and Proving System

    NASA Astrophysics Data System (ADS)

    Chen, Shengqing; Huang, Xiaojian; Fang, Jiaze; Liang, Jia

    2018-03-01

    Reasoning systems can be used in many fields, and improving reasoning efficiency is at the core of their design. By combining a formal description of proofs with a rule-matching algorithm and a machine learning algorithm, the proposed system for intelligent formal reasoning and verification achieves high efficiency. The experimental results show that the system can verify the correctness of propositional logic reasoning and reuse previous reasoning results, so as to obtain the implicit knowledge in the knowledge base and provide a basic reasoning model for the construction of intelligent systems.
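
    Rule matching over propositional rules, with derived facts reused by later rule applications, can be illustrated by a minimal forward-chaining loop. This is a generic sketch of the technique, not the paper's system, and the example facts and rules are invented.

    ```python
    def forward_chain(facts, rules):
        """Compute the closure of `facts` under Horn rules (premise set -> conclusion)."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in known and all(p in known for p in premises):
                    known.add(conclusion)   # reuse: derived facts feed later rules
                    changed = True
        return known

    # Hypothetical rules: p -> q, and (q and r) -> s
    rules = [({"p"}, "q"), ({"q", "r"}, "s")]
    print(sorted(forward_chain({"p", "r"}, rules)))  # ['p', 'q', 'r', 's']
    ```

    Note how "s" is only derivable because the intermediate result "q" was kept and reused, which is the reuse behaviour the abstract describes.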

  2. Engineering Knowledge for Assistive Living

    NASA Astrophysics Data System (ADS)

    Chen, Liming; Nugent, Chris

    This paper introduces a knowledge based approach to assistive living in smart homes. It proposes a system architecture that makes use of knowledge in the lifecycle of assistive living. The paper describes ontology based knowledge engineering practices and discusses mechanisms for exploiting knowledge for activity recognition and assistance. It presents system implementation and experiments, and discusses initial results.

  3. Conservation Process Model (cpm): a Twofold Scientific Research Scope in the Information Modelling for Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Fiorani, D.; Acierno, M.

    2017-05-01

    The aim of the present research is to develop an instrument able to adequately support the conservation process by means of a twofold approach, based on both a BIM environment and ontology formalisation. Although BIM has been successfully experimented with in the AEC (Architecture Engineering Construction) field, it has shown many drawbacks for architectural heritage. Faced with the unicity and, more generally, the complexity of ancient buildings, the applications developed so far have adapted BIM to conservation design poorly, with unsatisfactory results (Dore, Murphy 2013; Carrara 2014). The aim is to combine the achievements reached within AEC through the BIM environment (design control and management) with an appropriate, semantically enriched and flexible representation. The presented model has at its core a knowledge base developed through information ontologies and oriented around the formalisation and computability of all the knowledge necessary for the full comprehension of the object of architectural heritage and its conservation. Such a knowledge representation is built upon conceptual categories defined above all within the scope of architectural criticism and conservation. The present paper aims at further extending the scope of conceptual modelling within cultural heritage conservation already formalised by the model. A special focus is directed at decay analysis and surface conservation projects.

  4. Parallel Implementation of MAFFT on CUDA-Enabled Graphics Hardware.

    PubMed

    Zhu, Xiangyuan; Li, Kenli; Salah, Ahmad; Shi, Lin; Li, Keqin

    2015-01-01

    Multiple sequence alignment (MSA) constitutes an extremely powerful tool for many biological applications including phylogenetic tree estimation, secondary structure prediction, and critical residue identification. However, aligning large biological sequences with popular tools such as MAFFT requires long runtimes on sequential architectures. Due to the ever-increasing sizes of sequence databases, there is increasing demand to accelerate this task. In this paper, we demonstrate how graphics processing units (GPUs), powered by the compute unified device architecture (CUDA), can be used as an efficient computational platform to accelerate the MAFFT algorithm. To fully exploit the GPU's capabilities for accelerating MAFFT, we have optimized the sequence data organization to eliminate the bandwidth bottleneck of memory access, designed a memory allocation and reuse strategy to make full use of the limited memory of GPUs, proposed a new modified-run-length encoding (MRLE) scheme to reduce memory consumption, and used high-performance shared memory to speed up I/O operations. Our implementation tested on three NVIDIA GPUs achieves a speedup of up to 11.28 on a Tesla K20m GPU compared to the sequential MAFFT 7.015.
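
    The MRLE scheme is described as a modification of run-length encoding. As an illustration of the underlying idea only (classic RLE, not the paper's modified scheme), a Python sketch over an alignment-style string:

    ```python
    def rle_encode(seq):
        """Classic run-length encoding: collapse runs into (symbol, count) pairs."""
        out = []
        for s in seq:
            if out and out[-1][0] == s:
                out[-1][1] += 1
            else:
                out.append([s, 1])
        return [(s, n) for s, n in out]

    def rle_decode(pairs):
        """Inverse of rle_encode: expand (symbol, count) pairs back to a string."""
        return "".join(s * n for s, n in pairs)

    enc = rle_encode("AAAACCGT--")
    print(enc)  # [('A', 4), ('C', 2), ('G', 1), ('T', 1), ('-', 2)]
    assert rle_decode(enc) == "AAAACCGT--"
    ```

    Long runs of identical residues or gap characters, common in alignments, are exactly what makes run-length schemes effective for reducing memory consumption.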

  5. Software Defined Radio with Parallelized Software Architecture

    NASA Technical Reports Server (NTRS)

    Heckler, Greg

    2013-01-01

    This software implements software-defined radio processing over multicore, multi-CPU systems in a way that maximizes the use of CPU resources in the system. The software treats each processing step in either a communications or navigation modulator or demodulator system as an independent, threaded block. Each threaded block is defined with a programmable number of input or output buffers; these buffers are implemented using POSIX pipes. In addition, each threaded block is assigned a unique thread upon block installation. A modulator or demodulator system is built by assembling the threaded blocks into a flow graph, which arranges the processing blocks to accomplish the desired signal processing. This software architecture allows the software to scale effortlessly between single-CPU/single-core computers and multi-CPU/multi-core computers without recompilation. NASA spaceflight and ground communications systems currently rely exclusively on ASICs or FPGAs. This software allows low- and medium-bandwidth (100 bps to approx. 50 Mbps) software defined radios to be designed and implemented solely in C/C++ software, while lowering development costs and facilitating reuse and extensibility.
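
    The flow-graph idea, independent threaded blocks joined by pipe-like buffers, can be sketched in Python with thread-safe queues standing in for the POSIX pipes. This is an illustrative sketch of the pattern, not the NASA implementation, and the two trivial processing stages are invented.

    ```python
    import threading
    import queue

    class Block(threading.Thread):
        """One processing step: reads items from an input buffer, writes results to an output buffer."""
        def __init__(self, func, inq, outq):
            super().__init__(daemon=True)
            self.func, self.inq, self.outq = func, inq, outq

        def run(self):
            while True:
                item = self.inq.get()
                if item is None:          # end-of-stream marker propagates downstream
                    self.outq.put(None)
                    return
                self.outq.put(self.func(item))

    # Assemble a two-stage flow graph: scale, then offset (hypothetical stages).
    q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
    Block(lambda x: x * 2, q0, q1).start()
    Block(lambda x: x + 1, q1, q2).start()

    for x in [1, 2, 3]:
        q0.put(x)
    q0.put(None)

    out = []
    while (v := q2.get()) is not None:
        out.append(v)
    print(out)  # [3, 5, 7]
    ```

    Because each stage runs on its own thread and blocks only on its buffers, the same graph scales across cores without changing the block code, which is the scaling property the abstract claims for the C/C++ design.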

  6. Software Defined Radio with Parallelized Software Architecture

    NASA Technical Reports Server (NTRS)

    Heckler, Greg

    2013-01-01

    This software implements software-defined radio processing over multi-core, multi-CPU systems in a way that maximizes the use of CPU resources in the system. The software treats each processing step in either a communications or navigation modulator or demodulator system as an independent, threaded block. Each threaded block is defined with a programmable number of input or output buffers; these buffers are implemented using POSIX pipes. In addition, each threaded block is assigned a unique thread upon block installation. A modulator or demodulator system is built by assembling the threaded blocks into a flow graph, which arranges the processing blocks to accomplish the desired signal processing. This software architecture allows the software to scale effortlessly between single-CPU/single-core computers and multi-CPU/multi-core computers without recompilation. NASA spaceflight and ground communications systems currently rely exclusively on ASICs or FPGAs. This software allows low- and medium-bandwidth (100 bps to approx. 50 Mbps) software defined radios to be designed and implemented solely in C/C++ software, while lowering development costs and facilitating reuse and extensibility.

  7. Towards the Use of Unmanned Aerial Systems for Providing Sustainable Services in Smart Cities

    PubMed Central

    Conejero, José M.; Rodríguez-Echeverría, Roberto

    2017-01-01

    Sustainability is at the heart of many application fields where the use of Unmanned Aerial Systems (UAS) is becoming more and more important (e.g., agriculture, fire detection and prediction, environmental surveillance, mapping, etc.). However, their usage and evolution are highly conditioned by the specific application field they are designed for, and thus, they cannot be easily reused among different application fields. From this point of view, since they are not multipurpose, we can say that they are not fully sustainable. Bearing this in mind, the objective of this paper is two-fold: on the one hand, to identify the whole set of features that must be provided by a UAS to be considered sustainable and to show that there is no UAS satisfying all these features; on the other hand, to present an open and sustainable UAS architecture that may be used to build UAS on demand to provide the features needed in each application field. Since this architecture is mainly based on software and hardware adaptability, it contributes to the technical sustainability of cities. PMID:29280984

  8. A Low Power SOC Architecture for the V2.0+EDR Bluetooth Using a Unified Verification Platform

    NASA Astrophysics Data System (ADS)

    Kim, Jeonghun; Kim, Suki; Baek, Kwang-Hyun

    This paper presents a low-power System on Chip (SOC) architecture for the v2.0+EDR (Enhanced Data Rate) Bluetooth and its applications. Our design includes a link controller, modem, RF transceiver, Sub-Band Codec (SBC), Expanded Instruction Set Computer (ESIC) processor, and peripherals. To decrease the power consumption of the proposed SOC, we reduce data transfer using a dual-port memory, include a power management unit, and employ a clock-gated approach. We also address some of the issues and benefits of a reusable and unified environment based on a centralized data structure and an SOC verification platform. This provides flexibility in meeting the final requirements using technology-independent tools wherever possible across various processes and projects. Further aims of this work are to minimize design effort by avoiding the same work being done twice by different people, and to reuse a similar environment and platform across projects. The chip occupies a die size of 30 mm2 in 0.18 µm CMOS, and the worst-case current of the total chip is 54 mA.

  9. Towards the Use of Unmanned Aerial Systems for Providing Sustainable Services in Smart Cities.

    PubMed

    Moguel, Enrique; Conejero, José M; Sánchez-Figueroa, Fernando; Hernández, Juan; Preciado, Juan C; Rodríguez-Echeverría, Roberto

    2017-12-27

    Sustainability is at the heart of many application fields in which the use of Unmanned Aerial Systems (UAS) is becoming increasingly important (e.g., agriculture, fire detection and prediction, environmental surveillance, mapping, etc.). However, their usage and evolution are highly conditioned by the specific application field they are designed for, and thus they cannot be easily reused across application fields. From this point of view, since they are not multipurpose, they cannot be considered fully sustainable. Bearing this in mind, the objective of this paper is two-fold: on the one hand, to identify the whole set of features that a UAS must provide to be considered sustainable, and to show that no existing UAS satisfies all of these features; on the other hand, to present an open and sustainable UAS architecture that may be used to build UAS on demand, providing the features needed in each application field. Since this architecture is mainly based on software and hardware adaptability, it contributes to the technical sustainability of cities.
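
    The "build UAS on demand" idea above can be read as a feature-coverage problem: candidate hardware/software modules each advertise the features they provide, and a configuration is assembled by covering the feature set an application field requires. The sketch below is a minimal greedy illustration of that idea; module names, feature names, and the greedy strategy are all invented for illustration and are not taken from the paper.

```python
# Hypothetical catalogue: each module maps to the features it provides.
MODULES = {
    "multispectral_camera": {"crop_monitoring", "mapping"},
    "thermal_camera": {"fire_detection", "surveillance"},
    "lidar": {"mapping", "surveillance"},
    "long_range_radio": {"telemetry_relay"},
}

def assemble(required_features):
    """Greedily pick modules until every required feature is covered."""
    chosen, missing = [], set(required_features)
    while missing:
        # Pick the module covering the most still-missing features.
        name = max(MODULES, key=lambda m: len(MODULES[m] & missing))
        if not MODULES[name] & missing:
            raise ValueError(f"no module provides: {missing}")
        chosen.append(name)
        missing -= MODULES[name]
    return chosen

# A UAS for a fire-detection-plus-mapping field:
chosen = assemble({"fire_detection", "mapping"})
print(chosen)
```

    Greedy set cover is not optimal in general, but it captures the architectural point: the same catalogue yields different UAS configurations for different application fields.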

  10. Knowledge Innovation System: The Common Language.

    ERIC Educational Resources Information Center

    Rogers, Debra M. Amidon

    1993-01-01

    The Knowledge Innovation System is a management technique in which a networked enterprise uses knowledge flow as a collaborative advantage. Enterprise Management System-Architecture, which can be applied to collaborative activities, has five domains: economic, sociological, psychological, managerial, and technological. (SK)

  11. The Architecture of Personality

    ERIC Educational Resources Information Center

    Cervone, Daniel

    2004-01-01

    This article presents a theoretical framework for analyzing psychological systems that contribute to the variability, consistency, and cross-situational coherence of personality functioning. In the proposed knowledge-and-appraisal personality architecture (KAPA), personality structures and processes are delineated by combining 2 principles:…

  12. A novel configurable VLSI architecture design of window-based image processing method

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Sang, Hongshi; Shen, Xubang

    2018-03-01

    Most window-based image-processing architectures can implement only one specific kind of algorithm, such as 2D convolution, and therefore lack flexibility and breadth of application. In addition, improper handling of the image boundary can cause loss of accuracy or consume extra logic resources. To address these problems, this paper proposes a new VLSI architecture for window-based image processing operations that is configurable and accounts for the image boundary. An efficient technique is explored to manage the image borders by overlapping and flushing phases at the end of each row and each frame, which introduces no additional delay and reduces overhead in real-time applications. Reuse of on-chip memory data is maximized in order to reduce hardware complexity and external bandwidth requirements. By performing different scalar-function and reduction-function operations in a pipeline, the architecture can support a variety of window-based image processing applications. Compared with other reported structures, the new structure matches the performance of some and surpasses others; in particular, compared with the systolic array processor CWP at the same frequency, it achieves a speed increase of approximately 12.9%. The proposed parallel VLSI architecture was implemented in SMIC 0.18-μm CMOS technology; the maximum clock frequency, power consumption, and area are 125 MHz, 57 mW, and 104.8K gates, respectively, and the processing time is independent of the particular window-based algorithm mapped to the structure.
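
    The configurability claimed above amounts to one datapath parameterized by a scalar function (applied to each window element) and a reduction function (folding the results), plus boundary handling. The software sketch below models that idea with border replication (clamping) standing in for the hardware's boundary phases; it is an illustration of the concept, not the paper's RTL design.

```python
def window_op(img, k, scalar, reduce_fn):
    """Apply scalar() inside each k x k window and fold the results with reduce_fn."""
    h, w = len(img), len(img[0])
    r = k // 2
    clamp = lambda v, hi: max(0, min(v, hi))   # replicate border pixels
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [scalar(img[clamp(y + dy, h - 1)][clamp(x + dx, w - 1)])
                    for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            acc = vals[0]
            for v in vals[1:]:
                acc = reduce_fn(acc, v)
            row.append(acc)
        out.append(row)
    return out

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
# Same datapath, two different algorithms just by reconfiguring the functions:
box = window_op(img, 3, lambda p: p, lambda a, b: a + b)   # 3x3 box sum
erode = window_op(img, 3, lambda p: p, min)                # 3x3 erosion
print(box[1][1], erode[1][1])
```

    Swapping in a per-element multiply for `scalar` and addition for `reduce_fn` would likewise give 2D convolution, which is why a single configurable window engine can cover several algorithm families.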

  13. The NASA Mission Operations and Control Architecture Program

    NASA Technical Reports Server (NTRS)

    Ondrus, Paul J.; Carper, Richard D.; Jeffries, Alan J.

    1994-01-01

    The conflict between increasing space mission complexity and rapidly declining space mission budgets has created strong pressure to radically reduce the costs of designing and operating spacecraft. A key approach to achieving such reductions is reducing the development and operations costs of the supporting mission operations systems. One of the efforts through which the Communications and Data Systems Division at NASA Headquarters is meeting this challenge is the Mission Operations Control Architecture (MOCA) project. Technical direction of this effort has been delegated to the Mission Operations Division (MOD) of the Goddard Space Flight Center (GSFC). MOCA is to develop a mission control and data acquisition architecture, and supporting standards, to guide the development of future spacecraft and mission control facilities at GSFC. The architecture will reduce the need for around-the-clock operations staffing, obtain a high level of reuse of flight and ground software elements from mission to mission, and increase overall system flexibility by enabling the migration of appropriate functions from the ground to the spacecraft. The end results are to be an established way of designing the spacecraft-ground system interface for GSFC's in-house developed spacecraft, and a specification of the end-to-end spacecraft control process, including data structures, interfaces, and protocols, suitable for inclusion in solicitation documents for future flight spacecraft. A flight software kernel may be developed and maintained so that it can be offered as Government Furnished Equipment in solicitations. This paper describes the MOCA project, its current status, and the results to date.

  14. Toward a Theory of Adaptive Transfer: Expanding Disciplinary Discussions of "Transfer" in Second-Language Writing and Composition Studies

    ERIC Educational Resources Information Center

    DePalma, Michael-John; Ringer, Jeffrey M.

    2011-01-01

    In this paper, we argue that discussions of transfer in L2 writing and composition studies have focused primarily on the reuse of past learning and thus have not adequately accounted for the adaptation of learned writing knowledge in unfamiliar situations. In an effort to expand disciplinary discussions of transfer in L2 writing and composition…

  15. Ambient iron-mediated aeration (IMA) for water reuse.

    PubMed

    Deng, Yang; Englehardt, James D; Abdul-Aziz, Samer; Bataille, Tristan; Cueto, Josenrique; De Leon, Omar; Wright, Mary E; Gardinali, Piero; Narayanan, Aarthi; Polar, Jose; Tomoyuki, Shibata

    2013-02-01

    Global water shortages caused by rapidly expanding population, escalating water consumption, and dwindling water reserves have rendered water reuse a strategically significant approach to meet current and future water demand. This study is the first to our knowledge to evaluate the technical feasibility of iron-mediated aeration (IMA), an innovative, potentially economical, holistic, oxidizing co-precipitation process operating at room temperature, atmospheric pressure, and neutral pH, for water reuse. In the IMA process, dissolved oxygen (O₂) was continuously activated by zero-valent iron (Fe⁰) to produce reactive oxygen species (ROS) at ambient pH, temperature, and pressure. Concurrently, iron sludge was generated as a result of iron corrosion. Bench-scale tests were conducted to study the performance of IMA for treatment of secondary effluent, natural surface water, and simulated contaminated water. The following removal efficiencies were achieved: 82.2% of glyoxylic acid, ~100% of formaldehyde as an oxidation product of glyoxylic acid, 94% of Ca²⁺ and associated alkalinity, 44% of chemical oxygen demand (COD), 26% of electrical conductivity (EC), 98% of di-n-butyl phthalate (DBP), 80% of 17β-estradiol (E2), 45% of total nitrogen (TN), 96% of total phosphorus (TP), 99.8% of total Cr, >90% of total Ni, 99% of color, 3.2 log removal of total coliform, and 2.4 log removal of E. coli. Removal was attributed principally to chemical oxidation, precipitation, co-precipitation, coagulation, adsorption, and air stripping concurrently occurring during the IMA treatment. Results suggest that IMA is a promising treatment technology for water reuse. Copyright © 2012 Elsevier Ltd. All rights reserved.
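
    The abstract mixes percentage removals with "log removal" for microorganisms; the two are related by the convention that a k-log removal leaves a surviving fraction of 10⁻ᵏ. A small conversion sketch (the helper names are ours, not the study's):

```python
import math

def log_removal_to_percent(k):
    """A k-log removal leaves a surviving fraction of 10**-k."""
    return 100.0 * (1.0 - 10.0 ** -k)

def percent_to_log_removal(pct):
    """Inverse conversion: percent removed back to log-removal value."""
    return -math.log10(1.0 - pct / 100.0)

# The study's microbial removals, expressed as percentages:
coliform = log_removal_to_percent(3.2)   # 3.2-log total coliform
ecoli = log_removal_to_percent(2.4)      # 2.4-log E. coli
print(f"{coliform:.2f}% {ecoli:.2f}%")
```

    So the 3.2-log and 2.4-log figures correspond to roughly 99.94% and 99.60% removal, respectively.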

  16. Forward osmosis niches in seawater desalination and wastewater reuse.

    PubMed

    Valladares Linares, R; Li, Z; Sarp, S; Bucs, Sz S; Amy, G; Vrouwenvelder, J S

    2014-12-01

    This review focuses on the present status of forward osmosis (FO) niches in two main areas: seawater desalination and wastewater reuse. Specific applications for desalination and impaired-quality water treatment and reuse are described, and the benefits, advantages, challenges, costs, and knowledge gaps of FO hybrid systems are discussed. FO can play a role as a bridge integrating upstream and downstream water treatment processes, reducing the energy consumption of the entire desalination or water recovery and reuse process and thus achieving a sustainable solution for the water-energy nexus. FO hybrid membrane systems have been shown to have advantages over traditional membrane processes such as high-pressure reverse osmosis and nanofiltration for desalination and wastewater treatment: (i) chemical storage and feed-water systems may be reduced, lowering capital, operational, and maintenance costs; (ii) water quality is improved; (iii) process piping costs are reduced; (iv) treatment units are more flexible; and (v) the overall sustainability of the desalination and wastewater treatment process is higher. Nevertheless, major challenges keep FO systems from being a commercially viable technology, the most critical being the development of a high-flux membrane capable of maintaining an elevated salt rejection and a reduced internal concentration polarization effect, and the availability of appropriate draw solutions (cost-effective and non-toxic) that can be recirculated via an efficient recovery process. This review article highlights the features of hybrid FO systems and provides the state-of-the-art applications in the water industry in a novel classification, based on the latest developments toward scaling up these systems. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Genetic control of inflorescence architecture in legumes

    PubMed Central

    Benlloch, Reyes; Berbel, Ana; Ali, Latifeh; Gohari, Gholamreza; Millán, Teresa; Madueño, Francisco

    2015-01-01

    The architecture of the inflorescence, the shoot system that bears the flowers, is a main component of the huge diversity of forms found in flowering plants. Inflorescence architecture also has a strong impact on the production of fruits and seeds, and on crop management, two highly relevant agronomical traits. Elucidating the genetic networks that control inflorescence development, and how they vary between different species, is essential to understanding the evolution of plant form and to being able to breed key architectural traits in crop species. Inflorescence architecture depends on the identity and activity of the meristems in the inflorescence apex, which determine when flowers are formed, how many are produced, and their relative position on the inflorescence axis. Arabidopsis thaliana, where the genetic control of inflorescence development is best known, has a simple inflorescence, in which the primary inflorescence meristem directly produces the flowers, which are thus borne on the main inflorescence axis. In contrast, legumes represent a more complex inflorescence type, the compound inflorescence, in which flowers are not directly borne on the main inflorescence axis but, instead, are formed by secondary or higher-order inflorescence meristems. Studies in model legumes such as pea (Pisum sativum) or Medicago truncatula have led to a rather good understanding of the genetic control of the development of the legume compound inflorescence. In addition, the increasing availability of genetic and genomic tools for legumes is allowing this knowledge to be rapidly extended to other grain legume crops. This review aims to describe the current knowledge of the genetic network controlling inflorescence development in legumes. It also discusses how the combination of this knowledge with the use of emerging genomic tools and resources may allow rapid advances in the breeding of grain legume crops. PMID:26257753

  18. Mercury: An Example of Effective Software Reuse for Metadata Management, Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.

    2008-12-01

    Mercury is a federated metadata harvesting, data discovery, and access tool based on both open source packages and custom developed software. Though originally developed for NASA, the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports the reuse of metadata by enabling searching across a range of metadata specifications and standards, including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow users to perform simple, fielded, spatial, and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve software reusability across the 12 projects which currently fund its continuing development. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture has three major reusable components: a harvester engine, an indexing system, and a user interface component. The harvester engine is responsible for harvesting metadata records from distributed servers around the USA and around the world. The harvester software is packaged in such a way that all Mercury projects use the same harvester scripts, with each project driven by a set of project-specific configuration files. The harvested files are structured metadata records that are indexed consistently against the search library API, so that the system can offer simple, fielded, spatial, and temporal search capabilities.
    This backend component is supported by a flexible, easy-to-use graphical user interface driven by cascading style sheets, which makes reusable design implementation even simpler. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as the Thesaurus Service, Gazetteer Web Service, and UDDI Directory Services. The software also provides various search services, including RSS, Geo-RSS, OpenSearch, Web Services, and Portlets, as well as an integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC) and integrated visualization tools. Other features include filtering and dynamic sorting of search results, bookmarkable search results, and the ability to save, retrieve, and modify search criteria.
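
    The reuse pattern described above — one shared harvester engine with each project contributing only configuration — can be sketched as follows. The configuration fields, endpoint URL, and `fetch` stand-in are invented for illustration; they are not Mercury's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class ProjectConfig:
    name: str
    endpoints: list          # servers to harvest from
    metadata_format: str     # e.g. "FGDC", "EML", "ISO-19115"

def harvest(config, fetch):
    """Run the shared engine for one project; `fetch` abstracts the network."""
    records = []
    for url in config.endpoints:
        for raw in fetch(url):
            records.append({"project": config.name,
                            "format": config.metadata_format,
                            "source": url,
                            "record": raw})
    return records

# Each project supplies only a configuration; the engine code is shared.
ornl = ProjectConfig("ORNL-DAAC", ["http://daac.example/md"], "EML")
fake_fetch = lambda url: ["<metadata/>", "<metadata/>"]   # stand-in for the network
index = harvest(ornl, fake_fetch)
print(len(index), index[0]["format"])
```

    Keeping the engine free of project-specific logic is what lets twelve differently funded projects share one harvester codebase.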

  19. Mercury: An Example of Effective Software Reuse for Metadata Management, Data Discovery and Access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devarakonda, Ranjeet

    2008-01-01

    Mercury is a federated metadata harvesting, data discovery, and access tool based on both open source packages and custom developed software. Though originally developed for NASA, the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports the reuse of metadata by enabling searching across a range of metadata specifications and standards, including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow users to perform simple, fielded, spatial, and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve software reusability across the 12 projects which currently fund its continuing development. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture has three major reusable components: a harvester engine, an indexing system, and a user interface component. The harvester engine is responsible for harvesting metadata records from distributed servers around the USA and around the world. The harvester software is packaged in such a way that all Mercury projects use the same harvester scripts, with each project driven by a set of project-specific configuration files. The harvested files are structured metadata records that are indexed consistently against the search library API, so that the system can offer simple, fielded, spatial, and temporal search capabilities.
    This backend component is supported by a flexible, easy-to-use graphical user interface driven by cascading style sheets, which makes reusable design implementation even simpler. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as the Thesaurus Service, Gazetteer Web Service, and UDDI Directory Services. The software also provides various search services, including RSS, Geo-RSS, OpenSearch, Web Services, and Portlets, as well as an integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC) and integrated visualization tools. Other features include filtering and dynamic sorting of search results, bookmarkable search results, and the ability to save, retrieve, and modify search criteria.

  20. SIGMA: A Knowledge-Based Simulation Tool Applied to Ecosystem Modeling

    NASA Technical Reports Server (NTRS)

    Dungan, Jennifer L.; Keller, Richard; Lawless, James G. (Technical Monitor)

    1994-01-01

    The need for better technology to facilitate building, sharing, and reusing models is generally recognized within the ecosystem modeling community. The Scientists' Intelligent Graphical Modelling Assistant (SIGMA) creates an environment for model building, sharing, and reuse which provides an alternative to more conventional approaches, which too often yield poorly documented, awkwardly structured model code. The SIGMA interface presents the user with a list of model quantities which can be selected for computation. Equations to calculate the model quantities may be chosen from an existing library of ecosystem modeling equations, or built using a specialized equation editor. Inputs for the equations may be supplied by data or by calculation from other equations. Each variable and equation is expressed using ecological terminology and scientific units, and is documented with explanatory descriptions and optional literature citations. Automatic scientific unit conversion is supported, and only physically-consistent equations are accepted by the system. The system uses knowledge-based semantic conditions to decide which equations in its library make sense to apply in a given situation, and supplies these to the user for selection. The equations and variables are graphically represented as a flow diagram which provides a complete summary of the model. Forest-BGC, a stand-level model that simulates photosynthesis and evapotranspiration for conifer canopies, was originally implemented in Fortran and subsequently re-implemented using SIGMA. The SIGMA version reproduces daily results and also provides a knowledge base which greatly facilitates inspection, modification, and extension of Forest-BGC.
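
    SIGMA's rule that "only physically-consistent equations are accepted" can be illustrated by tracking each quantity's dimensions as an exponent map and requiring both sides of a candidate equation to match. The representation and the evapotranspiration-style example below are our own illustration, not SIGMA's internal encoding.

```python
def mul(u1, u2):
    """Multiply two quantities by adding their dimension-exponent maps."""
    out = dict(u1)
    for dim, exp in u2.items():
        out[dim] = out.get(dim, 0) + exp
        if out[dim] == 0:
            del out[dim]
    return out

# Assumed base dimensions (kg, m, s) for a transpiration-style equation:
# flux = conductance * vapor concentration gradient.
conductance = {"m": 1, "s": -1}       # m s^-1
gradient = {"kg": 1, "m": -3}         # kg m^-3
flux = {"kg": 1, "m": -2, "s": -1}    # kg m^-2 s^-1

accepted = mul(conductance, gradient) == flux      # dimensionally consistent
rejected = mul(conductance, conductance) == flux   # inconsistent candidate
print(accepted, rejected)
```

    A check like this is what lets a modeling assistant refuse an equation whose units cannot balance, before any numerical work is done.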

  1. A Tailored Ontology Supporting Sensor Implementation for the Maintenance of Industrial Machines

    PubMed Central

    Belkadi, Farouk; Bernard, Alain

    2017-01-01

    The long-term productivity of an industrial machine is improved by condition-based maintenance strategies. To do this, the integration of sensors and other cyber-physical devices is necessary in order to capture and analyze a machine’s condition through its lifespan. Thus, choosing the best sensor is a critical step to ensure the efficiency of the maintenance process. Indeed, considering the variety of sensors and their features and performance, a formal classification of the sensor domain knowledge is crucial. This classification facilitates the search for, and reuse of, solutions during the design of a new maintenance service. Following a Knowledge Management methodology, the paper proposes and develops a new sensor ontology that structures the domain knowledge, covering both theoretical and experimental sensor attributes. An industrial case study is conducted to validate the proposed ontology and to demonstrate its utility as a guideline to ease the search for suitable sensors. Based on the ontology, the final solution will be implemented in a shared repository connected to legacy CAD (computer-aided design) systems. The selection of the best sensor is, firstly, obtained by matching application requirements against the sensor specifications proposed by this sensor repository. Then, it is refined from the experimentation results. The achieved solution is recorded in the sensor repository for future reuse. As a result, the time and cost of the design process for new condition-based maintenance services are reduced. PMID:28885592
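
    The "matching of application requirements and sensor specifications" step described above can be sketched as filtering a catalogue on hard constraints and ranking the survivors. The sensor attributes, names, and values below are invented for illustration and are not drawn from the paper's ontology.

```python
# Hypothetical sensor catalogue with a few comparable attributes.
SENSORS = [
    {"name": "acc-A", "measurand": "vibration", "range_g": 16, "temp_max_c": 85,  "cost": 40},
    {"name": "acc-B", "measurand": "vibration", "range_g": 50, "temp_max_c": 125, "cost": 90},
    {"name": "mic-C", "measurand": "acoustic",  "range_g": 0,  "temp_max_c": 70,  "cost": 25},
]

def match(requirements):
    """Return sensors meeting every hard requirement, cheapest first."""
    ok = [s for s in SENSORS
          if s["measurand"] == requirements["measurand"]
          and s["range_g"] >= requirements["min_range_g"]
          and s["temp_max_c"] >= requirements["ambient_max_c"]]
    return sorted(ok, key=lambda s: s["cost"])

# Vibration monitoring near a hot spindle: the 100 C ambient rules out acc-A.
req = {"measurand": "vibration", "min_range_g": 10, "ambient_max_c": 100}
names = [s["name"] for s in match(req)]
print(names)
```

    An ontology adds value beyond such flat filtering by making attribute meanings and unit conventions explicit, so specifications from different vendors become comparable at all.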

  2. A knowledge base architecture for distributed knowledge agents

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel; Walls, Bryan

    1990-01-01

    A tuple space based, object-oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed, depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms, as in the LINDA model, as well as using better-known message passing mechanisms. An implementation of the model is presented, describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
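
    The Linda-style coordination mentioned above rests on three operations: `out` (publish a tuple), `rd` (read a matching tuple), and `in` (read and remove one). A minimal single-process sketch, standing in for the distributed, database-backed space the abstract describes (the power-network tuples are illustrative, not from the paper):

```python
class TupleSpace:
    """Toy Linda-style tuple space; None in a template is a wildcard."""

    def __init__(self):
        self._tuples = []

    def out(self, tup):
        """Publish a tuple into the space."""
        self._tuples.append(tup)

    def _find(self, template):
        for tup in self._tuples:
            if len(tup) == len(template) and all(
                    t is None or t == v for t, v in zip(template, tup)):
                return tup
        return None

    def rd(self, template):
        """Read a matching tuple without removing it."""
        return self._find(template)

    def in_(self, template):
        """Read and remove a matching tuple ('in' is a Python keyword)."""
        tup = self._find(template)
        if tup is not None:
            self._tuples.remove(tup)
        return tup

# Two "agents" coordinating through the space:
space = TupleSpace()
space.out(("load", "bus-7", 4.2))           # a monitoring agent publishes
taken = space.in_(("load", "bus-7", None))  # a scheduling agent consumes
print(taken, space.rd(("load", "bus-7", None)))
```

    Because agents only ever touch the shared space, producers and consumers need not know each other's identities, which is exactly the decoupling the architecture exploits.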

  3. The architecture challenge: Future artificial-intelligence systems will require sophisticated architectures, and knowledge of the brain might guide their construction.

    PubMed

    Baldassarre, Gianluca; Santucci, Vieri Giuliano; Cartoni, Emilio; Caligiore, Daniele

    2017-01-01

    In this commentary, we highlight a crucial challenge posed by the proposal of Lake et al. to introduce key elements of human cognition into deep neural networks and future artificial-intelligence systems: the need to design effective sophisticated architectures. We propose that looking at the brain is an important means of facing this great challenge.

  4. A Novel Ontology Approach to Support Design for Reliability considering Environmental Effects

    PubMed Central

    Sun, Bo; Li, Yu; Ye, Tianyuan

    2015-01-01

    Environmental effects are not considered sufficiently in product design, and the reliability problems they cause are very prominent. This paper proposes a method for applying an ontology approach in product design, so that environmental-effects knowledge can be reused during product reliability design and analysis. First, the relationship between environmental effects and product reliability is analyzed. Then an environmental effects ontology is designed to describe the domain knowledge of environmental effects, and related concepts are formally defined using the ontology approach. This model can be applied to organize environmental-effects knowledge in different environments. Finally, rubber seals used in a subhumid acid-rain environment are taken as an example to illustrate the application of the ontological model to reliability design and analysis. PMID:25821857

  5. A novel ontology approach to support design for reliability considering environmental effects.

    PubMed

    Sun, Bo; Li, Yu; Ye, Tianyuan; Ren, Yi

    2015-01-01

    Environmental effects are not considered sufficiently in product design, and the reliability problems they cause are very prominent. This paper proposes a method for applying an ontology approach in product design, so that environmental-effects knowledge can be reused during product reliability design and analysis. First, the relationship between environmental effects and product reliability is analyzed. Then an environmental effects ontology is designed to describe the domain knowledge of environmental effects, and related concepts are formally defined using the ontology approach. This model can be applied to organize environmental-effects knowledge in different environments. Finally, rubber seals used in a subhumid acid-rain environment are taken as an example to illustrate the application of the ontological model to reliability design and analysis.

  6. A Telemetry Browser Built with Java Components

    NASA Astrophysics Data System (ADS)

    Poupart, E.

    In the context of CNES scientific balloon campaigns and the telemetry survey field, a generic telemetry processing product, called TelemetryBrowser in the following, was developed by reusing COTS components, most of them Java components. Connection between those components relies on a software architecture based on parameter producers and parameter consumers: the former transmit parameter values to the latter, which register with them. All of these producers and consumers can be spread over the network thanks to Corba, and run on every kind of workstation thanks to Java. This gives a very powerful means of adapting to constraints such as network bandwidth or workstation processing power and memory. It is also very useful for displaying and correlating, at the same time, information coming from multiple and various sources. An important point of this architecture is that the coupling between parameter producers and parameter consumers is reduced to a minimum and that transmission of information on the network is asynchronous. So, if a parameter consumer goes down or runs slowly, there is no consequence for the other consumers, because producers do not wait for their consumers to finish processing data before sending it to other consumers. Another interesting point is that parameter producers, also called TelemetryServers in the following, are generated nearly automatically from a telemetry description using the Flavor [i] component. Keywords: Java components, Corba, distributed application, OpenORB [ii], software reuse, COTS, Internet, Flavor.
    [i] Flavor (Formal Language for Audio-Visual Object Representation) is an object-oriented media representation language being developed at Columbia University. It is designed as an extension of Java and C++ and simplifies the development of applications that involve a significant media-processing component (encoding, decoding, editing, manipulation, etc.) by providing bitstream representation semantics. (flavor.sourceforge.net) [ii] OpenORB provides a Java implementation of the OMG Corba 2.4.2 specification (openorb.sourceforge.net)
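
    The producer/consumer decoupling described in this record — a slow or dead consumer must never stall the producer or its peers — can be sketched with a bounded per-consumer queue and non-blocking delivery. This is an illustrative model of the design point, not CNES code; the parameter name and queue sizes are invented.

```python
import queue

class ParameterProducer:
    """Pushes each parameter value to every registered consumer queue."""

    def __init__(self):
        self._consumers = []

    def register(self, maxsize=8):
        """A consumer registers and receives its own bounded queue."""
        q = queue.Queue(maxsize=maxsize)
        self._consumers.append(q)
        return q

    def publish(self, name, value):
        for q in self._consumers:
            try:
                q.put_nowait((name, value))   # never block on a slow consumer
            except queue.Full:
                pass                          # drop for that consumer only

producer = ParameterProducer()
fast = producer.register()
slow = producer.register(maxsize=2)           # this consumer backs up quickly

for i in range(5):
    producer.publish("altitude_m", 30000 + i)

print(fast.qsize(), slow.qsize())   # the slow consumer lost samples; fast kept all
```

    The design choice is the same as in the abstract: delivery is asynchronous and per-consumer, so backpressure from one display never propagates to the telemetry source or to other displays.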

  7. Modular System to Enable Extravehicular Activity

    NASA Technical Reports Server (NTRS)

    Sargusingh, Miriam J.

    2012-01-01

    The ability to perform extravehicular activity (EVA), both human and robotic, has been identified as a key component of space missions, supporting such operations as assembly and maintenance of space systems (e.g., construction and maintenance of the International Space Station) and unscheduled activities to repair elements of the transportation and habitation systems that can only be accessed externally via unpressurized areas. In order to make human transportation beyond low Earth orbit (LEO) practical, efficiencies must be incorporated into the integrated transportation systems to reduce system mass and operational complexity. Affordability is also a key aspect of space system development; it could be achieved through commonality, modularity, and component reuse. Another key need identified for the EVA system was the ability to produce flight-worthy hardware quickly to support early missions and near-Earth technology demonstrations. This paper details a conceptual architecture for a modular EVA system that would meet these stated needs for an EVA capability that is affordable and could be produced relatively quickly. Operational concepts were developed to elaborate on the defined needs and to define the key capabilities, operational and design constraints, and general timelines. The operational concept led to a high-level design concept for a module that interfaces with various space transportation elements and contains the hardware and systems required to support human and telerobotic EVA; the module would not be self-propelled and would rely on an interfacing element for consumable resources. The conceptual architecture was then compared to the EVA systems used in the Space Shuttle Orbiter and on the International Space Station to develop high-level design concepts that incorporate opportunities for cost savings through hardware reuse and for quick production through the use of existing technologies and hardware designs. An upgrade option was included to make use of developing suit-port technologies.

  8. Meeting the challenges of the digital medical enterprise of the future by reusing enterprise software components

    NASA Astrophysics Data System (ADS)

    Shani, Uri; Kol, Tomer; Shachor, Gal

    2004-04-01

    Managing medical digital information objects, and in particular medical images, is an enterprise-grade problem. First, there is the sheer amount of digital data generated by the proliferation of digital (and film-free) medical imaging. Second, the managing software must provide the high availability, recoverability, and manageability found only in the most business-critical systems. Indeed, such requirements are borrowed from the business enterprise world, and the solution to the medical information management problem should likewise employ the same software tools, middleware, and architectures. It is safe to say that all first-line medical PACS products strive to provide a solution for these challenging requirements. The DICOM standard has been a prime enabler of such solutions: DICOM created the interconnectivity that made it possible for a PACS service to manage millions of exams consisting of trillions of images. With the more comprehensive IHE architecture, the enterprise is expanded into a multi-facility regional conglomerate, which places extreme demands on the data management system. HIPAA legislation adds considerable challenges concerning security, privacy, and other legal issues, which aggravate the situation. In this paper, we first present what, in our view, should be the general requirements for a first-line medical PACS, from the perspective of an enterprise medical imaging storage and management solution. While these requirements can be met by homegrown implementations, we suggest looking at the existing technologies that have emerged in recent years to meet exactly these challenges in the business world. We present the evolutionary process which led to the design and implementation of a medical object management subsystem: an enterprise medical imaging solution built upon the respective technological components. The system answers all these challenges simply by not reinventing wheels, but rather reusing the best "wheels" for the job. Relying on such middleware components allowed us to concentrate on added value for this specific problem domain.

  9. Computer Architects.

    ERIC Educational Resources Information Center

    Betts, Janelle Lyon

    2001-01-01

    Describes a high school art assignment in which students utilize Appleworks or Claris Works to design their own house, after learning about architectural styles and how to use the computer program. States that the project develops student computer skills and increases student knowledge about architecture. (CMK)

  10. The Electronic Logbook for the Information Storage of ATLAS Experiment at LHC (ELisA)

    NASA Astrophysics Data System (ADS)

    Corso Radu, A.; Lehmann Miotto, G.; Magnoni, L.

    2012-12-01

A large experiment like ATLAS at LHC (CERN), with over three thousand members and a shift crew of 15 people running the experiment 24/7, needs an easy and reliable tool to gather all the information concerning the experiment's development, installation, deployment and exploitation over its lifetime. With the increasing number of users and the accumulation of stored information since the experiment's start-up, the electronic logbook currently in use, ATLOG, started to show its limitations in terms of speed and usability. Its monolithic architecture makes the maintenance and implementation of new functionality a difficult, almost impossible, process. A new tool, ELisA, has been developed to replace the existing ATLOG. It is based on modern web technologies: the Spring framework with a Model-View-Controller architecture was chosen, which helps in building flexible, easy-to-maintain applications. The new tool implements all features of the old electronic logbook with increased performance and better graphics; it uses the same database back-end for portability reasons. In addition, several new requirements have been accommodated that could not be implemented in ATLOG. This paper describes the architecture, implementation and performance of ELisA, with particular emphasis on the choices that yielded a scalable and very fast system and on the aspects that could be reused in different contexts to build a similar application.

  11. Systems biology driven software design for the research enterprise

    PubMed Central

    Boyle, John; Cavnor, Christopher; Killcoyne, Sarah; Shmulevich, Ilya

    2008-01-01

Background In systems biology, and many other areas of research, there is a need for the interoperability of tools and data sources that were not originally designed to be integrated. Due to the interdisciplinary nature of systems biology, and its association with high throughput experimental platforms, there is an additional need to continually integrate new technologies. As scientists work in isolated groups, integration with other groups is rarely a consideration when building the required software tools. Results We illustrate an approach, through the discussion of a purpose-built software architecture, which allows disparate groups to reuse tools and access data sources in a common manner. The architecture allows for: the rapid development of distributed applications; interoperability, so it can be used by a wide variety of developers and computational biologists; development using standard tools, so that it is easy to maintain and does not require a large development effort; extensibility, so that new technologies and data types can be incorporated; and non-intrusive development, insofar as researchers need not adhere to a pre-existing object model. Conclusion By using a relatively simple integration strategy, based upon a common identity system and dynamically discovered interoperable services, a light-weight software architecture can become the focal point through which scientists can both get access to and analyse the plethora of experimentally derived data. PMID:18578887
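The integration strategy described - a common identity system plus dynamically discovered interoperable services - can be sketched in miniature. The registry class, namespace scheme, and service names below are illustrative assumptions, not the paper's actual API:

```python
# Hypothetical sketch: a shared identity scheme lets disparate tools refer to
# the same entity, while a registry lets clients discover services at runtime.

class ServiceRegistry:
    """Maps capability names to providers, discovered dynamically."""

    def __init__(self):
        self._services = {}

    def register(self, name, provider):
        self._services.setdefault(name, []).append(provider)

    def discover(self, name):
        # Return every provider advertising this capability.
        return list(self._services.get(name, []))


def common_id(namespace, local_id):
    """A common identity system: qualify local IDs with a shared namespace."""
    return f"{namespace}:{local_id}"


registry = ServiceRegistry()
registry.register("expression-data", lambda ident: {"id": ident, "values": [1.2, 3.4]})

gene = common_id("entrez", "7157")        # e.g. TP53 under a shared namespace
providers = registry.discover("expression-data")
results = [p(gene) for p in providers]    # each tool resolves the common ID
```

The point of the design is that neither side needs the other's object model: tools only agree on the identifier scheme and the discovery mechanism.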

  12. A flexible algorithm for calculating pair interactions on SIMD architectures

    NASA Astrophysics Data System (ADS)

    Páll, Szilárd; Hess, Berk

    2013-12-01

    Calculating interactions or correlations between pairs of particles is typically the most time-consuming task in particle simulation or correlation analysis. Straightforward implementations using a double loop over particle pairs have traditionally worked well, especially since compilers usually do a good job of unrolling the inner loop. In order to reach high performance on modern CPU and accelerator architectures, single-instruction multiple-data (SIMD) parallelization has become essential. Avoiding memory bottlenecks is also increasingly important and requires reducing the ratio of memory to arithmetic operations. Moreover, when pairs only interact within a certain cut-off distance, good SIMD utilization can only be achieved by reordering input and output data, which quickly becomes a limiting factor. Here we present an algorithm for SIMD parallelization based on grouping a fixed number of particles, e.g. 2, 4, or 8, into spatial clusters. Calculating all interactions between particles in a pair of such clusters improves data reuse compared to the traditional scheme and results in a more efficient SIMD parallelization. Adjusting the cluster size allows the algorithm to map to SIMD units of various widths. This flexibility not only enables fast and efficient implementation on current CPUs and accelerator architectures like GPUs or Intel MIC, but it also makes the algorithm future-proof. We present the algorithm with an application to molecular dynamics simulations, where we can also make use of the effective buffering the method introduces.
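The cluster-pair idea can be illustrated with a small vectorized sketch: all M×M interactions between two fixed-size clusters are evaluated at once, which is what maps onto SIMD lanes. The cluster size, cut-off, and stand-in pair potential below are our own choices, not the authors' implementation:

```python
import numpy as np

M = 4           # cluster size; in a real kernel this would match the SIMD width
cutoff = 1.0    # pair interactions beyond this distance are ignored

def cluster_pair_forces(xi, xj):
    """All-vs-all interactions between two clusters of M particles each.

    xi, xj: (M, 3) position arrays. Returns (M, 3) forces on cluster i from a
    simple truncated 1/r^2 repulsion (an illustrative stand-in potential).
    """
    d = xi[:, None, :] - xj[None, :, :]        # (M, M, 3) displacement vectors
    r2 = np.sum(d * d, axis=-1)                # (M, M) squared distances
    mask = (r2 < cutoff**2) & (r2 > 0)         # apply cut-off; skip zero-distance pairs
    inv_r3 = np.zeros_like(r2)
    inv_r3[mask] = r2[mask] ** -1.5            # 1/r^3 factor only inside cut-off
    return np.sum(d * inv_r3[..., None], axis=1)
```

Because every lane in the M×M block does the same arithmetic, data is loaded once per cluster and reused across all M partners, which is the data-reuse gain the abstract describes.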

  13. An empirical analysis of ontology reuse in BioPortal.

    PubMed

    Ochs, Christopher; Perl, Yehoshua; Geller, James; Arabandi, Sivaram; Tudorache, Tania; Musen, Mark A

    2017-07-01

Biomedical ontologies often reuse content (i.e., classes and properties) from other ontologies. Content reuse enables a consistent representation of a domain, and reusing content can save an ontology author significant time and effort. Prior studies have investigated the existence of reused terms among the ontologies in the NCBO BioPortal, but to date there has not been a study investigating how the ontologies in BioPortal utilize reused content in the modeling of their own content. In this study we investigate how 355 ontologies hosted in the NCBO BioPortal reuse content from other ontologies for the purposes of creating new ontology content. We identified 197 ontologies that reuse content. Among these ontologies, 108 utilize reused classes in the modeling of their own classes and 116 utilize reused properties in class restrictions. Current utilization of reuse and quality issues related to reuse are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Revisions to the JDL data fusion model

    NASA Astrophysics Data System (ADS)

    Steinberg, Alan N.; Bowman, Christopher L.; White, Franklin E.

    1999-03-01

The Data Fusion Model maintained by the Joint Directors of Laboratories (JDL) Data Fusion Group is the most widely-used method for categorizing data fusion-related functions. This paper discusses the current effort to revise and expand this model to facilitate the cost-effective development, acquisition, integration and operation of multi-sensor/multi-source systems. Data fusion involves combining information - in the broadest sense - to estimate or predict the state of some aspect of the universe. These may be represented in terms of attributive and relational states. If the job is to estimate the state of people, it can be useful to include consideration of informational and perceptual states in addition to the physical state. Developing cost-effective multi-source information systems requires a method for specifying data fusion processing and control functions, interfaces, and associated databases. The lack of common engineering standards for data fusion systems has been a major impediment to integration and re-use of available technology: current developments do not lend themselves to objective evaluation, comparison or re-use. This paper reports on proposed revisions and expansions of the JDL Data Fusion model to remedy some of these deficiencies. These involve broadening the functional model and related taxonomy beyond the original military focus, and integrating the Data Fusion Tree Architecture model for system description, design and development.

  15. From Architectural Photogrammetry Toward Digital Architectural Heritage Education

    NASA Astrophysics Data System (ADS)

    Baik, A.; Alitany, A.

    2018-05-01

This paper considers the potential of using the documentation approach proposed for heritage buildings in Historic Jeddah, Saudi Arabia (as a case study), employing close-range photogrammetry / Architectural Photogrammetry techniques, as a new academic experiment in digital architectural heritage education. Moreover, unlike most engineering education techniques related to architecture, this paper focuses on 3-D data acquisition technology as a tool to document and to teach the principles of digital architectural heritage documentation. The objective of this research is to integrate 3-D modelling and visualisation knowledge for the purposes of identifying, designing and evaluating an effective engineering educational experiment. Furthermore, students will learn and understand the characteristics of the historical building while learning more advanced 3-D modelling and visualisation techniques. It can be argued that these technologies alone can hardly improve education; therefore, it is important to integrate them into an educational framework, in line with the educational ethos of the academic discipline. Recently, a number of these technologies and methods have been used effectively in the education sector and for other purposes, such as in virtual museums. However, these methods do not directly coincide with traditional education and the teaching of architecture. This research introduces the proposed approach as a new academic experiment in the architecture education sector. The new teaching approach will be based on Architectural Photogrammetry to provide semantically rich models. The academic experiment will require students to have suitable knowledge of photogrammetry applications to engage with the process.

  16. A Collaborative Reasoning Maintenance System for a Reliable Application of Legislations

    NASA Astrophysics Data System (ADS)

    Tamisier, Thomas; Didry, Yoann; Parisot, Olivier; Feltz, Fernand

Decision support systems are nowadays used to disentangle all kinds of intricate situations and to perform sophisticated analyses. Moreover, they are applied in areas where the knowledge can be heterogeneous, partially unformalized, implicit, or diffuse. The representation and management of this knowledge thus become the key point in ensuring the proper functioning of the system and keeping an intuitive view of its expected behavior. This paper presents a generic architecture for implementing knowledge-based systems used in collaborative business, where the knowledge is organized into different databases according to the usage, persistence and quality of the information. This approach is illustrated with Cadral, a customizable automated tool built on this architecture and used for processing family benefit applications at the National Family Benefits Fund of the Grand-Duchy of Luxembourg.

  17. GROUNDWATER RECHARGE AND CHEMICAL ...

    EPA Pesticide Factsheets

The existing knowledge base regarding the presence and significance of chemicals foreign to the subsurface environment is large and growing - the papers in this volume serving as recent testament. But complex questions with few answers surround the unknowns regarding the potential for environmental or human health effects from trace levels of xenobiotics in groundwater, especially groundwater augmented with treated wastewater. Public acceptance for direct or indirect groundwater recharge using treated municipal wastewater (especially sewage) spans the spectrum from unquestioned embrace to outright rejection. In this article, I detour around the issues most commonly discussed for groundwater recharge and instead focus on some of the less-recognized issues - those that emanate from the mysteries created at the many literal and virtual interfaces involved with the subsurface world. My major objective is to catalyze discussion that advances our understanding of the barriers to public acceptance of wastewater reuse - with its ultimate culmination in direct reuse for drinking. I pose what could be a key question as to whether much of the public's frustration or ambivalence in its decision-making process for accepting or rejecting water reuse (for various purposes including personal use) emanates from fundamental inaccuracies, misrepresentation, or oversimplification of what water 'is' and how it functions in the environment - just what exactly is the 'water cyc

  18. EPA Scientific Knowledge Management Assessment and ...

    EPA Pesticide Factsheets

A series of activities has been conducted by a core group of EPA scientists from across the Agency. The activities were initiated in 2012, and their focus was to increase the reuse and interoperability of science software at EPA. The need for increased reuse and interoperability is linked to the increased complexity of environmental assessments in the 21st century. This complexity is manifest in the form of problems that require integrated, multi-disciplinary solutions. To develop these solutions (i.e., science software systems), it is necessary to integrate software developed by disparate groups representing a variety of science domains; thus, reuse and interoperability become imperative. This report briefly describes the chronology of activities conducted by the group of scientists to provide context for the primary purpose of this report, that is, to describe the proceedings and outcomes of the latest activity, a workshop entitled “Workshop on Advancing US EPA integration of environmental and information sciences”. The EPA has been lagging in digital maturity relative to the private sector and even other government agencies. This report helps begin the process of improving the Agency's use of digital technologies, especially in the areas of efficiency and transparency. This report contributes to SHC 1.61.2.

  19. New knowledge network evaluation method for design rationale management

    NASA Astrophysics Data System (ADS)

    Jing, Shikai; Zhan, Hongfei; Liu, Jihong; Wang, Kuan; Jiang, Hao; Zhou, Jingtao

    2015-01-01

Current design rationale (DR) systems have not demonstrated the value of the approach in practice, since little attention has been paid to evaluation methods for DR knowledge. To systematize the knowledge management process for future computer-aided DR applications, a prerequisite is to provide a measure for DR knowledge. In this paper, a new knowledge network evaluation method for DR management is presented. The method characterizes the value of DR knowledge from four perspectives, namely, the design rationale structure scale, association knowledge and reasoning ability, degree of design justification support, and degree of knowledge representation conciseness. The comprehensive value of DR knowledge is also measured by the proposed method. To validate the proposed method, different styles of DR knowledge network and the performance of the proposed measure are discussed. The evaluation method has been applied in two realistic design cases and compared with structural measures. The research proposes a DR knowledge evaluation method that can provide objective metrics and a selection basis for DR knowledge reuse during the product design process. In addition, the method is shown to provide effective guidance and support for the application and management of DR knowledge.

  20. Attitudes and norms affecting scientists’ data reuse

    PubMed Central

    Curty, Renata Gonçalves; Specht, Alison; Grant, Bruce W.; Dalton, Elizabeth D.

    2017-01-01

    The value of sharing scientific research data is widely appreciated, but factors that hinder or prompt the reuse of data remain poorly understood. Using the Theory of Reasoned Action, we test the relationship between the beliefs and attitudes of scientists towards data reuse, and their self-reported data reuse behaviour. To do so, we used existing responses to selected questions from a worldwide survey of scientists developed and administered by the DataONE Usability and Assessment Working Group (thus practicing data reuse ourselves). Results show that the perceived efficacy and efficiency of data reuse are strong predictors of reuse behaviour, and that the perceived importance of data reuse corresponds to greater reuse. Expressed lack of trust in existing data and perceived norms against data reuse were not found to be major impediments for reuse contrary to our expectations. We found that reported use of models and remotely-sensed data was associated with greater reuse. The results suggest that data reuse would be encouraged and normalized by demonstration of its value. We offer some theoretical and practical suggestions that could help to legitimize investment and policies in favor of data sharing. PMID:29281658

  1. Neurally and mathematically motivated architecture for language and thought.

    PubMed

    Perlovsky, L I; Ilin, R

    2010-01-01

Neural structures of interaction between thinking and language are unknown. This paper suggests a possible architecture motivated by neural and mathematical considerations. A mathematical requirement of computability imposes significant constraints on possible architectures consistent with brain neural structure and with a wealth of psychological knowledge. How does language interact with cognition? Do we think with words, or is thinking independent from language, with words being just labels for decisions? Why is language learned by the age of 5 or 7, while acquiring the knowledge needed to use this language takes a lifetime? This paper discusses hierarchical aspects of language and thought and argues that high-level abstract thinking is impossible without language. We discuss a mathematical technique that can model the joint language-thought architecture, while overcoming previously encountered difficulties of computability. This architecture explains a contradiction between the human ability for rational, thoughtful decisions and the irrationality of human thinking revealed by Tversky and Kahneman; a crucial role in this contradiction might be played by language. The proposed model resolves long-standing issues: how the brain learns correct word-object associations, and why animals do not talk and think like people. We propose the role played by language emotionality in its interaction with thought. We relate the mathematical model to Humboldt's "firmness" of languages, and discuss the possible influence of a language's grammar on its emotionality. Psychological and brain imaging experiments related to the proposed model are discussed. Future theoretical and experimental research is outlined.

  2. Neurally and Mathematically Motivated Architecture for Language and Thought

    PubMed Central

    Perlovsky, L.I; Ilin, R

    2010-01-01

Neural structures of interaction between thinking and language are unknown. This paper suggests a possible architecture motivated by neural and mathematical considerations. A mathematical requirement of computability imposes significant constraints on possible architectures consistent with brain neural structure and with a wealth of psychological knowledge. How does language interact with cognition? Do we think with words, or is thinking independent from language, with words being just labels for decisions? Why is language learned by the age of 5 or 7, while acquiring the knowledge needed to use this language takes a lifetime? This paper discusses hierarchical aspects of language and thought and argues that high-level abstract thinking is impossible without language. We discuss a mathematical technique that can model the joint language-thought architecture, while overcoming previously encountered difficulties of computability. This architecture explains a contradiction between the human ability for rational, thoughtful decisions and the irrationality of human thinking revealed by Tversky and Kahneman; a crucial role in this contradiction might be played by language. The proposed model resolves long-standing issues: how the brain learns correct word-object associations, and why animals do not talk and think like people. We propose the role played by language emotionality in its interaction with thought. We relate the mathematical model to Humboldt’s “firmness” of languages, and discuss the possible influence of a language’s grammar on its emotionality. Psychological and brain imaging experiments related to the proposed model are discussed. Future theoretical and experimental research is outlined. PMID:21673788

  3. Reuse rate of treated wastewater in water reuse system.

    PubMed

    Fan, Yao-bo; Yang, Wen-bo; Li, Gang; Wu, Lin-lin; Wei, Yuan-song

    2005-01-01

A water quality model for water reuse was developed by mathematical induction. The relationships among the reuse rate of treated wastewater (R), the pollutant concentration of reused water (Cs), the pollutant concentration of influent (C0), the removal efficiency of the pollutant in wastewater (E), and the standard for reuse water were discussed in this study. According to the experimental results of toilet wastewater treatment and reuse with membrane bioreactors, R should be set at less than 40%, at which all the concerned parameters could meet the reuse water standards. To raise R of the reuse water in the toilet, an important way was to improve color removal from the wastewater.
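One plausible reading of the relationship among R, Cs, C0 and E is a steady-state recycle mass balance. The closed-loop formula below is our reconstruction under that assumption (each use adds a pollutant load equivalent to C0, treatment removes fraction E < 1, and fraction R of effluent is recycled); it is not necessarily the paper's exact model, and the numbers are illustrative:

```python
def reuse_concentration(c0, e, r):
    """Steady-state pollutant concentration Cs of the reused water.

    Assumed closed-loop balance (our reconstruction, with e < 1):
        Cs = (1 - e) * (c0 + r * Cs)
    which solves to:
        Cs = c0 * (1 - e) / (1 - r * (1 - e))
    """
    return c0 * (1 - e) / (1 - r * (1 - e))


def max_reuse_rate(c0, e, cs_limit):
    """Largest reuse rate r keeping Cs within a reuse-water standard cs_limit."""
    # Invert the balance for r and clamp to the physical range [0, 1].
    r = (1 - c0 * (1 - e) / cs_limit) / (1 - e)
    return min(max(r, 0.0), 1.0)
```

Under this reading, raising removal efficiency E (e.g. better color removal) is exactly what allows a higher R before Cs exceeds the standard, consistent with the abstract's conclusion.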

  4. Beginning to manage drug discovery and development knowledge.

    PubMed

    Sumner-Smith, M

    2001-05-01

    Knowledge management approaches and technologies are beginning to be implemented by the pharmaceutical industry in support of new drug discovery and development processes aimed at greater efficiencies and effectiveness. This trend coincides with moves to reduce paper, coordinate larger teams with more diverse skills that are distributed around the globe, and to comply with regulatory requirements for electronic submissions and the associated maintenance of electronic records. Concurrently, the available technologies have implemented web-based architectures with a greater range of collaborative tools and personalization through portal approaches. However, successful application of knowledge management methods depends on effective cultural change management, as well as proper architectural design to match the organizational and work processes within a company.

  5. Sensitivity analysis by approximation formulas - Illustrative examples. [reliability analysis of six-component architectures

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1983-01-01

    This paper examines the reliability of three architectures for six components. For each architecture, the probabilities of the failure states are given by algebraic formulas involving the component fault rate, the system recovery rate, and the operating time. The dominant failure modes are identified, and the change in reliability is considered with respect to changes in fault rate, recovery rate, and operating time. The major conclusions concern the influence of system architecture on failure modes and parameter requirements. Without this knowledge, a system designer may pick an inappropriate structure.

  6. The Realities of "Reaching Out": Enacting the Public-Facing Open Scholar Role with Existing Online Communities

    ERIC Educational Resources Information Center

    Perryman, Leigh-Anne; Coughlan, Tony

    2013-01-01

    A core tenet of the open educational resources (OER) movement has long been that 'the world's knowledge is a public good' (Smith & Casserly, 2006, p.2) and should be available for everyone to use, reuse and share. However, this vision of openness and of the connection between OER and social justice, which McAndrew and Farrow…

  7. Comparing genomes to computer operating systems in terms of the topology and evolution of their regulatory control networks

    PubMed Central

    Yan, Koon-Kiu; Fang, Gang; Bhardwaj, Nitin; Alexander, Roger P.; Gerstein, Mark

    2010-01-01

    The genome has often been called the operating system (OS) for a living organism. A computer OS is described by a regulatory control network termed the call graph, which is analogous to the transcriptional regulatory network in a cell. To apply our firsthand knowledge of the architecture of software systems to understand cellular design principles, we present a comparison between the transcriptional regulatory network of a well-studied bacterium (Escherichia coli) and the call graph of a canonical OS (Linux) in terms of topology and evolution. We show that both networks have a fundamentally hierarchical layout, but there is a key difference: The transcriptional regulatory network possesses a few global regulators at the top and many targets at the bottom; conversely, the call graph has many regulators controlling a small set of generic functions. This top-heavy organization leads to highly overlapping functional modules in the call graph, in contrast to the relatively independent modules in the regulatory network. We further develop a way to measure evolutionary rates comparably between the two networks and explain this difference in terms of network evolution. The process of biological evolution via random mutation and subsequent selection tightly constrains the evolution of regulatory network hubs. The call graph, however, exhibits rapid evolution of its highly connected generic components, made possible by designers’ continual fine-tuning. These findings stem from the design principles of the two systems: robustness for biological systems and cost effectiveness (reuse) for software systems. PMID:20439753
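The top-heavy versus bottom-heavy contrast described above can be illustrated with two toy directed networks; the graphs, gene names, and function names below are fabricated for illustration only:

```python
from collections import defaultdict

def in_degrees(edges):
    """edges: (regulator, target) pairs; returns target -> number of regulators."""
    deg = defaultdict(int)
    for _, target in edges:
        deg[target] += 1
    return dict(deg)

# Regulatory-network style: one global regulator controlling many targets.
regulatory = [("crp", g) for g in ("g1", "g2", "g3", "g4")]

# Call-graph style: many callers reusing a single generic function.
call_graph = [(f, "memcpy") for f in ("f1", "f2", "f3", "f4")]

hub_targets = in_degrees(regulatory)   # each target has one regulator
generic_fns = in_degrees(call_graph)   # one function with many callers
```

The same counting run on real networks is what exposes the asymmetry: the regulatory network concentrates out-degree in a few hubs, while the call graph concentrates in-degree in a few heavily reused generic functions.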

  8. Comparing genomes to computer operating systems in terms of the topology and evolution of their regulatory control networks.

    PubMed

    Yan, Koon-Kiu; Fang, Gang; Bhardwaj, Nitin; Alexander, Roger P; Gerstein, Mark

    2010-05-18

    The genome has often been called the operating system (OS) for a living organism. A computer OS is described by a regulatory control network termed the call graph, which is analogous to the transcriptional regulatory network in a cell. To apply our firsthand knowledge of the architecture of software systems to understand cellular design principles, we present a comparison between the transcriptional regulatory network of a well-studied bacterium (Escherichia coli) and the call graph of a canonical OS (Linux) in terms of topology and evolution. We show that both networks have a fundamentally hierarchical layout, but there is a key difference: The transcriptional regulatory network possesses a few global regulators at the top and many targets at the bottom; conversely, the call graph has many regulators controlling a small set of generic functions. This top-heavy organization leads to highly overlapping functional modules in the call graph, in contrast to the relatively independent modules in the regulatory network. We further develop a way to measure evolutionary rates comparably between the two networks and explain this difference in terms of network evolution. The process of biological evolution via random mutation and subsequent selection tightly constrains the evolution of regulatory network hubs. The call graph, however, exhibits rapid evolution of its highly connected generic components, made possible by designers' continual fine-tuning. These findings stem from the design principles of the two systems: robustness for biological systems and cost effectiveness (reuse) for software systems.

  9. Learning Outcomes in Affective Domain within Contemporary Architectural Curricula

    ERIC Educational Resources Information Center

    Savic, Marko; Kashef, Mohamad

    2013-01-01

    Contemporary architectural education has shifted from the traditional focus on providing students with specific knowledge and skill sets or "inputs" to outcome based, student-centred educational approach. Within the outcome based model, students' performance is assessed against measureable objectives that relate acquired knowledge…

  10. Anticipatory Eye Movements in Interleaving Templates of Human Behavior

    NASA Technical Reports Server (NTRS)

    Matessa, Michael

    2004-01-01

Performance modeling has been made easier by architectures that package psychological theory for reuse at useful levels of abstraction. CPM-GOMS uses templates of behavior to package, at the task level (e.g., mouse move-click, typing), predictions of lower-level cognitive, perceptual, and motor resource use. CPM-GOMS also has a theory for interleaving resource use between templates; one example of interleaving is anticipatory eye movements. This paper describes the use of ACT-Stitch, a framework for translating CPM-GOMS templates and interleaving theory into ACT-R, to model anticipatory eye movements in skilled behavior. The anticipatory eye movements explain performance in a well-practiced perceptual/motor task, and the interleaving theory is supported with results from an eye-tracking experiment.

  11. Semantic Technologies for Re-Use of Clinical Routine Data.

    PubMed

    Kreuzthaler, Markus; Martínez-Costa, Catalina; Kaiser, Peter; Schulz, Stefan

    2017-01-01

    Routine patient data in electronic patient records are only partly structured, and an even smaller segment is coded, mainly for administrative purposes. Large parts are only available as free text. Transforming this content into a structured and semantically explicit form is a prerequisite for querying and information extraction. The core of the system architecture presented in this paper is based on SAP HANA in-memory database technology using the SAP Connected Health platform for data integration as well as for clinical data warehousing. A natural language processing pipeline analyses unstructured content and maps it to a standardized vocabulary within a well-defined information model. The resulting semantically standardized patient profiles are used for a broad range of clinical and research application scenarios.
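A single step of such a pipeline - mapping free-text mentions onto a standardized vocabulary - might look like the following sketch. The tiny lexicon and concept codes are illustrative placeholders, not SNOMED CT content or the SAP Connected Health API:

```python
# Hypothetical lexicon: surface forms mapped to concept codes, with synonyms
# normalized to the same concept (the codes here are made up for illustration).
LEXICON = {
    "myocardial infarction": "C0027051",
    "heart attack": "C0027051",
    "hypertension": "C0020538",
}

def map_concepts(text):
    """Return sorted (surface form, concept code) pairs found in a clinical note."""
    lowered = text.lower()
    return sorted(
        (term, code) for term, code in LEXICON.items() if term in lowered
    )

note = "Patient with known hypertension, prior heart attack in 2015."
concepts = map_concepts(note)
```

A production pipeline adds tokenization, negation detection, and an information model around the codes, but the core transformation - unstructured text in, semantically explicit codes out - is the one sketched here.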

  12. Trust information-based privacy architecture for ubiquitous health.

    PubMed

    Ruotsalainen, Pekka Sakari; Blobel, Bernd; Seppälä, Antto; Nykänen, Pirkko

    2013-10-08

Ubiquitous health is defined as a dynamic network of interconnected systems that offers health services independent of time and location to a data subject (DS). The network takes place in an open and unsecure information space. It is created and managed by the DS, who sets rules that regulate the way personal health information is collected and used. Compared to health care, it is impossible in ubiquitous health to assume the existence of a priori trust between the DS and service providers and to produce privacy using static security services. In ubiquitous health, the features, business goals, and regulations the systems follow often remain unknown. Furthermore, health care-specific regulations do not govern the ways health data is processed and shared. To be successful, ubiquitous health requires a novel privacy architecture. The goal of this study was to develop a privacy management architecture that helps the DS to create and dynamically manage the network and to maintain information privacy. The architecture should enable the DS to dynamically define service- and system-specific rules that regulate the way subject data is processed. The architecture should provide the DS with reliable trust information about systems and assist in the formulation of privacy policies. Furthermore, the architecture should give feedback on how systems follow the policies of the DS and offer protection against the privacy and trust threats existing in ubiquitous environments. A sequential method that combines methodologies used in system theory, systems engineering, requirement analysis, and system design was used in the study. In the first phase, principles, trust and privacy models, and viewpoints were selected. Thereafter, functional requirements and services were developed on the basis of a careful analysis of existing research published in journals and conference proceedings. 
Based on principles, models, and requirements, architectural components and their interconnections were developed using system analysis. The architecture mimics the way humans use trust information in decision making, and enables the DS to design system-specific privacy policies using computational trust information that is based on systems' measured features. The trust attributes developed describe the level of support systems provide for awareness and transparency, and how they follow general and domain-specific regulations and laws. The monitoring component of the architecture offers dynamic feedback concerning how a system enforces the policies of the DS. The privacy management architecture developed in this study enables the DS to dynamically manage information privacy in ubiquitous health and to define individual policies for all systems considering their trust values and corresponding attributes. The DS can also set policies for secondary use and reuse of health information. The architecture offers protection against privacy threats existing in ubiquitous environments. Although the architecture is targeted at ubiquitous health, it can easily be adapted to other ubiquitous applications.
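The policy mechanism the abstract describes can be illustrated with a minimal sketch: a computational trust value is derived from a system's measured attributes, and a DS-defined policy gates disclosure on that value. The attribute names, weights, and threshold below are hypothetical, not taken from the paper.

```python
# Illustrative sketch only (not the paper's actual algorithm): a trust
# value computed from measured system attributes, used by a DS policy.

def trust_value(attributes, weights):
    """Weighted mean of attribute scores, each in [0, 1]."""
    total = sum(weights.values())
    return sum(attributes[k] * w for k, w in weights.items()) / total

def may_disclose(policy, system_attributes, weights):
    """A DS policy permits disclosure only above a trust threshold."""
    return trust_value(system_attributes, weights) >= policy["min_trust"]

# Hypothetical measured features of one service in the network.
system = {"transparency": 0.9, "regulatory_compliance": 0.8, "awareness": 0.6}
weights = {"transparency": 2.0, "regulatory_compliance": 3.0, "awareness": 1.0}
policy = {"min_trust": 0.7}

print(round(trust_value(system, weights), 3))   # 0.8
print(may_disclose(policy, system, weights))    # True
```

Because the trust value is recomputed from current attribute measurements, a policy written once keeps adapting as a system's observed behavior changes, which matches the dynamic feedback loop the architecture describes.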

  13. Trust Information-Based Privacy Architecture for Ubiquitous Health

    PubMed Central

    2013-01-01

Background Ubiquitous health is defined as a dynamic network of interconnected systems that offers health services independent of time and location to a data subject (DS). The network operates in an open and unsecure information space. It is created and managed by the DS, who sets rules that regulate the way personal health information is collected and used. Compared to conventional health care, it is impossible in ubiquitous health to assume the existence of a priori trust between the DS and service providers, or to produce privacy using static security services. In ubiquitous health, the features, business goals, and regulations that systems follow often remain unknown. Furthermore, health care-specific regulations do not govern the ways health data is processed and shared. To be successful, ubiquitous health requires a novel privacy architecture. Objective The goal of this study was to develop a privacy management architecture that helps the DS to create and dynamically manage the network and to maintain information privacy. The architecture should enable the DS to dynamically define service- and system-specific rules that regulate the way subject data is processed. The architecture should provide the DS with reliable trust information about systems and assist in the formulation of privacy policies. Furthermore, the architecture should give feedback on how systems follow the DS's policies and offer protection against privacy and trust threats existing in ubiquitous environments. Methods A sequential method that combines methodologies used in system theory, systems engineering, requirement analysis, and system design was used in the study. In the first phase, principles, trust and privacy models, and viewpoints were selected. Thereafter, functional requirements and services were developed on the basis of a careful analysis of existing research published in journals and conference proceedings. 
Based on principles, models, and requirements, architectural components and their interconnections were developed using system analysis. Results The architecture mimics the way humans use trust information in decision making, and enables the DS to design system-specific privacy policies using computational trust information that is based on systems' measured features. The trust attributes developed describe the level of support systems provide for awareness and transparency, and how they follow general and domain-specific regulations and laws. The monitoring component of the architecture offers dynamic feedback concerning how a system enforces the policies of the DS. Conclusions The privacy management architecture developed in this study enables the DS to dynamically manage information privacy in ubiquitous health and to define individual policies for all systems considering their trust values and corresponding attributes. The DS can also set policies for secondary use and reuse of health information. The architecture offers protection against privacy threats existing in ubiquitous environments. Although the architecture is targeted at ubiquitous health, it can easily be adapted to other ubiquitous applications. PMID:25099213

  14. Enhancing Knowledge Sharing Management Using BIM Technology in Construction

    PubMed Central

    Ho, Shih-Ping; Tserng, Hui-Ping

    2013-01-01

Construction knowledge can be communicated and reused among project managers and jobsite engineers to alleviate problems on a construction jobsite and reduce the time and cost of solving problems related to constructability. This paper proposes a new methodology for the sharing of construction knowledge using Building Information Modeling (BIM) technology. The main characteristics of BIM include 3D CAD-based presentation, storage of information in a digital format, and easy updating and transfer of information within the BIM environment. Using BIM technology, project managers and engineers can gain knowledge related to BIM and obtain feedback provided by jobsite engineers for future reference. This study addresses the application of knowledge sharing management using BIM technology and proposes a BIM-based Knowledge Sharing Management (BIMKSM) system for project managers and engineers. The BIMKSM system is then applied in a selected case study of a construction project in Taiwan to demonstrate the effectiveness of sharing knowledge in the BIM environment. The results demonstrate that the BIMKSM system can be used as a visual BIM-based knowledge sharing management platform. PMID:24723790

  15. Enhancing knowledge sharing management using BIM technology in construction.

    PubMed

    Ho, Shih-Ping; Tserng, Hui-Ping; Jan, Shu-Hui

    2013-01-01

Construction knowledge can be communicated and reused among project managers and jobsite engineers to alleviate problems on a construction jobsite and reduce the time and cost of solving problems related to constructability. This paper proposes a new methodology for the sharing of construction knowledge using Building Information Modeling (BIM) technology. The main characteristics of BIM include 3D CAD-based presentation, storage of information in a digital format, and easy updating and transfer of information within the BIM environment. Using BIM technology, project managers and engineers can gain knowledge related to BIM and obtain feedback provided by jobsite engineers for future reference. This study addresses the application of knowledge sharing management using BIM technology and proposes a BIM-based Knowledge Sharing Management (BIMKSM) system for project managers and engineers. The BIMKSM system is then applied in a selected case study of a construction project in Taiwan to demonstrate the effectiveness of sharing knowledge in the BIM environment. The results demonstrate that the BIMKSM system can be used as a visual BIM-based knowledge sharing management platform.

  16. Reconfigurable Transceiver and Software-Defined Radio Architecture and Technology Evaluated for NASA Space Communications

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Kacpura, Thomas J.

    2004-01-01

    The NASA Glenn Research Center is investigating the development and suitability of a software-based open architecture for space-based reconfigurable transceivers (RTs) and software-defined radios (SDRs). The main objectives of this project are to enable advanced operations and reduce mission costs. SDRs are becoming more common because of the capabilities of reconfigurable digital signal processing technologies such as field programmable gate arrays and digital signal processors, which place radio functions in firmware and software that were traditionally performed with analog hardware components. Features of interest of this communications architecture include nonproprietary open standards and application programming interfaces to enable software reuse and portability, independent hardware and software development, and hardware and software functional separation. The goals for RT and SDR technologies for NASA space missions include prelaunch and on-orbit frequency and waveform reconfigurability and programmability, high data rate capability, and overall communications and processing flexibility. These operational advances over current state-of-the-art transceivers are intended to reduce the power, mass, and cost of RTs and SDRs for space communications. The open architecture for NASA communications will support existing (legacy) communications needs and capabilities while providing a path to more capable, advanced waveform development and mission concepts (e.g., ad hoc constellations with self-healing networks and high-rate science data return). A study was completed to assess the state of the art in RT architectures, implementations, and technologies. In-house researchers conducted literature searches and analysis, interviewed Government and industry contacts, and solicited information and white papers from industry on space-qualifiable RTs and SDRs and their associated technologies for space-based NASA applications. 
The white papers were evaluated, compiled, and used to assess RT and SDR system architectures and core technology elements to determine an appropriate investment strategy for advancing these technologies to meet future mission needs. Using these radios in the space environment is challenging because components must tolerate space radiation, which drastically reduces the available processing capability. The radios currently available for space are considered RTs (as opposed to SDRs): digitally programmable radios with selectable changes, built on an architecture combining analog and digital components. The limited flexibility of this design contrasts with the desire for a power-efficient solution and an open architecture.
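The portability goal the abstract attributes to nonproprietary APIs can be sketched as follows: platform code depends only on a common waveform interface, so a waveform can be swapped without touching the radio platform. The interface, class names, and BPSK example below are invented for illustration and are not NASA's actual API.

```python
# Hedged sketch of a waveform-portability API (names are hypothetical).
from abc import ABC, abstractmethod

class Waveform(ABC):
    """Common interface that decouples waveforms from platform code."""
    @abstractmethod
    def configure(self, frequency_hz: float, data_rate_bps: int) -> None: ...
    @abstractmethod
    def modulate(self, bits: list) -> list: ...

class BpskWaveform(Waveform):
    """Binary phase-shift keying: one symbol per bit, +1 or -1."""
    def configure(self, frequency_hz, data_rate_bps):
        self.frequency_hz = frequency_hz
        self.data_rate_bps = data_rate_bps
    def modulate(self, bits):
        return [complex(1.0, 0.0) if b else complex(-1.0, 0.0) for b in bits]

class Radio:
    """Platform code sees only the Waveform interface, so waveforms can
    be replaced (reconfigured on orbit) without hardware changes."""
    def __init__(self, waveform: Waveform):
        self.waveform = waveform
    def transmit(self, bits):
        return self.waveform.modulate(bits)

radio = Radio(BpskWaveform())
radio.waveform.configure(2.2e9, 1_000_000)
print(radio.transmit([1, 0, 1]))  # [(1+0j), (-1+0j), (1+0j)]
```

Separating the interface from the implementation is what lets hardware and software evolve independently, which is the functional separation the architecture study calls for.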

  17. Efficient and flexible memory architecture to alleviate data and context bandwidth bottlenecks of coarse-grained reconfigurable arrays

    NASA Astrophysics Data System (ADS)

    Yang, Chen; Liu, LeiBo; Yin, ShouYi; Wei, ShaoJun

    2014-12-01

    The computational capability of a coarse-grained reconfigurable array (CGRA) can be significantly restrained by data and context memory bandwidth bottlenecks. Traditionally, two methods have been used to resolve this problem. One method loads the context into the CGRA at run time. This method occupies very small on-chip memory but induces very large latency, which leads to low computational efficiency. The other method adopts a multi-context structure. This method loads the context into the on-chip context memory at the boot phase. Broadcasting the pointer of a set of contexts changes the hardware configuration on a cycle-by-cycle basis. The size of the context memory induces a large area overhead in multi-context structures, which results in major restrictions on application complexity. This paper proposes a Predictable Context Cache (PCC) architecture to address the above context issues by buffering the context inside the CGRA. In this architecture, context is dynamically transferred into the CGRA. Utilizing a PCC significantly reduces the on-chip context memory, and the complexity of the applications running on the CGRA is no longer restricted by its size. For the data bandwidth issue, data preloading is the most frequently used approach to hide input-data latency and speed up data transmission. Rather than fundamentally reducing the amount of input data, this approach processes data transfers and computations in parallel. However, data preloading cannot work efficiently because data transmission becomes the critical path as the reconfigurable array scales up. This paper also presents a Hierarchical Data Memory (HDM) architecture as a solution to the efficiency problem. In this architecture, high internal bandwidth is provided to buffer both reused input data and intermediate data. 
The HDM architecture relieves the external memory of the data transfer burden, so performance is significantly improved. As a result of using PCC and HDM, experiments running mainstream video decoding programs achieved performance improvements of 13.57%-19.48% with a reasonable memory size. Consequently, H.264 high-profile video decoding at 1080p@35.7fps can be achieved on the PCC and HDM architecture at a 200 MHz working frequency. Further, the size of the on-chip context memory no longer restricts complex applications, which are executed efficiently on the PCC and HDM architecture.
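The core idea behind a context cache, as the abstract describes it, is that repeated configurations should not trigger repeated off-chip fetches. A minimal sketch, assuming a simple LRU replacement policy and toy sizes (both assumptions; the paper's actual PCC design is not specified here):

```python
from collections import OrderedDict

class ContextCache:
    """Toy context cache: buffers configuration contexts inside the
    array so repeated configurations avoid off-chip fetches."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # context_id -> context data
        self.hits = 0
        self.misses = 0

    def fetch(self, context_id, load_from_memory):
        if context_id in self.entries:
            self.entries.move_to_end(context_id)  # mark most recently used
            self.hits += 1
            return self.entries[context_id]
        self.misses += 1  # off-chip access: the cost a PCC avoids
        ctx = load_from_memory(context_id)
        self.entries[context_id] = ctx
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        return ctx

cache = ContextCache(capacity=2)
load = lambda cid: f"config-{cid}"
# A loop kernel cycling over two contexts hits after the first pass.
for cid in [0, 1, 0, 1, 0, 1]:
    cache.fetch(cid, load)
print(cache.hits, cache.misses)  # 4 2
```

The same locality argument explains why loop-heavy kernels, which cycle through a small working set of contexts, benefit most: after the first iteration, every configuration change is served on chip.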

  18. Executable Architecture Research at Old Dominion University

    NASA Technical Reports Server (NTRS)

    Tolk, Andreas; Shuman, Edwin A.; Garcia, Johnny J.

    2011-01-01

    Executable Architectures allow the evaluation of system architectures not only regarding their static, but also their dynamic behavior. However, the systems engineering community does not agree on a common formal specification of executable architectures. Closing this gap and identifying the necessary elements of an executable architecture, a modeling language, and a modeling formalism is the topic of ongoing PhD research. In addition, systems are generally defined and applied in an operational context to provide capabilities and enable missions. To maximize the benefits of executable architectures, a second PhD effort introduces the idea of creating an executable context in addition to the executable architecture. The results move the validation of architectures from the current information domain into the knowledge domain and improve the reliability of such validation efforts. The paper presents research and results of both doctoral efforts and puts them into a common context of state-of-the-art systems engineering methods supporting more agility.

  19. Value-based management of design reuse

    NASA Astrophysics Data System (ADS)

    Carballo, Juan Antonio; Cohn, David L.; Belluomini, Wendy; Montoye, Robert K.

    2003-06-01

    Effective design reuse in electronic products has the potential to provide very large cost savings, substantial time-to-market reduction, and extra sources of revenue. Unfortunately, critical reuse opportunities are often missed because, although they provide clear value to the corporation, they may not benefit the business performance of an internal organization. It is therefore crucial to provide tools to help reuse partners participate in a reuse transaction when the transaction provides value to the corporation as a whole. Value-based Reuse Management (VRM) addresses this challenge by (a) ensuring that all parties can quickly assess the business performance impact of a reuse opportunity, and (b) encouraging high-value reuse opportunities by supplying value-based rewards to potential parties. In this paper we introduce the Value-Based Reuse Management approach and we describe key results on electronic designs that demonstrate its advantages. Our results indicate that Value-Based Reuse Management has the potential to significantly increase the success probability of high-value electronic design reuse.

  20. A reuse-based framework for the design of analog and mixed-signal ICs

    NASA Astrophysics Data System (ADS)

    Castro-Lopez, Rafael; Fernandez, Francisco V.; Rodriguez Vazquez, Angel

    2005-06-01

    Despite the spectacular breakthroughs of the semiconductor industry, the ability to design integrated circuits (ICs) under stringent time-to-market (TTM) requirements is lagging behind integration capacity, which so far keeps pace with the still-valid Moore's Law. The resulting gap threatens to slow down this phenomenal growth. The design community believes that only powerful CAD tools and design methodologies - and, possibly, a design paradigm shift - can bridge this design gap. In this sense, reuse-based design is seen as a promising solution, and concepts such as IP Block, Virtual Component, and Design Reuse have become commonplace thanks to significant advances in the digital arena. Unfortunately, the very nature of analog and mixed-signal (AMS) design has hindered a similar level of consensus and development. This paper presents a framework for the reuse-based design of AMS circuits. The framework is founded on three key elements: (1) a CAD-supported hierarchical design flow that facilitates the incorporation of AMS reusable blocks, reduces the overall design time, and expedites the management of increasing AMS design complexity; (2) a complete, clear definition of the AMS reusable block, structured into three separate facets or views: the behavioral, structural, and layout facets, the first two used for top-down electrical synthesis and bottom-up verification, the latter used during bottom-up physical synthesis; (3) a design-for-reusability set of tools, methods, and guidelines that, relying on intensive parameterization as well as on design knowledge capture and encapsulation, makes it possible to produce fully reusable AMS blocks. A case study and a functional silicon prototype demonstrate the validity of the paper's proposals.
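The three-view block definition lends itself to a simple data-structure sketch: a reusable block bundles its behavioral, structural, and layout views with a parameter set, and instantiation customizes the parameters while keeping the views. The field names and example values below are hypothetical, not the paper's implementation.

```python
# Illustrative sketch of a parameterized AMS reusable block with the
# three facets the framework defines (names and values are invented).
from dataclasses import dataclass, field

@dataclass
class ReusableBlock:
    name: str
    parameters: dict = field(default_factory=dict)  # e.g. gain, bandwidth
    behavioral_view: str = ""   # model for top-down electrical synthesis
    structural_view: str = ""   # netlist for bottom-up verification
    layout_view: str = ""       # template for bottom-up physical synthesis

    def instantiate(self, **overrides):
        """Produce a customized instance by overriding parameters;
        the three views are reused unchanged."""
        params = {**self.parameters, **overrides}
        return ReusableBlock(self.name, params, self.behavioral_view,
                             self.structural_view, self.layout_view)

ota = ReusableBlock("ota", {"gain_db": 60, "gbw_hz": 10e6},
                    "verilog-a model", "spice netlist", "layout template")
fast_ota = ota.instantiate(gbw_hz=50e6)
print(fast_ota.parameters)  # {'gain_db': 60, 'gbw_hz': 50000000.0}
```

Keeping the views immutable while parameters vary is what makes the block "fully reusable": one captured design yields many instances meeting different specifications.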
