Sample records for model driven architecture

  1. Formalism Challenges of the Cougaar Model Driven Architecture

    NASA Technical Reports Server (NTRS)

    Bohner, Shawn A.; George, Boby; Gracanin, Denis; Hinchey, Michael G.

    2004-01-01

    The Cognitive Agent Architecture (Cougaar) is one of the most sophisticated distributed agent architectures developed today. As part of its research and evolution, Cougaar is being studied for application to large, logistics-based applications for the Department of Defense (DoD). Anticipating future complex applications of Cougaar, we are investigating the Model Driven Architecture (MDA) approach to understand how effective it would be for increasing productivity in Cougaar-based development efforts. Recognizing the sophistication of the Cougaar development environment and the limitations of transformation technologies for agents, we have systematically developed an approach that combines component assembly in the large and transformation in the small. This paper describes some of the key elements that went into the Cougaar Model Driven Architecture approach and the characteristics that drove the approach.

  2. Petri net model for analysis of concurrently processed complex algorithms

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1986-01-01

    This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple-processor, data-driven architecture. Of particular interest is the application of the model both to the description of the data/control flow of a particular algorithm and to the general specification of the data-driven architecture. A candidate architecture is also presented.
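
    A marked place-transition net makes the idea concrete: tokens model data availability, and a transition (a primitive operation) fires only when all of its input places hold tokens. The sketch below is a generic illustration, not the paper's exact model; the split/join example operations are invented.

```python
# Minimal place-transition Petri net: a transition fires when every
# input place holds at least one token. Illustrates how data/control
# flow of a decomposed algorithm can be described.

class PetriNet:
    def __init__(self, places, transitions):
        # places: dict place -> initial token count
        # transitions: dict name -> (input_places, output_places)
        self.marking = dict(places)
        self.transitions = transitions

    def enabled(self, t):
        inputs, _ = self.transitions[t]
        return all(self.marking[p] >= 1 for p in inputs)

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"transition {t} is not enabled")
        inputs, outputs = self.transitions[t]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1

# Two primitive operations, op1 and op2, run concurrently once the
# input data token is split; a join transition synchronizes them.
net = PetriNet(
    places={"in": 1, "a": 0, "b": 0, "a_done": 0, "b_done": 0, "out": 0},
    transitions={
        "split": (["in"], ["a", "b"]),
        "op1": (["a"], ["a_done"]),
        "op2": (["b"], ["b_done"]),
        "join": (["a_done", "b_done"], ["out"]),
    },
)
for t in ["split", "op1", "op2", "join"]:
    net.fire(t)
print(net.marking["out"])  # -> 1
```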

  3. CrossTalk. The Journal of Defense Software Engineering. Volume 23, Number 6, Nov/Dec 2010

    DTIC Science & Technology

    2010-11-01

    Model of architectural design. It guides developers to apply effort to their software architecture commensurate with the risks faced by... Driven Model is the promotion of risk to prominence. It is possible to apply the Risk-Driven Model to essentially any software development process... succeed without any planned architecture work, while many high-risk projects would fail without it. The Risk-Driven Model walks a middle path

  4. Model Driven Engineering

    NASA Astrophysics Data System (ADS)

    Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan

    A relevant initiative from the software engineering community called Model Driven Engineering (MDE) is being developed in parallel with the Semantic Web (Mellor et al. 2003a). The MDE approach to software development suggests that one should first develop a model of the system under study, which is then transformed into the real thing (i.e., an executable software entity). The most important research initiative in this area is the Model Driven Architecture (MDA), which is being developed under the umbrella of the Object Management Group (OMG). This chapter describes the basic concepts of this software engineering effort.

  5. Network-driven design principles for neuromorphic systems.

    PubMed

    Partzsch, Johannes; Schüffny, Rene

    2015-01-01

    Synaptic connectivity is typically the most resource-demanding part of neuromorphic systems. Commonly, the architecture of these systems is chosen mainly on the basis of technical considerations. As a consequence, the potential for optimization arising from the inherent constraints of connectivity models is left unused. In this article, we develop an alternative, network-driven approach to neuromorphic architecture design. We describe methods to analyse the performance of existing neuromorphic architectures in emulating certain connectivity models. Furthermore, we show step-by-step how to derive a neuromorphic architecture from a given connectivity model. For this, we introduce a generalized description for architectures with a synapse matrix, which takes into account shared use of circuit components for reducing total silicon area. Architectures designed with this approach are fitted to a connectivity model, essentially adapting to its connection density. They guarantee faithful reproduction of the model on chip while requiring less total silicon area. In total, our methods allow designers to implement more area-efficient neuromorphic systems and to verify the usability of the connectivity resources in these systems.
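
    The area argument can be made tangible with a back-of-the-envelope count of synapse circuits: a full crossbar always pays for N^2 synapses, whereas a matrix sized to a model's connection density pays only for the connections the model can realize. The figures and the headroom factor below are invented for illustration; the paper's area model is more detailed.

```python
# Back-of-the-envelope comparison of synapse-circuit counts: a full
# N x N crossbar versus a matrix sized to a model's connection density.
import math

def crossbar_synapses(n_neurons):
    return n_neurons * n_neurons

def fitted_synapses(n_neurons, density, headroom=1.2):
    # density: expected fraction of realized connections (0..1);
    # headroom: hypothetical safety margin so the model still fits.
    return math.ceil(n_neurons * n_neurons * density * headroom)

n, p = 1024, 0.05  # 1024 neurons, 5% connectivity (assumed values)
print(crossbar_synapses(n))   # 1048576 synapse circuits
print(fitted_synapses(n, p))  # 62915 -- an order of magnitude fewer
```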

  6. Network-driven design principles for neuromorphic systems

    PubMed Central

    Partzsch, Johannes; Schüffny, Rene

    2015-01-01

    Synaptic connectivity is typically the most resource-demanding part of neuromorphic systems. Commonly, the architecture of these systems is chosen mainly on the basis of technical considerations. As a consequence, the potential for optimization arising from the inherent constraints of connectivity models is left unused. In this article, we develop an alternative, network-driven approach to neuromorphic architecture design. We describe methods to analyse the performance of existing neuromorphic architectures in emulating certain connectivity models. Furthermore, we show step-by-step how to derive a neuromorphic architecture from a given connectivity model. For this, we introduce a generalized description for architectures with a synapse matrix, which takes into account shared use of circuit components for reducing total silicon area. Architectures designed with this approach are fitted to a connectivity model, essentially adapting to its connection density. They guarantee faithful reproduction of the model on chip while requiring less total silicon area. In total, our methods allow designers to implement more area-efficient neuromorphic systems and to verify the usability of the connectivity resources in these systems. PMID:26539079

  7. Preliminary Results from a Model-Driven Architecture Methodology for Development of an Event-Driven Space Communications Service Concept

    NASA Technical Reports Server (NTRS)

    Roberts, Christopher J.; Morgenstern, Robert M.; Israel, David J.; Borky, John M.; Bradley, Thomas H.

    2017-01-01

    NASA's next generation space communications network will involve dynamic and autonomous services analogous to services provided by current terrestrial wireless networks. This architecture concept, known as the Space Mobile Network (SMN), is enabled by several technologies now in development. A pillar of the SMN architecture is the establishment and utilization of a continuous bidirectional control plane space link channel and a new User Initiated Service (UIS) protocol to enable more dynamic and autonomous mission operations concepts, reduced user space communications planning burden, and more efficient and effective provider network resource utilization. This paper provides preliminary results from the application of model driven architecture methodology to develop UIS. Such an approach is necessary to ensure systematic investigation of several open questions concerning the efficiency, robustness, interoperability, scalability and security of the control plane space link and UIS protocol.

  8. A Comparison and Evaluation of Real-Time Software Systems Modeling Languages

    NASA Technical Reports Server (NTRS)

    Evensen, Kenneth D.; Weiss, Kathryn Anne

    2010-01-01

    A model-driven approach to real-time software systems development enables the conceptualization of software, fostering a more thorough understanding of its often complex architecture and behavior while promoting the documentation and analysis of concerns common to real-time embedded systems such as scheduling, resource allocation, and performance. Several modeling languages have been developed to assist in the model-driven software engineering effort for real-time systems, and these languages are beginning to gain traction with practitioners throughout the aerospace industry. This paper presents a survey of several real-time software system modeling languages, namely the Architecture Analysis and Design Language (AADL), the Unified Modeling Language (UML), the Systems Modeling Language (SysML), the Modeling and Analysis of Real-Time Embedded Systems (MARTE) UML profile, and the AADL for UML profile. Each language has its advantages and disadvantages, and in order to adequately describe a real-time software system's architecture, a complementary use of multiple languages is almost certainly necessary. This paper aims to explore these languages in the context of understanding the value each brings to the model-driven software engineering effort and to determine if it is feasible and practical to combine aspects of the various modeling languages to achieve more complete coverage in architectural descriptions. To this end, each language is evaluated with respect to a set of criteria such as scope, formalisms, and architectural coverage. An example is used to help illustrate the capabilities of the various languages.

  9. Enhancement of the Acquisition Process for a Combat System-A Case Study to Model the Workflow Processes for an Air Defense System Acquisition

    DTIC Science & Technology

    2009-12-01

    Business Process Modeling; BPMN Business Process Modeling Notation; SoA Service-oriented Architecture; UML Unified Modeling Language; CSP... system developers. Supporting technologies include Business Process Modeling Notation (BPMN), Unified Modeling Language (UML), and model-driven architecture

  10. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in natural language, so verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. Such verification is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.
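
    A consistency rule of this kind is easy to picture as a set check across diagrams. The sketch below tests one hypothetical rule, that every action appearing in an activity model is backed by an operation in the class model; both the rule and the model data are invented for illustration and are not the paper's actual rule set.

```python
# Toy cross-diagram consistency rule: every action in an activity
# model must map to an operation offered by some class.

class_model = {
    "Order": {"create", "cancel"},
    "Invoice": {"issue"},
}
activity_actions = {"create", "issue", "ship"}  # "ship" has no owner

def check_actions_have_operations(actions, classes):
    offered = set().union(*classes.values())  # all known operations
    return sorted(actions - offered)          # actions with no home

missing = check_actions_have_operations(activity_actions, class_model)
print(missing)  # ['ship'] -> inconsistency found before any code exists
```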

  11. The Need for Software Architecture Evaluation in the Acquisition of Software-Intensive Systems

    DTIC Science & Technology

    2014-01-01

    Function and Performance Specification; GIG Global Information Grid; ISO International Standard Organisation; MDA Model Driven Architecture... architecture and design, which is a key part of the knowledge-based economy... Allow Australian SMEs to

  12. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability.

    PubMed

    Komatsoulis, George A; Warzel, Denise B; Hartel, Francis W; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; Coronado, Sherri de; Reeves, Dianne M; Hadfield, Jillaine B; Ludet, Christophe; Covitz, Peter A

    2008-02-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service-Oriented Architecture (SSOA) for cancer research by the National Cancer Institute's cancer Biomedical Informatics Grid (caBIG).

  13. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability

    PubMed Central

    Komatsoulis, George A.; Warzel, Denise B.; Hartel, Frank W.; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; de Coronado, Sherri; Reeves, Dianne M.; Hadfield, Jillaine B.; Ludet, Christophe; Covitz, Peter A.

    2008-01-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service Oriented Architecture (SSOA) for cancer research by the National Cancer Institute’s cancer Biomedical Informatics Grid (caBIG™). PMID:17512259

  14. Semantic Web-Driven LMS Architecture towards a Holistic Learning Process Model Focused on Personalization

    ERIC Educational Resources Information Center

    Kerkiri, Tania

    2010-01-01

    A comprehensive presentation is made here of the modular architecture of an e-learning platform with a distinctive emphasis on content personalization, combining advantages from semantic web technology, collaborative filtering and recommendation systems. Modules of this architecture handle information about both the domain-specific didactic…

  15. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    The purpose is to document research to develop strategies for concurrent processing of complex algorithms in data-driven architectures. The problem domain consists of decision-free algorithms having large-grained, computationally complex primitive operations. Such operations are often found in signal processing and control applications. The anticipated multiprocessor environment is a data flow architecture containing between two and twenty computing elements. Each computing element is a processor having local program memory, and which communicates with a common global data memory. A new graph-theoretic model called ATAMM, which establishes rules for relating a decomposed algorithm to its execution in a data flow architecture, is presented. The ATAMM model is used to determine strategies to achieve optimum time performance and to develop a system diagnostic software tool. In addition, preliminary work on a new multiprocessor operating system based on the ATAMM specifications is described.
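
    The mapping problem ATAMM addresses can be previewed with a much simpler greedy simulation: nodes of a decision-free task graph fire when all predecessors have finished and a computing element is free. This is an illustrative stand-in, not ATAMM itself; the four-task graph and its durations are invented.

```python
# Greedy simulation of a decision-free task graph on a fixed number of
# identical computing elements: a node starts when all predecessors
# are done and a processor is available.

def simulate(tasks, deps, durations, n_procs):
    # tasks: list of names; deps: dict task -> set of predecessors
    done_time = {}
    ready = [t for t in tasks if not deps[t]]
    free_at = [0.0] * n_procs          # next free time per processor
    pending = {t: set(deps[t]) for t in tasks}
    while ready:
        t = ready.pop(0)
        start = max([0.0] + [done_time[p] for p in deps[t]])
        # pick the processor that can begin this task earliest
        i = min(range(n_procs), key=lambda k: max(free_at[k], start))
        begin = max(free_at[i], start)
        done_time[t] = begin + durations[t]
        free_at[i] = done_time[t]
        for s in tasks:                # release newly-ready successors
            if t in pending[s]:
                pending[s].discard(t)
                if not pending[s]:
                    ready.append(s)
    return max(done_time.values())     # completion time of the graph

tasks = ["a", "b", "c", "d"]
deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
durations = {"a": 2, "b": 3, "c": 3, "d": 1}
print(simulate(tasks, deps, durations, n_procs=2))  # b, c overlap -> 6
```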

  16. Model-Driven Architecture for Agent-Based Systems

    NASA Technical Reports Server (NTRS)

    Gracanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.

    2004-01-01

    The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, using some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar), is selected as the implementation platform. The resulting Cougaar MDA prescribes certain kinds of models to be used, how those models may be prepared, and the relationships of the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain-level model specifications in order to generate software artifacts. The generation of software artifacts is based on a metamodel. Each component maps to a UML structured component, which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.

  17. Model-driven Service Engineering with SoaML

    NASA Astrophysics Data System (ADS)

    Elvesæter, Brian; Carrez, Cyril; Mohagheghi, Parastoo; Berre, Arne-Jørgen; Johnsen, Svein G.; Solberg, Arnor

    This chapter presents a model-driven service engineering (MDSE) methodology that uses OMG MDA specifications such as BMM, BPMN and SoaML to identify and specify services within a service-oriented architecture. The methodology takes advantage of business modelling practices and provides a guide to service modelling with SoaML. The presentation is case-driven and illuminated using the telecommunication example. The chapter focuses in particular on the use of the SoaML modelling language as a means for expressing service specifications that are aligned with business models and can be realized in different platform technologies.

  18. Model-Driven Development of Safety Architectures

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Pai, Ganesh; Whiteside, Iain

    2017-01-01

    We describe the use of model-driven development for safety assurance of a pioneering NASA flight operation involving a fleet of small unmanned aircraft systems (sUAS) flying beyond visual line of sight. The central idea is to develop a safety architecture that provides the basis for risk assessment and visualization within a safety case, the formal justification of acceptable safety required by the aviation regulatory authority. A safety architecture is composed from a collection of bow tie diagrams (BTDs), a practical approach to manage safety risk by linking the identified hazards to the appropriate mitigation measures. The safety justification for a given unmanned aircraft system (UAS) operation can have many related BTDs. In practice, however, each BTD is independently developed, which poses challenges with respect to incremental development, maintaining consistency across different safety artifacts when changes occur, and in extracting and presenting stakeholder specific information relevant for decision making. We show how a safety architecture reconciles the various BTDs of a system, and, collectively, provide an overarching picture of system safety, by considering them as views of a unified model. We also show how it enables model-driven development of BTDs, replete with validations, transformations, and a range of views. Our approach, which we have implemented in our toolset, AdvoCATE, is illustrated with a running example drawn from a real UAS safety case. The models and some of the innovations described here were instrumental in successfully obtaining regulatory flight approval.

  19. A Model-Driven Architecture Approach for Modeling, Specifying and Deploying Policies in Autonomous and Autonomic Systems

    NASA Technical Reports Server (NTRS)

    Pena, Joaquin; Hinchey, Michael G.; Sterritt, Roy; Ruiz-Cortes, Antonio; Resinas, Manuel

    2006-01-01

    Autonomic Computing (AC), self-management based on high level guidance from humans, is increasingly gaining momentum as the way forward in designing reliable systems that hide complexity and conquer IT management costs. Effectively, AC may be viewed as Policy-Based Self-Management. The Model Driven Architecture (MDA) approach focuses on building models that can be transformed into code in an automatic manner. In this paper, we look at ways to implement Policy-Based Self-Management by means of models that can be converted to code using transformations that follow the MDA philosophy. We propose a set of UML-based models to specify autonomic and autonomous features along with the necessary procedures, based on modification and composition of models, to deploy a policy as an executing system.

  20. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    Research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large-grained algorithms in a special distributed computer environment is presented. This model is identified by the acronym ATAMM, which represents Algorithms To Architecture Mapping Model. The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM-based architecture is to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riensche, Roderick M.; Paulson, Patrick R.; Danielson, Gary R.

    We describe a methodology and architecture to support the development of games in a predictive analytics context. These games serve as part of an overall family of systems designed to gather input knowledge, calculate results of complex predictive technical and social models, and explore those results in an engaging fashion. The games provide an environment shaped and driven in part by the outputs of the models, allowing users to exert influence over a limited set of parameters, and displaying the results when those actions cause changes in the underlying model. We have crafted a prototype system in which we are implementing test versions of games driven by models in such a fashion, using a flexible architecture to allow for future continuation and expansion of this work.

  2. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1987-01-01

    The results of ongoing research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large-grained algorithms in a special distributed computer environment are presented. This model is identified by the acronym ATAMM (Algorithm/Architecture Mapping Model). The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM-based architecture is to optimize computational concurrency in the multiprocessor environment and to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.

  3. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    PubMed

    Shim, Yoonsik; Philippides, Andrew; Staras, Kevin; Husbands, Phil

    2016-10-01

    We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.
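
    The combination step at the heart of the ensemble is easy to state numerically: a gating vector weights each expert's class probabilities, and the weighted sum is the ensemble output. In the paper the gate is realized by ITDP in spiking networks; the plain-numpy sketch below only illustrates the combination rule, with invented probabilities and weights.

```python
# Mixture-of-experts combination: a gating vector weights each
# classifier's class probabilities.
import numpy as np

def combine(expert_probs, gate):
    # expert_probs: (n_experts, n_classes), each row sums to 1
    # gate: (n_experts,), non-negative, sums to 1
    return gate @ expert_probs

experts = np.array([[0.7, 0.3],    # expert 1 favours class 0
                    [0.4, 0.6],    # expert 2 favours class 1
                    [0.6, 0.4]])
gate = np.array([0.5, 0.2, 0.3])   # assumed gating weights
p = combine(experts, gate)
print(p, p.argmax())               # [0.61 0.39] -> class 0
```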

  4. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP

    PubMed Central

    Staras, Kevin

    2016-01-01

    We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture. PMID:27760125

  5. Managing business compliance using model-driven security management

    NASA Astrophysics Data System (ADS)

    Lang, Ulrich; Schreiner, Rudolf

    Compliance with regulatory and governance standards is rapidly becoming one of the hot topics of information security today. This is because, especially with regulatory compliance, both business and government have to expect large financial and reputational losses if compliance cannot be ensured and demonstrated. One major difficulty of implementing such regulations is caused by the fact that they are captured at a high level of abstraction that is business-centric and not IT-centric. This means that the abstract intent needs to be translated in a trustworthy, traceable way into compliance and security policies that the IT security infrastructure can enforce. Carrying out this mapping process manually is time-consuming, maintenance-intensive, costly, and error-prone. Compliance monitoring is also critical in order to be able to demonstrate compliance at any given point in time. The problem is further complicated by the need for business-driven IT agility, where IT policies and enforcement can change frequently, e.g. in Business Process Modelling (BPM)-driven Service-Oriented Architecture (SOA). Model Driven Security (MDS) is an innovative technology approach that can solve these problems as an extension of identity and access management (IAM) and authorization management (also called entitlement management). In this paper we illustrate the theory behind Model Driven Security for compliance, provide an improved and extended architecture, as well as a case study in the healthcare industry using our OpenPMF 2.0 technology.
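
    The core MDS move, expanding an abstract business-level policy into concrete rules a policy enforcement point can evaluate, can be sketched in a few lines. The roles, resources, and rule format below are hypothetical and are not OpenPMF's actual policy language; the point is only the mechanical, traceable expansion.

```python
# Sketch of model-driven policy expansion: an abstract policy model is
# mechanically turned into concrete access-control rules, which an
# enforcement point evaluates with deny-by-default semantics.

abstract_policy = {
    # "only treating staff may read a patient's record" (hypothetical)
    "patient_record": {"read": ["doctor", "nurse"], "write": ["doctor"]},
}

def generate_rules(policy):
    rules = []
    for resource, actions in policy.items():
        for action, roles in actions.items():
            for role in roles:
                rules.append((role, action, resource, "PERMIT"))
    return rules

def is_permitted(rules, role, action, resource):
    # anything not explicitly permitted is denied
    return (role, action, resource, "PERMIT") in rules

rules = generate_rules(abstract_policy)
print(is_permitted(rules, "nurse", "read", "patient_record"))   # True
print(is_permitted(rules, "nurse", "write", "patient_record"))  # False
```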

  6. Functional language and data flow architectures

    NASA Technical Reports Server (NTRS)

    Ercegovac, M. D.; Patel, D. R.; Lang, T.

    1983-01-01

    This is a tutorial article about language and architecture approaches for highly concurrent computer systems based on the functional style of programming. The discussion concentrates on the basic aspects of functional languages, and sequencing models such as data-flow, demand-driven and reduction which are essential at the machine organization level. Several examples of highly concurrent machines are described.
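
    The sequencing models the article surveys differ in what triggers a computation: data-driven execution produces results as soon as inputs exist, while demand-driven execution computes a value only when a consumer asks for it. As a loose, high-level analogy only, eager and lazy evaluation in Python show the same contrast.

```python
# Data-driven versus demand-driven sequencing in miniature: eager
# evaluation computes every element up front, while a generator only
# computes values when a consumer demands them.

def eager_squares(xs):
    return [x * x for x in xs]       # all results produced immediately

def lazy_squares(xs):
    for x in xs:
        yield x * x                  # each result produced on demand

data = range(1_000_000)
lazy = lazy_squares(data)
print(next(lazy), next(lazy))        # 0 1 -- only two squares computed
print(eager_squares([1, 2, 3]))      # [1, 4, 9]
```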

  7. C2 Product-Centric Approach to Transforming Current C4ISR Information Architectures

    DTIC Science & Technology

    2004-06-01

    each type of environment. For "Cultural Feature" Entity Kind only the "Land" Domain is defined and for "Environmental" Entity Kind only the... take advantage of both worlds. In particular, the unifying concept of a Model-Driven Architecture (MDA) under development by the Object Management... Exchange Requirements (IER) to an XML environment. FCS [4] developers have embraced both UML and XML for their architectures and MIP [5] too is migrating

  8. A generative tool for building health applications driven by ISO 13606 archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Martínez-Costa, Catalina; Fernández-Breis, Jesualdo Tomás

    2012-10-01

    The use of Electronic Healthcare Records (EHR) standards in the development of healthcare applications is crucial for achieving the semantic interoperability of clinical information. Advanced EHR standards make use of the dual model architecture, which provides a solution for clinical interoperability based on the separation of information and knowledge. However, the impact of such standards is limited by the scarce availability of tools that facilitate their usage and practical implementation. In this paper, we present an approach for the automatic generation of clinical applications for the ISO 13606 EHR standard, which is based on the dual model architecture. This generator has been generically designed, so it can be easily adapted to other dual model standards and can generate applications for multiple technological platforms. These properties are based on combining standards for the representation of generic user interfaces with model-driven engineering techniques.
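
    The generative step, walking an archetype and emitting a user-interface description, can be pictured with a heavily simplified sketch. The archetype shape, widget map, and output format below are hypothetical; real ISO 13606 archetypes and the paper's generator are far richer.

```python
# Hypothetical archetype-driven form generation: walk a (greatly
# simplified) archetype definition and emit one widget per element.

archetype = {  # toy stand-in for an ISO 13606 archetype (assumed shape)
    "BloodPressure": [
        {"name": "systolic", "type": "quantity", "units": "mmHg"},
        {"name": "diastolic", "type": "quantity", "units": "mmHg"},
        {"name": "position", "type": "coded_text",
         "options": ["sitting", "standing", "lying"]},
    ],
}

WIDGETS = {"quantity": "number field", "coded_text": "drop-down list"}

def generate_form(arch):
    for concept, elements in arch.items():
        print(f"Form: {concept}")
        for el in elements:
            detail = el.get("units", ", ".join(el.get("options", [])))
            print(f"  {el['name']}: {WIDGETS[el['type']]} [{detail}]")

generate_form(archetype)  # e.g. "systolic: number field [mmHg]"
```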

  9. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamoy

    1990-01-01

    The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three-resource testbed provide verification of the ATAMM model and the design procedure.

  10. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.

    1990-01-01

    Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  11. Model-driven methodology for rapid deployment of smart spaces based on resource-oriented architectures.

    PubMed

    Corredor, Iván; Bernardos, Ana M; Iglesias, Josué; Casar, José R

    2012-01-01

    Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.

  12. Key design elements of a data utility for national biosurveillance: event-driven architecture, caching, and Web service model.

    PubMed

    Tsui, Fu-Chiang; Espino, Jeremy U; Weng, Yan; Choudary, Arvinder; Su, Hoah-Der; Wagner, Michael M

    2005-01-01

    The National Retail Data Monitor (NRDM) has monitored over-the-counter (OTC) medication sales in the United States since December 2002. The NRDM collects data from over 18,600 retail stores and processes over 0.6 million sales records per day. This paper describes key architectural features that we have found necessary for a data utility component in a national biosurveillance system. These elements include event-driven architecture to provide analyses of data in near real time, multiple levels of caching to improve query response time, high availability through the use of clustered servers, scalable data storage through the use of storage area networks and a web-service function for interoperation with affiliated systems. The methods and architectural principles are relevant to the design of any production data utility for public health surveillance: systems that collect data from multiple sources in near real time for use by analytic programs and user interfaces that have substantial requirements for time-series data aggregated in multiple dimensions.
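
    Two of the named elements, event-driven processing and caching, fit in a few lines: each arriving sales record updates pre-aggregated totals, so queries read the cache instead of rescanning raw records. The record fields and category below are invented; the NRDM's real pipeline is far larger.

```python
# Event-driven ingestion with a small aggregate cache: handlers keep
# time-series aggregates current as each record arrives, so queries
# need not rescan raw records.
from collections import defaultdict

daily_totals = defaultdict(int)      # cache: (date, category) -> units

def on_sales_record(event):
    # invoked per record, in arrival order (near real time)
    key = (event["date"], event["category"])
    daily_totals[key] += event["units"]

def query(date, category):
    return daily_totals[(date, category)]   # served from the cache

on_sales_record({"date": "2005-01-01", "category": "cough", "units": 3})
on_sales_record({"date": "2005-01-01", "category": "cough", "units": 2})
print(query("2005-01-01", "cough"))  # 5
```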

  13. Virtual Sensor Web Architecture

    NASA Astrophysics Data System (ADS)

    Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.

    2006-12-01

    NASA envisions the development of smart sensor webs: intelligent and integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include i) rich descriptions of sensors as services based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models, with event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iii) development of autonomous model interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (the COSEC framework) that is being extended to create VSICS.

  14. Model-Driven Theme/UML

    NASA Astrophysics Data System (ADS)

    Carton, Andrew; Driver, Cormac; Jackson, Andrew; Clarke, Siobhán

    Theme/UML is an existing approach to aspect-oriented modelling that supports the modularisation and composition of concerns, including crosscutting ones, in design. To date, its lack of integration with model-driven engineering (MDE) techniques has limited its benefits across the development lifecycle. Here, we describe our work on facilitating the use of Theme/UML as part of an MDE process. We have developed a transformation tool that adopts model-driven architecture (MDA) standards. It defines a concern composition mechanism, implemented as a model transformation, to support the enhanced modularisation features of Theme/UML. We evaluate our approach by applying it to the development of mobile, context-aware applications, an application area characterised by many non-functional requirements that manifest themselves as crosscutting concerns.

  15. Hybrid Architectural Framework for C4ISR and Discrete-Event Simulation (DES) to Support Sensor-Driven Model Synthesis in Real-World Scenarios

    DTIC Science & Technology

    2013-09-01

    which utilizes FTA and then loads it into a DES engine to generate simulation results... Figure 21. This simulation architecture is... While Discrete Event Simulation (DES) can provide accurate time estimation and fast simulation speed, models utilizing it often suffer... C4ISR progress in MDW is developed in this research to demonstrate the feasibility of AEMF-DES and explore its potential. The simulation (MDSIM

  16. Notification Event Architecture for Traveler Screening: Predictive Traveler Screening Using Event Driven Business Process Management

    ERIC Educational Resources Information Center

    Lynch, John Kenneth

    2013-01-01

    Using an exploratory model of the 9/11 terrorists, this research investigates the linkages between Event Driven Business Process Management (edBPM) and decision making. Although the literature on the role of technology in efficient and effective decision making is extensive, research has yet to quantify the benefit of using edBPM to aid the…

  17. Composable Framework Support for Software-FMEA Through Model Execution

    NASA Astrophysics Data System (ADS)

    Kocsis, Imre; Patricia, Andras; Brancati, Francesco; Rossi, Francesco

    2016-08-01

    Performing Failure Modes and Effect Analysis (FMEA) during software architecture design is becoming a basic requirement in an increasing number of domains; however, due to the lack of standardized early design phase model execution, classic SW-FMEA approaches carry significant risks and are human effort-intensive even in processes that use Model-Driven Engineering. Recently, modelling languages with standardized executable semantics have emerged. Building on earlier results, this paper describes framework support for generating executable error propagation models from such models during software architecture design. The approach carries the promise of increased precision, decreased risk and more automated execution for SW-FMEA during dependability-critical system development.

  18. A development framework for semantically interoperable health information systems.

    PubMed

    Lopez, Diego M; Blobel, Bernd G M E

    2009-02-01

    Semantic interoperability is a basic challenge to be met for new generations of distributed, communicating and co-operating health information systems (HIS) enabling shared care and e-Health. Analysis, design, implementation and maintenance of such systems and their intrinsic architectures have to follow a unified development methodology. The Generic Component Model (GCM) is used as a framework for modeling any system to evaluate and harmonize state-of-the-art architecture development approaches and standards for health information systems as well as to derive a coherent architecture development framework for sustainable, semantically interoperable HIS and their components. The proposed methodology is based on the Rational Unified Process (RUP), taking advantage of its flexibility to be configured for integrating other architectural approaches such as Service-Oriented Architecture (SOA), Model-Driven Architecture (MDA), ISO 10746, and the HL7 Development Framework (HDF). Existing architectural approaches have been analyzed, compared and finally harmonized towards an architecture development framework for advanced health information systems. Starting with the requirements for semantic interoperability derived from paradigm changes for health information systems, and supported by formal software process engineering methods, an appropriate development framework for semantically interoperable HIS has been provided. The usability of the framework has been exemplified in a public health scenario.

  19. An Object Oriented Extensible Architecture for Affordable Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.

    2003-01-01

    Driven by a need to explore and develop propulsion systems that exceeded current computing capabilities, NASA Glenn embarked on a novel strategy leading to the development of an architecture that enables propulsion simulations never thought possible before. Full-engine, 3-dimensional computational fluid dynamics propulsion system simulations were deemed impossible due to the impracticality of the hardware and software computing systems required. However, with a software paradigm shift and an embracing of parallel and distributed processing, an architecture was designed to meet the needs of future propulsion system modeling. The author suggests that the architecture designed at the NASA Glenn Research Center for propulsion system modeling has potential for impacting the direction of development of affordable weapons systems currently under consideration by the Applied Vehicle Technology Panel (AVT).

  20. Extracting business vocabularies from business process models: SBVR and BPMN standards-based approach

    NASA Astrophysics Data System (ADS)

    Skersys, Tomas; Butleris, Rimantas; Kapocius, Kestutis

    2013-10-01

    Approaches for the analysis and specification of business vocabularies and rules are highly relevant topics in both the Business Process Management and Information Systems Development disciplines. However, in common practice of Information Systems Development, business modeling activities are still mostly empirical in nature. In this paper, basic aspects of an approach for the semi-automated extraction of business vocabularies from business process models are presented. The approach is based on the novel business modeling-level OMG standards "Business Process Model and Notation" (BPMN) and "Semantics of Business Vocabulary and Business Rules" (SBVR), thus contributing to OMG's vision of Model-Driven Architecture (MDA) and to model-driven development in general.
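
    Because BPMN models are serialized as XML, the first step of such an extraction can be shown concretely: pull task names out of a model and split them into candidate verb and noun concepts. The toy model and the naive verb-noun split below are assumptions for illustration; SBVR vocabulary structuring is far richer.

```python
# Minimal illustration of extracting candidate vocabulary terms from a
# BPMN 2.0 XML model: task names often carry verb-noun phrases that can
# seed a business vocabulary.
import xml.etree.ElementTree as ET

NS = {"bpmn": "http://www.omg.org/spec/BPMN/20100524/MODEL"}
model = """<bpmn:definitions
    xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL">
  <bpmn:process id="p1">
    <bpmn:task name="Approve invoice"/>
    <bpmn:task name="Ship order"/>
  </bpmn:process>
</bpmn:definitions>"""

root = ET.fromstring(model)
terms = set()
for task in root.findall(".//bpmn:task", NS):
    verb, *rest = task.get("name").split()   # naive verb-noun split
    terms.add((" ".join(rest), verb.lower()))
print(sorted(terms))  # [('invoice', 'approve'), ('order', 'ship')]
```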

  1. Model-Driven Methodology for Rapid Deployment of Smart Spaces Based on Resource-Oriented Architectures

    PubMed Central

    Corredor, Iván; Bernardos, Ana M.; Iglesias, Josué; Casar, José R.

    2012-01-01

    Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym. PMID:23012544

  2. Framework for a clinical information system.

    PubMed

    Van De Velde, R; Lansiers, R; Antonissen, G

    2002-01-01

    The design and implementation of a Clinical Information System architecture are presented. This architecture has been developed and implemented based on components following a strong underlying conceptual and technological model. Common Object Request Broker and n-tier technology featuring centralised and departmental clinical information systems as the back-end store for all clinical data are used. Servers located in the "middle" tier apply the clinical (business) model and application rules. The main characteristics are the focus on modelling and the reuse of both data and business logic. Scalability as well as adaptability to constantly changing requirements via component-driven computing are the main reasons for that approach.

  3. A task-based support architecture for developing point-of-care clinical decision support systems for the emergency department.

    PubMed

    Wilk, S; Michalowski, W; O'Sullivan, D; Farion, K; Sayyad-Shirabad, J; Kuziemsky, C; Kukawka, B

    2013-01-01

    The purpose of this study was to create a task-based support architecture for developing clinical decision support systems (CDSSs) that assist physicians in making decisions at the point-of-care in the emergency department (ED). The backbone of the proposed architecture was established by a task-based emergency workflow model for a patient-physician encounter. The architecture was designed according to an agent-oriented paradigm. Specifically, we used the O-MaSE (Organization-based Multi-agent System Engineering) method that allows for iterative translation of functional requirements into architectural components (e.g., agents). The agent-oriented paradigm was extended with ontology-driven design to implement ontological models representing the knowledge required by specific agents to operate. The task-based architecture allows for the creation of a CDSS that is aligned with the task-based emergency workflow model. It facilitates decoupling of executable components (agents) from embedded domain knowledge (ontological models), thus supporting their interoperability, sharing, and reuse. The generic architecture was implemented as a pilot system, MET3-AE, a CDSS to help with the management of pediatric asthma exacerbation in the ED. The system was evaluated in a hospital ED. The architecture allows for the creation of a CDSS that integrates support for all tasks from the task-based emergency workflow model and interacts with hospital information systems. The proposed architecture also allows for reusing and sharing system components and knowledge across disease-specific CDSSs.

  4. Key Design Elements of a Data Utility for National Biosurveillance: Event-driven Architecture, Caching, and Web Service Model

    PubMed Central

    Tsui, Fu-Chiang; Espino, Jeremy U.; Weng, Yan; Choudary, Arvinder; Su, Hoah-Der; Wagner, Michael M.

    2005-01-01

    The National Retail Data Monitor (NRDM) has monitored over-the-counter (OTC) medication sales in the United States since December 2002. The NRDM collects data from over 18,600 retail stores and processes over 0.6 million sales records per day. This paper describes key architectural features that we have found necessary for a data utility component in a national biosurveillance system. These elements include event-driven architecture to provide analyses of data in near real time, multiple levels of caching to improve query response time, high availability through the use of clustered servers, scalable data storage through the use of storage area networks and a web-service function for interoperation with affiliated systems. The methods and architectural principles are relevant to the design of any production data utility for public health surveillance—systems that collect data from multiple sources in near real time for use by analytic programs and user interfaces that have substantial requirements for time-series data aggregated in multiple dimensions. PMID:16779138

  5. A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture

    NASA Technical Reports Server (NTRS)

    Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.

    2005-01-01

    Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of a system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.
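
    The blackboard/plugin communication pattern being modeled can be imitated with channels in ordinary code: each plugin gets its own queue, and a blackboard process serializes publications to all of them. The sketch below is a toy analogue of those CSP processes, not Cougaar or its verification; the names and the message are invented.

```python
# CSP-flavoured sketch of the blackboard/plugin pattern: per-plugin
# channels (queues) and a blackboard process that publishes to all.
import queue
import threading

def blackboard(inbox, plugin_channels):
    while True:
        msg = inbox.get()
        if msg is None:                 # shutdown token
            break
        for ch in plugin_channels:      # publish to every plugin
            ch.put(msg)

inbox = queue.Queue()
channels = [queue.Queue(), queue.Queue()]
t = threading.Thread(target=blackboard, args=(inbox, channels))
t.start()

inbox.put({"task": "move cargo"})       # a plugin-style publication
inbox.put(None)
t.join()
print([ch.get() for ch in channels])    # both plugins saw the message
```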

  6. A model-driven approach for representing clinical archetypes for Semantic Web environments.

    PubMed

    Martínez-Costa, Catalina; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás; Maldonado, José Alberto

    2009-02-01

    The life-long clinical information of any person supported by electronic means configures his Electronic Health Record (EHR). This information is usually distributed among several independent and heterogeneous systems that may be syntactically or semantically incompatible. There are currently different standards for representing and exchanging EHR information among different systems. In advanced EHR approaches, clinical information is represented by means of archetypes. Most of these approaches use the Archetype Definition Language (ADL) to specify archetypes. However, ADL has some drawbacks when attempting to perform semantic activities in Semantic Web environments. In this work, Semantic Web technologies are used to specify clinical archetypes for advanced EHR architectures. The advantages of using the Ontology Web Language (OWL) instead of ADL are described and discussed in this work. Moreover, a solution combining Semantic Web and Model-driven Engineering technologies is proposed to transform ADL into OWL for the CEN EN13606 EHR architecture.

  7. Using Data-Driven Model-Brain Mappings to Constrain Formal Models of Cognition

    PubMed Central

    Borst, Jelmer P.; Nijboer, Menno; Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John R.

    2015-01-01

    In this paper we propose a method to create data-driven mappings from components of cognitive models to brain regions. Cognitive models are notoriously hard to evaluate, especially based on behavioral measures alone. Neuroimaging data can provide additional constraints, but this requires a mapping from model components to brain regions. Although such mappings can be based on the experience of the modeler or on a reading of the literature, a formal method is preferred to prevent researcher-based biases. In this paper we used model-based fMRI analysis to create a data-driven model-brain mapping for five modules of the ACT-R cognitive architecture. We then validated this mapping by applying it to two new datasets with associated models. The new mapping was at least as powerful as an existing mapping that was based on the literature, and indicated where the models were supported by the data and where they have to be improved. We conclude that data-driven model-brain mappings can provide strong constraints on cognitive models, and that model-based fMRI is a suitable way to create such mappings. PMID:25747601
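
    The mapping idea reduces to a regression problem: given a model component's predicted activity over time, find the brain region whose signal it explains best. The miniature below uses synthetic data and a plain correlation score; real model-based fMRI first convolves model predictions with a haemodynamic response function.

```python
# Data-driven model-brain mapping in miniature: score each region's
# signal against a model component's predicted activity and keep the
# best-fitting region. Synthetic data; region names are invented.
import numpy as np

rng = np.random.default_rng(0)
module_activity = rng.random(200)                  # model prediction
regions = {
    "PPC": module_activity * 1.5 + rng.normal(0, 0.1, 200),  # related
    "M1": rng.random(200),                                   # unrelated
}

def best_region(pred, signals):
    scores = {name: abs(np.corrcoef(pred, sig)[0, 1])
              for name, sig in signals.items()}
    return max(scores, key=scores.get)

print(best_region(module_activity, regions))  # -> 'PPC'
```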

  8. Exploring a model-driven architecture (MDA) approach to health care information systems development.

    PubMed

    Raghupathi, Wullianallur; Umar, Amjad

    2008-05-01

    To explore the potential of the model-driven architecture (MDA) in health care information systems development, an MDA is conceptualized and developed for a health clinic system to track patient information. A prototype of the MDA is implemented using an advanced MDA tool. The UML provides the underlying modeling support in the form of the class diagram. The PIM-to-PSM transformation rules are applied to generate the prototype application from the model. The result of the research is a complete MDA methodology for developing health care information systems. Additional insights gained include the development of transformation rules and documentation of the challenges in the application of MDA to health care. Design guidelines for future MDA applications are described. The model has the potential for generalizability. The overall approach supports limited interoperability and portability. The research demonstrates the applicability of the MDA approach to health care information systems development. When properly implemented, it has the potential to overcome the challenges of platform (vendor) dependency, lack of open standards, interoperability, portability, scalability, and the high cost of implementation.
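
    A PIM-to-PSM rule can be as small as a type-mapped code generator: the platform-independent class model below is mechanically turned into SQL DDL for one platform target. The clinic-flavoured model, the type map, and the single rule are drastic simplifications, invented to show the shape of such a transformation.

```python
# Toy PIM-to-PSM transformation: a platform-independent class model is
# mechanically turned into SQL DDL for one platform target.

pim = {  # platform-independent model of a clinic example (assumed)
    "Patient": {"id": "Integer", "name": "String", "dob": "Date"},
}
TYPE_MAP = {"Integer": "INT", "String": "VARCHAR(255)", "Date": "DATE"}

def to_sql(model):
    ddl = []
    for cls, attrs in model.items():
        cols = ",\n  ".join(f"{a} {TYPE_MAP[t]}" for a, t in attrs.items())
        ddl.append(f"CREATE TABLE {cls.lower()} (\n  {cols}\n);")
    return "\n".join(ddl)

print(to_sql(pim))   # CREATE TABLE patient ( id INT, ... );
```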

  9. Integrating geo web services for a user driven exploratory analysis

    NASA Astrophysics Data System (ADS)

    Moncrieff, Simon; Turdukulov, Ulanbek; Gulland, Elizabeth-Kate

    2016-04-01

    In data exploration, several online data sources may need to be dynamically aggregated or summarised over a spatial region, a time interval, or a set of attributes. With respect to thematic data, web services are mainly used to present results, leading to a supplier-driven service model that limits exploration of the data. In this paper we propose a user-driven service model based on geo web processing services. The aim of the framework is to provide a method for scalable and interactive access to various geographic data sources on the web. The architecture combines a data query, a processing technique, and a visualisation methodology to rapidly integrate and visually summarise properties of a dataset. We illustrate the environment with a health-related use case that derives the Age Standardised Rate, a dynamic index requiring the integration of existing interoperable web services of demographic data with the standalone, non-spatial, secure database servers used in health research. Although the example is specific to the health field, the architecture and the proposed approach are relevant and applicable to other fields that require the integration and visualisation of geo datasets from various web services, and we therefore believe the approach is generic.
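
    For concreteness, direct age standardisation weights age-specific rates by a standard population; the numbers in the sketch below are invented, but the arithmetic is what the composed services must deliver.

        # Age-specific event counts and populations (invented example data).
        cases      = [12,    85,    340]     # events per age band
        population = [50000, 40000, 20000]   # persons at risk per age band
        standard   = [0.40,  0.35,  0.25]    # standard population weights (sum to 1)

        # Direct standardisation: weighted sum of age-specific rates.
        asr = sum(w * c / p for w, c, p in zip(standard, cases, population))
        print(f"ASR: {asr * 100000:.1f} per 100,000")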

  10. PDS4 - Some Principles for Agile Data Curation

    NASA Astrophysics Data System (ADS)

    Hughes, J. S.; Crichton, D. J.; Hardman, S. H.; Joyner, R.; Algermissen, S.; Padams, J.

    2015-12-01

    PDS4, a research data management and curation system for NASA's Planetary Science Archive, was developed using principles that promote the characteristics of agile development. The result is an efficient system that produces better research data products while using fewer resources (time, effort, and money) and maximizes their usefulness for current and future scientists. The key principle is architectural: the PDS4 information architecture is developed and maintained independently of the infrastructure's process, application, and technology architectures. The information architecture is based on an ontology-based information model developed to leverage best practices from standard reference models for digital archives, digital object registries, and metadata registries, and to capture domain knowledge from a panel of planetary science domain experts. The information model provides a sharable, stable, and formal set of information requirements for the system and is the primary source of information for configuring most system components, including the product registry, search engine, validation and display tools, and production pipelines. Multi-level governance also allows effective management of informational elements at the common, discipline, and project levels. This presentation will describe the development principles, components, and uses of the information model, and how an information model-driven architecture exhibits characteristics of agile curation: early delivery, evolutionary development, adaptive planning, continuous improvement, and rapid, flexible response to change.

  11. High performance cellular level agent-based simulation with FLAME for the GPU.

    PubMed

    Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela

    2010-05-01

    Driven by the availability of experimental data and the ability to simulate a biological scale of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular-level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template-driven framework for agent-based modelling (ABM) on parallel architectures, ideally suited to the simulation of cellular systems. It is available for both high-performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but also avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has shown large performance improvements over more traditional ABM frameworks. This drastically reduces the time spent in the development and testing stages of modelling and creates the possibility of real-time visualisation for simple visual face-validation.
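
    FLAME itself specifies agents as communicating X-machines (in XMML with C function files), so the Python below is only a conceptual sketch of the message-driven update cycle such frameworks parallelize: agents first emit messages, then update state from the messages they can see. All names and thresholds are invented.

        import random

        class Cell:
            """Toy cellular agent: survives when enough neighbours signal growth."""
            def __init__(self, cell_id):
                self.cell_id = cell_id
                self.alive = True

            def emit(self):
                # Each agent posts a message; FLAME-style models communicate only this way.
                return {"sender": self.cell_id, "signal": random.random()}

            def step(self, inbox):
                # Read the other agents' messages, then update local state.
                growth = sum(m["signal"] for m in inbox if m["sender"] != self.cell_id)
                self.alive = growth > 0.3 * len(inbox)

        cells = [Cell(i) for i in range(100)]
        for _ in range(10):                       # synchronous iterations
            messages = [c.emit() for c in cells]  # communication phase
            for c in cells:
                c.step(messages)                  # update phase
        print(sum(c.alive for c in cells), "cells alive")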

  12. Coordination control of flexible manufacturing systems

    NASA Astrophysics Data System (ADS)

    Menon, Satheesh R.

    This work represents one of the first attempts to develop a model-driven system for coordination control of Flexible Manufacturing Systems (FMS). The structure and activities of the FMS are modeled using a colored Petri net based system. This approach has the advantage of modeling the concurrency inherent in the system, and it provides a method for encoding the system state, state transitions, and the transitions feasible in any given state. Further structural analysis (for detecting conflicting actions, deadlocks that might occur during operation, etc.) can be performed. The problem of implementing and testing the behavior of existing dynamic scheduling approaches in simulations of realistic situations is also addressed. A simulation architecture was proposed, and performance evaluation was carried out to establish the correctness of the model, the stability of the system from structural (deadlocks) and temporal (boundedness of backlogs) points of view, and to collect statistics for performance measures such as machine and robot utilizations, average wait times, and idle times of resources. A real-time implementation architecture for the coordination controller was also developed and implemented in a software-simulated environment. Given the current technology of FMS control, the model-driven colored Petri net approach promises a very flexible control environment.
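
    In an ordinary (uncoloured) Petri net, the encoding of state and feasible transitions that the abstract refers to reduces to token arithmetic over places; a minimal sketch with invented FMS places:

        # Places hold token counts; transitions consume and produce tokens.
        marking = {"part_waiting": 2, "machine_free": 1, "machine_busy": 0}

        transitions = {
            "start_job": ({"part_waiting": 1, "machine_free": 1}, {"machine_busy": 1}),
            "end_job":   ({"machine_busy": 1}, {"machine_free": 1}),
        }

        def enabled(name):
            pre, _ = transitions[name]
            return all(marking[p] >= n for p, n in pre.items())

        def fire(name):
            assert enabled(name), f"{name} is not enabled"
            pre, post = transitions[name]
            for p, n in pre.items():
                marking[p] -= n
            for p, n in post.items():
                marking[p] += n

        fire("start_job")
        print(marking, "feasible:", [t for t in transitions if enabled(t)])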

  13. A generic architecture for an adaptive, interoperable and intelligent type 2 diabetes mellitus care system.

    PubMed

    Uribe, Gustavo A; Blobel, Bernd; López, Diego M; Schulz, Stefan

    2015-01-01

    Chronic diseases such as Type 2 Diabetes Mellitus (T2DM) constitute a major burden on the global health economy. T2DM care management requires a multi-disciplinary and multi-organizational approach, which, because of differing languages and terminologies, education, experience, skills, etc., poses a special interoperability challenge. The solution is a flexible, scalable, business-controlled, adaptive, knowledge-based, intelligent system following a systems-oriented, architecture-centric, ontology-based and policy-driven approach. The architecture of real systems is described using the basics and principles of the Generic Component Model (GCM), and the functional aspects of the system are represented with the Business Process Modeling Notation (BPMN). The resulting system architecture is presented using a GCM graphical notation, class diagrams, and BPMN diagrams. The architecture-centric approach considers the compositional nature of the real-world system and its functionalities, guarantees coherence, and supports correct inferences. The level of generality provided in this paper facilitates use-case-specific adaptations of the system; in this way, intelligent, adaptive and interoperable T2DM care systems can be derived from the presented model, as shown in another publication.

  14. Complex and hierarchical micelle architectures from diblock copolymers using living, crystallization-driven polymerizations.

    PubMed

    Gädt, Torben; Ieong, Nga Sze; Cambridge, Graeme; Winnik, Mitchell A; Manners, Ian

    2009-02-01

    Block copolymers consist of two or more chemically distinct polymer segments, or blocks, connected by a covalent link. In a selective solvent for one of the blocks, core-corona micelle structures are formed. We demonstrate that living polymerizations driven by the epitaxial crystallization of a core-forming metalloblock represent a synthetic tool that can be used to generate complex and hierarchical micelle architectures from diblock copolymers. The use of platelet micelles as initiators enables the formation of scarf-like architectures in which cylindrical micelle tassels of controlled length are grown from specific crystal faces. A similar process enables the fabrication of brushes of cylindrical micelles on a crystalline homopolymer substrate. Living polymerizations driven by heteroepitaxial growth can also be accomplished and are illustrated by the formation of tri- and pentablock and scarf architectures with cylinder-cylinder and platelet-cylinder connections, respectively, that involve different core-forming metalloblocks.

  15. A Distributed Laboratory for Event-Driven Coastal Prediction and Hazard Planning

    NASA Astrophysics Data System (ADS)

    Bogden, P.; Allen, G.; MacLaren, J.; Creager, G. J.; Flournoy, L.; Sheng, Y. P.; Graber, H.; Graves, S.; Conover, H.; Luettich, R.; Perrie, W.; Ramakrishnan, L.; Reed, D. A.; Wang, H. V.

    2006-12-01

    The 2005 Atlantic hurricane season was the most active in recorded history. Collectively, the 2005 hurricanes caused more than 2,280 deaths and record damages of over 100 billion dollars. Of the storms that made landfall, Dennis, Emily, Katrina, Rita, and Wilma caused most of the destruction. Accurate predictions of storm-driven surge, wave height, and inundation can save lives and help keep recovery costs down, provided the information gets to emergency response managers in time. The information must be available well in advance of landfall so that responders can weigh the costs of unnecessary evacuation against the costs of inadequate preparation. The SURA Coastal Ocean Observing and Prediction (SCOOP) Program is a multi-institution collaboration implementing a modular, distributed, service-oriented architecture for real-time prediction and visualization of the impacts of extreme atmospheric events. The modular infrastructure enables real-time prediction of multi-scale, multi-model, dynamic, data-driven applications. SURA institutions are working together to create a virtual and distributed laboratory integrating coastal models, simulation data, and observations with computational resources and high-speed networks. The loosely coupled architecture allows teams of computer and coastal scientists at multiple institutions to innovate complex system components that are interconnected with relatively stable interfaces. The operational system standardizes at the interface level to enable substantial innovation by complementary communities of coastal and computer scientists. This architectural philosophy solves a long-standing problem associated with the transition from research to operations. The SCOOP Program thereby implements a prototype laboratory consistent with the vision of a national, multi-agency initiative called the Integrated Ocean Observing System (IOOS). Several service-oriented components of the SCOOP enterprise architecture have already been designed and implemented, including data archive and transport services, metadata registry and retrieval (catalog), resource management, and portal interfaces. SCOOP partners are integrating these at the service level, implementing reconfigurable workflows for several kinds of user scenarios, and working with resource providers to prototype new policies and technologies for on-demand computing.

  16. MoSeS: Modelling and Simulation for e-Social Science.

    PubMed

    Townend, Paul; Xu, Jie; Birkin, Mark; Turner, Andy; Wu, Belinda

    2009-07-13

    MoSeS (Modelling and Simulation for e-Social Science) is a research node of the National Centre for e-Social Science. MoSeS uses e-Science techniques to execute an event-driven model that simulates discrete demographic processes; this allows us to project the UK population 25 years into the future. This paper describes the architecture, simulation methodology and latest results obtained by MoSeS.
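
    The flavour of such a simulation can be conveyed by a drastically simplified annual-event loop over a synthetic population; the rates below are invented, and the sketch ignores the migration and household dynamics that MoSeS models.

        import random

        random.seed(42)
        population = [random.randint(0, 90) for _ in range(10000)]  # ages

        BIRTH_RATE, DEATH_RATE = 0.012, 0.009  # crude annual rates (illustrative)

        for year in range(25):  # project 25 years ahead, as in MoSeS
            survivors = [age + 1 for age in population if random.random() > DEATH_RATE]
            births = [0] * int(BIRTH_RATE * len(population))
            population = survivors + births

        print("projected population:", len(population))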

  17. Intelligent fuzzy controller for event-driven real time systems

    NASA Technical Reports Server (NTRS)

    Grantner, Janos; Patyra, Marek; Stachowicz, Marian S.

    1992-01-01

    Most of the known linguistic models are essentially static; that is, time is not a parameter in describing the behavior of the object's model. In this paper we show a model for synchronous finite state machines based on fuzzy logic. Such finite state machines can be used to build both event-driven, time-varying, rule-based systems and the control unit section of a fuzzy logic computer. The architecture of a pipelined intelligent fuzzy controller is presented, and the linguistic model is represented by an overall fuzzy relation stored in a single rule memory. A VLSI integrated circuit implementation of the fuzzy controller is suggested. At a clock rate of 30 MHz, the controller can perform 3 MFLIPS (million fuzzy logic inferences per second) on multi-dimensional fuzzy data.
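
    A single stored fuzzy relation of the kind described can be exercised with max-min composition; in the numpy sketch below, the relation R and the fuzzy state vector are invented for illustration.

        import numpy as np

        # Overall fuzzy relation over (current state x next state), loosely
        # mirroring the single rule memory described in the paper.
        R = np.array([
            [0.9, 0.1, 0.0],
            [0.2, 0.8, 0.3],
            [0.0, 0.4, 0.7],
        ])

        state = np.array([1.0, 0.2, 0.0])  # fuzzy state vector (membership degrees)

        # Max-min composition: next_state[j] = max_i min(state[i], R[i, j]).
        next_state = np.max(np.minimum(state[:, None], R), axis=0)
        print(next_state)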

  18. MODULAR APPLICATION OF COMPUTATIONAL MODELS OF INHALED REACTIVE GAS DOSIMETRY FOR RISK ASSESSMENT OF RESPIRATORY TRACT TOXICITY: CHLORINE

    EPA Science Inventory

    Inhaled reactive gases typically cause respiratory tract toxicity with a prominent proximal to distal lesion pattern. This pattern is largely driven by airflow and interspecies differences between rodents and humans result from factors such as airway architecture, ventilation ra...

  19. Inter-species activity correlations reveal functional correspondences between monkey and human brain areas

    PubMed Central

    Mantini, Dante; Hasson, Uri; Betti, Viviana; Perrucci, Mauro G.; Romani, Gian Luca; Corbetta, Maurizio; Orban, Guy A.; Vanduffel, Wim

    2012-01-01

    Evolution-driven functional changes in the primate brain are typically assessed by aligning monkey and human activation maps using cortical surface expansion models. These models use putative homologous areas as registration landmarks, assuming they are functionally correspondent. In cases where functional changes have occurred in an area, this assumption makes it impossible to reveal whether other areas may have assumed the lost functions. Here we describe a method to examine functional correspondences across species. Without making spatial assumptions, we assess similarities in sensory-driven functional magnetic resonance imaging responses between monkey (Macaca mulatta) and human brain areas by means of temporal correlation. Using natural vision data, we reveal regions for which functional processing has shifted to topologically divergent locations during evolution. We conclude that substantial evolution-driven functional reorganizations have occurred, not always consistent with cortical expansion processes. This novel framework for evaluating changes in functional architecture is crucial to building more accurate evolutionary models. PMID:22306809
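
    The correlation step can be sketched directly: z-score each area's time course and correlate every monkey area with every human area, then read off the best match. The data below are synthetic, with one correspondence planted.

        import numpy as np

        rng = np.random.default_rng(1)
        n_time, n_monkey, n_human = 300, 6, 10

        monkey = rng.standard_normal((n_time, n_monkey))
        human = rng.standard_normal((n_time, n_human))
        human[:, 0] = monkey[:, 2] + 0.5 * rng.standard_normal(n_time)  # planted match

        # z-score, then correlate every monkey area with every human area.
        mz = (monkey - monkey.mean(0)) / monkey.std(0)
        hz = (human - human.mean(0)) / human.std(0)
        corr = mz.T @ hz / n_time  # (n_monkey x n_human) correlation matrix

        best = corr.argmax(axis=1)
        print("monkey area -> best-matching human area:", best)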

  20. Modular design, application architecture, and usage of a self-service model for enterprise data delivery: The Duke Enterprise Data Unified Content Explorer (DEDUCE)

    PubMed Central

    Horvath, Monica M.; Rusincovitch, Shelley A.; Brinson, Stephanie; Shang, Howard C.; Evans, Steve; Ferranti, Jeffrey M.

    2015-01-01

    Purpose Data generated in the care of patients are widely used to support clinical research and quality improvement, which has hastened the development of self-service query tools. User interface design for such tools, execution of query activity, and underlying application architecture have not been widely reported, and existing tools reflect a wide heterogeneity of methods and technical frameworks. We describe the design, application architecture, and use of a self-service model for enterprise data delivery within Duke Medicine. Methods Our query platform, the Duke Enterprise Data Unified Content Explorer (DEDUCE), supports enhanced data exploration, cohort identification, and data extraction from our enterprise data warehouse (EDW) using a series of modular environments that interact with a central keystone module, Cohort Manager (CM). A data-driven application architecture is implemented through three components: an application data dictionary, the concept of “smart dimensions”, and dynamically-generated user interfaces. Results DEDUCE CM allows flexible hierarchies of EDW queries within a grid-like workspace. A cohort “join” functionality allows switching between filters based on criteria occurring within or across patient encounters. To date, 674 users have been trained and activated in DEDUCE, and logon activity shows a steady increase, with variability between months. A comparison of filter conditions and export criteria shows that these activities have different patterns of usage across subject areas. Conclusions Organizations with sophisticated EDWs may find that users benefit from development of advanced query functionality, complementary to the user interfaces and infrastructure used in other well-published models. Driven by its EDW context, the DEDUCE application architecture was also designed to be responsive to source data and to allow modification through alterations in metadata rather than programming, allowing an agile response to source system changes. PMID:25051403
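
    A metadata-driven interface of this sort can be approximated by letting a data dictionary, rather than code, enumerate the filter widgets; the fields below are invented stand-ins, not DEDUCE's actual dictionary schema.

        # Application data dictionary: metadata drives the query interface, so new
        # source fields appear in the UI by editing metadata, not code (names invented).
        data_dictionary = {
            "age":       {"label": "Patient age", "type": "number", "filter": "range"},
            "encounter": {"label": "Encounter type", "type": "code", "filter": "picklist",
                          "values": ["inpatient", "outpatient", "emergency"]},
        }

        def render_filters(dictionary):
            """Emit a widget description per field, as a dynamic UI layer might."""
            for field, meta in dictionary.items():
                widget = {"field": field, "label": meta["label"], "kind": meta["filter"]}
                if meta["filter"] == "picklist":
                    widget["options"] = meta["values"]
                yield widget

        for w in render_filters(data_dictionary):
            print(w)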

  1. Modular design, application architecture, and usage of a self-service model for enterprise data delivery: the Duke Enterprise Data Unified Content Explorer (DEDUCE).

    PubMed

    Horvath, Monica M; Rusincovitch, Shelley A; Brinson, Stephanie; Shang, Howard C; Evans, Steve; Ferranti, Jeffrey M

    2014-12-01

    Data generated in the care of patients are widely used to support clinical research and quality improvement, which has hastened the development of self-service query tools. User interface design for such tools, execution of query activity, and underlying application architecture have not been widely reported, and existing tools reflect a wide heterogeneity of methods and technical frameworks. We describe the design, application architecture, and use of a self-service model for enterprise data delivery within Duke Medicine. Our query platform, the Duke Enterprise Data Unified Content Explorer (DEDUCE), supports enhanced data exploration, cohort identification, and data extraction from our enterprise data warehouse (EDW) using a series of modular environments that interact with a central keystone module, Cohort Manager (CM). A data-driven application architecture is implemented through three components: an application data dictionary, the concept of "smart dimensions", and dynamically-generated user interfaces. DEDUCE CM allows flexible hierarchies of EDW queries within a grid-like workspace. A cohort "join" functionality allows switching between filters based on criteria occurring within or across patient encounters. To date, 674 users have been trained and activated in DEDUCE, and logon activity shows a steady increase, with variability between months. A comparison of filter conditions and export criteria shows that these activities have different patterns of usage across subject areas. Organizations with sophisticated EDWs may find that users benefit from development of advanced query functionality, complementary to the user interfaces and infrastructure used in other well-published models. Driven by its EDW context, the DEDUCE application architecture was also designed to be responsive to source data and to allow modification through alterations in metadata rather than programming, allowing an agile response to source system changes. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. The Best of all Possible Worlds: Applying the Model Driven Architecture Approach to a JC3IEDM OWL Ontology Modeled in UML

    DTIC Science & Technology

    2014-04-25

    EA’s Java application programming interface (API), the team built a tool called OWL2EA that can ingest an OWL file and generate the corresponding UML...ObjectItemStructure specification shown in Figure 10. Running this script in the relational database server MySQL creates the physical schema that

  3. Managing Complex Interoperability Solutions using Model-Driven Architecture

    DTIC Science & Technology

    2011-06-01

    such as Oracle or MySQL. Each data model for a specific RDBMS is a distinct PSM. Or the system may want to exchange information with other C2...reduced number of transformations, e.g., from an RDBMS physical schema to the corresponding SQL script needed to instantiate the tables in a relational...importance of models. In engineering, a model serves several purposes: 1. It presents an abstract view of a complex system or of a complex information

  4. Reference Architecture Model Enabling Standards Interoperability.

    PubMed

    Blobel, Bernd

    2017-01-01

    Advanced health and social services paradigms are supported by a comprehensive set of domains managed by different scientific disciplines. Interoperability has to evolve beyond information and communication technology (ICT) concerns to include the real-world business domains and their processes as well as the individual context of all actors involved. The system must therefore properly reflect the environment in front of and around the computer as an essential and even defining part of the health system. This paper introduces an ICT-independent, system-theoretical, ontology-driven reference architecture model allowing the representation and harmonization of all domains involved, including the transformation into an appropriate ICT design and implementation. The entire process is completely formalized and can therefore be fully automated.

  5. Framework for a clinical information system.

    PubMed

    Van de Velde, R

    2000-01-01

    The current status of our work towards the design and implementation of a reference architecture for a Clinical Information System is presented. This architecture has been developed and implemented based on components following a strong underlying conceptual and technological model. Common Object Request Broker Architecture (CORBA) and n-tier technology, featuring centralised and departmental clinical information systems as the back-end store for all clinical data, are used. Servers located in the 'middle' tier apply the clinical (business) model and application rules to communicate with so-called 'thin client' workstations. The main characteristics are the focus on modelling and the reuse of both data and business logic, as there is a shift away from data and functional modelling towards object modelling. Scalability, as well as adaptability to constantly changing requirements via component-driven computing, are the main reasons for this approach.

  6. Strategic Industrial Alliances in Paper Industry: XML- vs Ontology-Based Integration Platforms

    ERIC Educational Resources Information Center

    Naumenko, Anton; Nikitin, Sergiy; Terziyan, Vagan; Zharko, Andriy

    2005-01-01

    Purpose: To identify cases related to design of ICT platforms for industrial alliances, where the use of Ontology-driven architectures based on Semantic web standards is more advantageous than application of conventional modeling together with XML standards. Design/methodology/approach: A comparative analysis of the two latest and the most obvious…

  7. Semantic interoperability--HL7 Version 3 compared to advanced architecture standards.

    PubMed

    Blobel, B G M E; Engel, K; Pharow, P

    2006-01-01

    To meet the challenge of high-quality and efficient care, highly specialized and distributed healthcare establishments have to communicate and co-operate in a semantically interoperable way. Information and communication technology must be open, flexible, scalable, knowledge-based and service-oriented as well as secure and safe. To enable semantic interoperability, a unified process for defining and implementing the architecture (i.e., the structure and functions of the cooperating systems' components) and the approach for knowledge representation (i.e., the information used and its interpretation, algorithms, etc.) has to be defined in a harmonized way. Deploying the Generic Component Model, systems and their components, underlying concepts, and applied constraints must be formally modeled, strictly separating platform-independent from platform-specific models. As HL7 Version 3 claims to represent the most successful standard for semantic interoperability, HL7 has been analyzed regarding the requirements for model-driven, service-oriented design of semantically interoperable information systems, thereby moving from a communication to an architecture paradigm. The approach is compared with advanced architectural approaches for information systems such as OMG's CORBA 3 and EHR systems such as GEHR/openEHR and CEN EN 13606 Electronic Health Record Communication. HL7 Version 3 is maturing towards an architectural approach to semantic interoperability. Despite current differences, there is close collaboration between the teams involved, guaranteeing convergence between the competing approaches.

  8. The Action Execution Process Implemented in Different Cognitive Architectures: A Review

    NASA Astrophysics Data System (ADS)

    Dong, Daqi; Franklin, Stan

    2014-12-01

    An agent achieves its goals by interacting with its environment, cyclically choosing and executing suitable actions. An action execution process is a reasonable and critical part of an entire cognitive architecture, because the process of generating executable motor commands is not only driven by low-level environmental information, but is also initiated and affected by the agent's high-level mental processes. This review focuses on cognitive models of action, or more specifically, of the action execution process, as implemented in a set of popular cognitive architectures. We examine the representations and procedures inside the action execution process, as well as the cooperation between action execution and other high-level cognitive modules. We finally conclude with some general observations regarding the nature of action execution.

  9. Integrating MPI and deduplication engines: a software architecture roadmap.

    PubMed

    Baksi, Dibyendu

    2009-03-01

    The objective of this paper is to clarify the major concepts related to the architecture and design of patient identity management software systems, so that an implementor looking to solve a specific integration problem in the context of a Master Patient Index (MPI) and a deduplication engine can address the relevant issues. The ideas presented are illustrated in the context of a reference use case from the Integrating the Healthcare Enterprise (IHE) Patient Identifier Cross-referencing (PIX) profile. Sound software engineering principles, using the latest design paradigm of model-driven architecture (MDA), are applied to define different views of the architecture. The main contribution of the paper is a clear software architecture roadmap for implementors of patient identity management systems. A conceptual design in terms of static and dynamic views of the interfaces is provided as an example of a platform-independent model. This makes the roadmap applicable to any specific MPI solution, deduplication library, or software platform. Stakeholders needing to integrate MPIs and deduplication engines can evaluate vendor-specific solutions and software platform technologies in terms of fundamental concepts and can make informed decisions that preserve investment. This also allows freedom from vendor lock-in and the ability to kick-start integration efforts based on a solid architecture.
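
    In the MDA sense, the platform-independent model amounts to interfaces that any vendor MPI or deduplication engine could realize; the Python sketch below is one hypothetical rendering of such interfaces, not the paper's UML.

        from abc import ABC, abstractmethod
        from dataclasses import dataclass

        @dataclass
        class PatientRecord:
            local_id: str
            domain: str           # assigning authority, as in the IHE PIX profile
            demographics: dict

        class DeduplicationEngine(ABC):
            """Platform-independent view of a matching engine (illustrative)."""
            @abstractmethod
            def score(self, a: PatientRecord, b: PatientRecord) -> float: ...

        class MasterPatientIndex(ABC):
            """Platform-independent view of an MPI; a PSM would bind this to a vendor."""
            @abstractmethod
            def cross_reference(self, record: PatientRecord) -> list[str]:
                """Return identifiers for the same patient in other domains."""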

  10. Virtual Plants Need Water Too: Functional-Structural Root System Models in the Context of Drought Tolerance Breeding

    PubMed Central

    Ndour, Adama; Vadez, Vincent; Pradal, Christophe; Lucas, Mikaël

    2017-01-01

    Developing a sustainable agricultural model is one of the great challenges of the coming years. The agricultural practices inherited from the Green Revolution of the 1960s show their limits today, and new paradigms need to be explored to counter rising issues such as the multiplication of climate-change related drought episodes. Two such new paradigms are the use of functional-structural plant models to complement and rationalize breeding approaches and a renewed focus on root systems as untapped sources of plant amelioration. Since the late 1980s, numerous functional and structural models of root systems were developed and used to investigate the properties of root systems in soil or lab conditions. In this review, we focus on the conception and use of such root models in the broader context of research on root-driven drought tolerance, on the basis of root system architecture (RSA) phenotyping. Such models result from the integration of architectural, physiological and environmental data. Here, we consider the different phenotyping techniques allowing for root architectural and physiological study and their limits. We discuss how QTL and breeding studies support the manipulation of RSA as a way to improve drought resistance. We then go over the integration of the generated data within architectural models, how those architectural models can be coupled with functional hydraulic models, and how functional parameters can be measured to feed those models. We then consider the assessment and validation of those hydraulic models through comparison of simulations with experiments. Finally, we discuss the upcoming challenges facing functional-structural root system modeling approaches in the context of breeding. PMID:29018456

  11. Virtual Plants Need Water Too: Functional-Structural Root System Models in the Context of Drought Tolerance Breeding.

    PubMed

    Ndour, Adama; Vadez, Vincent; Pradal, Christophe; Lucas, Mikaël

    2017-01-01

    Developing a sustainable agricultural model is one of the great challenges of the coming years. The agricultural practices inherited from the Green Revolution of the 1960s show their limits today, and new paradigms need to be explored to counter rising issues such as the multiplication of climate-change related drought episodes. Two such new paradigms are the use of functional-structural plant models to complement and rationalize breeding approaches and a renewed focus on root systems as untapped sources of plant amelioration. Since the late 1980s, numerous functional and structural models of root systems were developed and used to investigate the properties of root systems in soil or lab conditions. In this review, we focus on the conception and use of such root models in the broader context of research on root-driven drought tolerance, on the basis of root system architecture (RSA) phenotyping. Such models result from the integration of architectural, physiological and environmental data. Here, we consider the different phenotyping techniques allowing for root architectural and physiological study and their limits. We discuss how QTL and breeding studies support the manipulation of RSA as a way to improve drought resistance. We then go over the integration of the generated data within architectural models, how those architectural models can be coupled with functional hydraulic models, and how functional parameters can be measured to feed those models. We then consider the assessment and validation of those hydraulic models through comparison of simulations with experiments. Finally, we discuss the upcoming challenges facing functional-structural root system modeling approaches in the context of breeding.

  12. Goal-Driven Autonomy and Robust Architecture for Long-Duration Missions (Year 1: 1 July 2013 - 31 July 2014)

    DTIC Science & Technology

    2014-09-30

    [Architecture diagram residue; recoverable labels: Goal Management, World Model, Episodic Memory, Semantic Memory, Meta-Level Control, Introspective Monitoring, Reasoning Trace, Metaknowledge, Self Model.] ...it is from incorrect or missing memory associations (i.e., indices). Similarly, correct information may exist in the input stream, but may not be

  13. The Best of All Possible Worlds: Applying the Model Driven Architecture Approach to a JC3IEDM OWL Ontology Modeled in UML

    DTIC Science & Technology

    2014-06-01

    from the ODM standard. Leveraging SPARX EA’s Java application programming interface (API), the team built a tool called OWL2EA that can ingest an OWL...server MySQL creates the physical schema that enables a user to store and retrieve data conforming to the vocabulary of the JC3IEDM.

  14. Software Product Lines: Report of the 2009 U.S. Army Software Product Line Workshop

    DTIC Science & Technology

    2009-04-01

    record system was fielded in 2008. One early challenge for Overwatch was coming up with a funding model that would support core asset development (a...match the organizational model to the funding model. Product line architecture is essential. Address product line requirements up front. Put processes...when trying to move from a customer-driven, product-specific funding model to one in which at least some of the funds are allocated to the creation and

  15. Variation of the hydraulic properties within gravity-driven deposits in basinal carbonates

    NASA Astrophysics Data System (ADS)

    Jablonska, D.; Zambrano, M.; Emanuele, T.; Di Celma, C.

    2017-12-01

    Deepwater gravity-driven deposits represent important stratigraphic heterogeneities within basinal sedimentary successions. A poor understanding of their distribution, internal architecture (at meso- and micro-scale) and hydraulic properties (porosity and permeability) may lead to unexpected compartmentalization issues in reservoir analysis. In this study, we examine gravity-driven deposits within the basinal-carbonate Maiolica Formation adjacent to the Apulian Carbonate Platform, southern Italy. The Maiolica Formation consists of horizontal layers of thin-bedded cherty pelagic limestones, often intercalated with mass-transport deposits (slumps, debris-flow deposits) and calcarenites of diverse thickness (0.1 m - 40 m) and lateral extent (100 m - >500 m). Locally, gravity-driven deposits compose up to 60% of the exposed succession. These deposits display a broad array of internal architectures (from faulted and folded strata to conglomerates) and various textures. To further constrain the variation of internal architectures and fracture distribution within gravity-driven deposits, field sedimentological and structural analyses were performed. To examine the texture and hydraulic properties of the various lithofacies, laboratory porosity measurements of suitable rock samples were undertaken. These data were supported by 3D pore-network quantitative analysis of X-ray computed microtomography (MicroCT) images acquired at resolutions of 1.25 and 2.0 microns. This analysis helped to describe the geometrical and morphological properties of pores and grains (such as size, shape, and specific surface area) and the hydraulic properties (porosity and permeability) of the various lithofacies. The integration of these analyses allowed us to show how the internal architecture and the hydraulic properties vary among different types of gravity-driven deposits within the basinal carbonate succession.

  16. An Object Oriented Extensible Architecture for Affordable Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Lytle, John K. (Technical Monitor)

    2002-01-01

    Driven by a need to explore and develop propulsion systems that exceeded current computing capabilities, NASA Glenn embarked on a novel strategy leading to the development of an architecture that enables propulsion simulations never before thought possible. Full-engine, three-dimensional computational fluid dynamics propulsion system simulations were deemed impossible due to the impracticality of the hardware and software computing systems required. However, with a software paradigm shift and an embracing of parallel and distributed processing, an architecture was designed to meet the needs of future propulsion system modeling. The author suggests that the architecture designed at the NASA Glenn Research Center for propulsion system modeling has the potential to influence the direction of development of affordable weapons systems currently under consideration by the Applied Vehicle Technology Panel (AVT). This paper discusses the salient features of the NPSS architecture, including its interface layer, object layer, implementation for accessing legacy codes, numerical zooming infrastructure, and its computing layer. The computing layer focuses on the use and deployment of these propulsion simulations on parallel and distributed computing platforms, which has been the focus of NASA Ames. Additional features of the object-oriented architecture that support multidisciplinary (MD) coupling, computer-aided design (CAD) access and MD coupling objects will be discussed, along with the successes, challenges and benefits of implementing this architecture.

  17. MDA-based EHR application security services.

    PubMed

    Blobel, Bernd; Pharow, Peter

    2004-01-01

    Component-oriented, distributed, virtual EHR systems have to meet enhanced security and privacy requirements. In the context of advanced architectural paradigms such as component-oriented, model-driven, and knowledge-based design, the standardised security services needed have to be specified and implemented in an integrated way following the same paradigm. This concerns the deployment of formal models, meta-languages, reference models such as the ISO RM-ODP, and development as well as implementation tools. The results of the international projects presented here proceed along those lines.

  18. MBSE-Driven Visualization of Requirements Allocation and Traceability

    NASA Technical Reports Server (NTRS)

    Jackson, Maddalena; Wilkerson, Marcus

    2016-01-01

    In a Model Based Systems Engineering (MBSE) infusion effort, there is usually a concerted effort to define the information architecture, ontologies, and patterns that drive the construction and architecture of MBSE models, but less attention is given to the logical follow-on of that effort: how to practically leverage the resulting semantic richness of a well-formed, populated model to enable systems engineers to work more effectively, as MBSE promises. While ontologies and patterns are absolutely necessary, an MBSE effort must also design and provide practical demonstrations of value (through human-understandable representations of model data that address stakeholder concerns) or it will not succeed. This paper discusses the opportunities that exist for visualization in making the richness of a well-formed model accessible to stakeholders, specifically stakeholders who rely on the model for their day-to-day work. It discusses the value added by MBSE-driven visualizations in the context of a small case study of interactive visualizations created and used on NASA's proposed Europa Mission. The case study visualizations were created for the purpose of understanding and exploring targeted aspects of requirements flow and allocation, and of comparing the structure of that flow-down to a conceptual project decomposition. The work presented in this paper is an example of a product that leverages the richness and formalisms of our knowledge representation while also responding to the quality attributes systems engineers care about.

  19. An Emotion Aware Task Automation Architecture Based on Semantic Technologies for Smart Offices

    PubMed Central

    2018-01-01

    The evolution of the Internet of Things leads to new opportunities for the contemporary notion of smart offices, where employees can benefit from automation to maximize their productivity and performance. However, although extensive research has been dedicated to analyzing the impact of workers’ emotions on their job performance, there is still a lack of pervasive environments that take emotional behaviour into account. In addition, integrating new components into smart environments is not straightforward. To face these challenges, this article proposes an architecture for emotion aware automation platforms based on semantic event-driven rules to automate the adaptation of the workplace to the employee’s needs. The main contributions of this paper are: (i) the design of an emotion aware automation platform architecture for smart offices; (ii) the semantic modelling of the system; and (iii) the implementation and evaluation of the proposed architecture in a real scenario. PMID:29748468

  20. An Emotion Aware Task Automation Architecture Based on Semantic Technologies for Smart Offices.

    PubMed

    Muñoz, Sergio; Araque, Oscar; Sánchez-Rada, J Fernando; Iglesias, Carlos A

    2018-05-10

    The evolution of the Internet of Things leads to new opportunities for the contemporary notion of smart offices, where employees can benefit from automation to maximize their productivity and performance. However, although extensive research has been dedicated to analyzing the impact of workers’ emotions on their job performance, there is still a lack of pervasive environments that take emotional behaviour into account. In addition, integrating new components into smart environments is not straightforward. To face these challenges, this article proposes an architecture for emotion aware automation platforms based on semantic event-driven rules to automate the adaptation of the workplace to the employee’s needs. The main contributions of this paper are: (i) the design of an emotion aware automation platform architecture for smart offices; (ii) the semantic modelling of the system; and (iii) the implementation and evaluation of the proposed architecture in a real scenario.

  1. Architectural approaches for HL7-based health information systems implementation.

    PubMed

    López, D M; Blobel, B

    2010-01-01

    Information systems integration is hard, especially when semantic and business process interoperability requirements need to be met. To succeed, a unified methodology approaching different aspects of systems architecture, such as the business, information, computational, engineering and technology viewpoints, has to be considered. The paper contributes an analysis and demonstration of how the HL7 standard set can support health information systems integration. Based on the Health Information Systems Development Framework (HIS-DF), common architectural models for HIS integration are analyzed. The framework is a standards-based, consistent, comprehensive, customizable, scalable methodology that supports the design of semantically interoperable health information systems and components. Three main architectural models for system integration are analyzed: the point-to-point interface, the message server and the mediator models. The point-to-point interface and message server models are completely supported by traditional HL7 version 2 and version 3 messaging. The HL7 v3 standard specification, combined with the service-oriented, model-driven approaches provided by HIS-DF, makes the mediator model possible. The different integration scenarios are illustrated by describing a proof-of-concept implementation of an integrated public health surveillance system based on Enterprise Java Beans technology. Selecting the appropriate integration architecture is a fundamental issue for any software development project. HIS-DF provides a unique methodological approach guiding the development of healthcare integration projects. The mediator model - offered by HIS-DF and supported by HL7 v3 artifacts - is the most promising one, promoting the development of open, reusable, flexible, semantically interoperable, platform-independent, service-oriented and standards-based health information systems.

  2. Integrating Software-Architecture-Centric Methods into the Rational Unified Process

    DTIC Science & Technology

    2004-07-01

    QAW in a life-cycle context. One issue that needs to be addressed is how scenarios produced in a QAW can be used by a software architecture design method...implementation testing. The Attribute-Driven Design (ADD) method

  3. Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks.

    PubMed

    Vlachas, Pantelis R; Byeon, Wonmin; Wan, Zhong Y; Sapsis, Themistoklis P; Koumoutsakos, Petros

    2018-05-01

    We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) in time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks.
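
    The Lorenz 96 system used as a benchmark is easy to reproduce; the numpy sketch below integrates it with fourth-order Runge-Kutta and reports the persistence-forecast error that any data-driven method, LSTM or GP, must beat. The dimension, step size, and horizon are illustrative choices, not the paper's settings.

        import numpy as np

        def lorenz96(x, forcing=8.0):
            """Lorenz 96 tendencies: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
            return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

        # Integrate with fourth-order Runge-Kutta to build a forecasting benchmark.
        dim, dt, steps = 40, 0.01, 5000
        x = 8.0 + 0.01 * np.random.default_rng(0).standard_normal(dim)
        traj = np.empty((steps, dim))
        for t in range(steps):
            k1 = lorenz96(x)
            k2 = lorenz96(x + 0.5 * dt * k1)
            k3 = lorenz96(x + 0.5 * dt * k2)
            k4 = lorenz96(x + dt * k3)
            x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            traj[t] = x

        # Persistence baseline any learned forecaster must beat.
        horizon = 10
        err = np.mean((traj[:-horizon] - traj[horizon:]) ** 2)
        print(f"persistence MSE at horizon {horizon}: {err:.3f}")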

  4. Science-Driven Computing: NERSC's Plan for 2006-2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, Horst D.; Kramer, William T.C.; Bailey, David H.

    NERSC has developed a five-year strategic plan focusing on three components: Science-Driven Systems, Science-Driven Services, and Science-Driven Analytics. (1) Science-Driven Systems: Balanced introduction of the best new technologies for complete computational systems--computing, storage, networking, visualization and analysis--coupled with the activities necessary to engage vendors in addressing the DOE computational science requirements in their future roadmaps. (2) Science-Driven Services: The entire range of support activities, from high-quality operations and user services to direct scientific support, that enable a broad range of scientists to effectively use NERSC systems in their research. NERSC will concentrate on resources needed to realize the promise of the new highly scalable architectures for scientific discovery in multidisciplinary computational science projects. (3) Science-Driven Analytics: The architectural and systems enhancements and services required to integrate NERSC's powerful computational and storage resources to provide scientists with new tools to effectively manipulate, visualize, and analyze the huge data sets derived from simulations and experiments.

  5. Asynchronous Data Retrieval from an Object-Oriented Database

    NASA Astrophysics Data System (ADS)

    Gilbert, Jonathan P.; Bic, Lubomir

    We present an object-oriented semantic database model which, similar to other object-oriented systems, combines the virtues of four concepts: the functional data model, a property inheritance hierarchy, abstract data types and message-driven computation. The main emphasis is on the last of these four concepts. We describe generic procedures that permit queries to be processed in a purely message-driven manner. A database is represented as a network of nodes and directed arcs, in which each node is a logical processing element, capable of communicating with other nodes by exchanging messages. This eliminates the need for shared memory and for centralized control during query processing. Hence, the model is suitable for implementation on a multiprocessor computer architecture, consisting of large numbers of loosely coupled processing elements.

  6. A microengineered collagen scaffold for generating a polarized crypt-villus architecture of human small intestinal epithelium.

    PubMed

    Wang, Yuli; Gunasekara, Dulan B; Reed, Mark I; DiSalvo, Matthew; Bultman, Scott J; Sims, Christopher E; Magness, Scott T; Allbritton, Nancy L

    2017-06-01

    The human small intestinal epithelium possesses a distinct crypt-villus architecture and tissue polarity in which proliferative cells reside inside crypts while differentiated cells are localized to the villi. Indirect evidence has shown that the processes of differentiation and migration are driven in part by biochemical gradients of factors that specify the polarity of these cellular compartments; however, direct evidence for gradient-driven patterning of this in vivo architecture has been hampered by limitations of the in vitro systems available. Enteroid cultures are a powerful in vitro system; nevertheless, these spheroidal structures fail to replicate the architecture and lineage compartmentalization found in vivo, and are not easily subjected to gradients of growth factors. In the current work, we report the development of a micropatterned collagen scaffold with suitable extracellular matrix and stiffness to generate an in vitro self-renewing human small intestinal epithelium that replicates key features of the in vivo small intestine: a crypt-villus architecture with appropriate cell-lineage compartmentalization and an open and accessible luminal surface. Chemical gradients applied to the crypt-villus axis promoted the creation of a stem/progenitor-cell zone and supported cell migration along the crypt-villus axis. This new approach combining microengineered scaffolds, biophysical cues and chemical gradients to control the intestinal epithelium ex vivo can serve as a physiologically relevant mimic of the human small intestinal epithelium, and is broadly applicable to model other tissues that rely on gradients for physiological function. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Conceptual Modeling in the Time of the Revolution: Part II

    NASA Astrophysics Data System (ADS)

    Mylopoulos, John

    Conceptual Modeling was a marginal research topic at the very fringes of Computer Science in the 60s and 70s, when the discipline was dominated by topics focusing on programs, systems and hardware architectures. Over the years, however, the field has moved to centre stage and has come to claim a central role both in Computer Science research and practice in diverse areas, such as Software Engineering, Databases, Information Systems, the Semantic Web, Business Process Management, Service-Oriented Computing, Multi-Agent Systems, Knowledge Management, and more. The transformation was greatly aided by the adoption of standards in modeling languages (e.g., UML), and model-based methodologies (e.g., Model-Driven Architectures) by the Object Management Group (OMG) and other standards organizations. We briefly review the history of the field over the past 40 years, focusing on the evolution of key ideas. We then note some open challenges and report on-going research, covering topics such as the representation of variability in conceptual models, capturing model intentions, and models of laws.

  8. Implementing An Image Understanding System Architecture Using Pipe

    NASA Astrophysics Data System (ADS)

    Luck, Randall L.

    1988-03-01

    This paper will describe PIPE and how it can be used to implement an image understanding system. Image understanding is the process of developing a description of an image in order to make decisions about its contents. The tasks of image understanding are generally split into low-level vision and high-level vision. Low-level vision is performed by PIPE, a high-performance parallel processor with an architecture specifically designed for processing video images at up to 60 fields per second. High-level vision is performed by one of several types of serial or parallel computers, depending on the application. An additional processor called ISMAP performs the conversion from iconic image space to symbolic feature space. ISMAP plugs into one of PIPE's slots and is memory-mapped into the high-level processor, thus forming the high-speed link between the low- and high-level vision processors. The mechanisms for bottom-up, data-driven processing and top-down, model-driven processing are discussed.

  9. A real-time biomimetic acoustic localizing system using time-shared architecture

    NASA Astrophysics Data System (ADS)

    Nourzad Karl, Marianne; Karl, Christian; Hubbard, Allyn

    2008-04-01

    In this paper a real-time sound source localizing system is proposed, based on previously developed mammalian auditory models. Traditionally, for models that use interaural time delay (ITD) estimates, the amount of parallel computation needed to achieve real-time sound source localization is a limiting factor and a design challenge for hardware implementations. A new approach using a time-shared architecture is therefore introduced. The proposed architecture is a purely sample-driven digital system, and it closely follows the continuous-time approach described in the models. Rather than having dedicated hardware on a per-frequency-channel basis, a specialized core channel, shared by all frequency bands, is used. With an optimized execution time much shorter than the system's sample period, the proposed time-shared solution allows the same number of virtual channels to be processed as dedicated channels in the traditional approach. Hence, the time-shared approach achieves a highly economical and flexible implementation using minimal silicon area. These aspects are particularly important for the efficient hardware implementation of a real-time biomimetic sound source localization system.
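
    The underlying ITD computation each frequency channel performs can be sketched as a cross-correlation over candidate lags; the signal, delay, and sensor geometry below are invented for illustration.

        import numpy as np

        fs = 48000                      # sample rate (Hz)
        c, mic_distance = 343.0, 0.2    # speed of sound (m/s), sensor spacing (m)

        rng = np.random.default_rng(3)
        src = rng.standard_normal(4096)
        delay = 12                       # true interaural delay in samples
        left = src
        right = np.concatenate([np.zeros(delay), src[:-delay]])

        # Estimate ITD as the lag maximizing the cross-correlation.
        lags = np.arange(-32, 33)
        corr = [np.dot(left[32:-32], right[32 + k:len(right) - 32 + k]) for k in lags]
        itd_samples = lags[int(np.argmax(corr))]

        angle = np.degrees(np.arcsin(np.clip(itd_samples / fs * c / mic_distance, -1, 1)))
        print(f"ITD: {itd_samples} samples, azimuth ~ {angle:.1f} degrees")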

  10. Brahms Mobile Agents: Architecture and Field Tests

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2002-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, rover/All-Terrain Vehicle (ATV), robotic assistant, other personnel in a local habitat, and a remote mission support team (with time delay). Software processes, called agents, implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system (e.g., 'return here later' and 'bring this back to the habitat'). This combination of agents, rover, and model-based spoken dialogue interface constitutes a personal assistant. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a run-time system.

  11. Cloud Computing for Mission Design and Operations

    NASA Technical Reports Server (NTRS)

    Arrieta, Juan; Attiyah, Amy; Beswick, Robert; Gerasimantos, Dimitrios

    2012-01-01

    The space mission design and operations community already recognizes the value of cloud computing and virtualization. However, natural and valid concerns, like security, privacy, up-time, and vendor lock-in, have prevented a more widespread and expedited adoption into official workflows. In the interest of alleviating these concerns, we propose a series of guidelines for internally deploying a resource-oriented hub of data and algorithms. These guidelines provide a roadmap for implementing an architecture inspired in the cloud computing model: associative, elastic, semantical, interconnected, and adaptive. The architecture can be summarized as exposing data and algorithms as resource-oriented Web services, coordinated via messaging, and running on virtual machines; it is simple, and based on widely adopted standards, protocols, and tools. The architecture may help reduce common sources of complexity intrinsic to data-driven, collaborative interactions and, most importantly, it may provide the means for teams and agencies to evaluate the cloud computing model in their specific context, with minimal infrastructure changes, and before committing to a specific cloud services provider.

  12. Cascade photonic integrated circuit architecture for electro-optic in-phase quadrature/single sideband modulation or frequency conversion.

    PubMed

    Hasan, Mehedi; Hall, Trevor

    2015-11-01

    A photonic integrated circuit architecture for implementing frequency upconversion is proposed. The circuit consists of a 1×2 splitter and 2×1 combiner interconnected by two stages of differentially driven phase modulators, with a 2×2 multimode interference coupler between the stages. A transfer matrix approach is used to model the operation of the architecture. The predictions of the model are validated by simulations performed using an industry standard software tool. The intrinsic conversion efficiency of the proposed design is improved by 6 dB over the alternative functionally equivalent circuit based on dual parallel Mach-Zehnder modulators known in the prior art. A two-tone analysis is presented to study the linearity of the proposed circuit, and a comparison is provided over the alternative. The proposed circuit is suitable for integration in any platform that offers linear electro-optic phase modulation, such as LiNbO3, silicon, III-V, or hybrid technology.
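
    The transfer-matrix composition the abstract describes can be sketched numerically. The Python/NumPy fragment below uses textbook matrices for an ideal 50:50 2×2 MMI coupler and a differentially driven phase-modulator pair; the specific drive conditions that yield single-sideband operation are the paper's contribution and are not reproduced here.

    ```python
    import numpy as np

    def mmi_2x2():
        # Ideal 2x2 multimode interference coupler (50:50 with 90-degree phase).
        return (1 / np.sqrt(2)) * np.array([[1, 1j], [1j, 1]])

    def diff_phase(phi):
        # Differentially driven phase-modulator pair: +phi on one arm, -phi on the other.
        return np.diag([np.exp(1j * phi), np.exp(-1j * phi)])

    def cascade_output(phi1, phi2):
        split = np.array([[1], [1]]) / np.sqrt(2)   # 1x2 splitter
        combine = np.array([[1, 1]]) / np.sqrt(2)   # 2x1 combiner
        field = combine @ diff_phase(phi2) @ mmi_2x2() @ diff_phase(phi1) @ split
        return field[0, 0]  # complex output field for a unit input
    ```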

  13. The Sigma Cognitive Architecture and System: Towards Functionally Elegant Grand Unification

    NASA Astrophysics Data System (ADS)

    Rosenbloom, Paul S.; Demski, Abram; Ustun, Volkan

    2016-12-01

    Sigma (Σ) is a cognitive architecture and system whose development is driven by a combination of four desiderata: grand unification, generic cognition, functional elegance, and sufficient efficiency. Work towards these desiderata is guided by the graphical architecture hypothesis: that the key to progress on them is combining what has been learned from over three decades' worth of separate work on cognitive architectures and graphical models. In this article, these four desiderata are motivated and explained, and then combined with the graphical architecture hypothesis to yield a rationale for the development of Sigma. The current state of the cognitive architecture is then introduced in detail, along with the graphical architecture that sits below it and implements it. Progress in extending Sigma beyond these architectures and towards a full cognitive system is then detailed in terms of both a systematic set of higher level cognitive idioms that have been developed and several virtual humans that are built from combinations of these idioms. Sigma as a whole is then analyzed in terms of how well the progress to date satisfies the desiderata. This article thus provides the first full motivation, presentation and analysis of Sigma, along with a diversity of more specific results that have been generated during its development.

  14. Echo State Networks for data-driven downhole pressure estimation in gas-lift oil wells.

    PubMed

    Antonelo, Eric A; Camponogara, Eduardo; Foss, Bjarne

    2017-01-01

    Process measurements are of vital importance for monitoring and control of industrial plants. When we consider offshore oil production platforms, wells that require gas-lift technology to yield oil production from low pressure oil reservoirs can become unstable under some conditions. This undesirable phenomenon is usually called slugging flow, and can be identified by an oscillatory behavior of the downhole pressure measurement. Given the importance of this measurement and the unreliability of the related sensor, this work aims at designing data-driven soft-sensors for downhole pressure estimation in two contexts: one for speeding up first-principles simulation of a vertical riser model; and another for estimating the downhole pressure using real-world data from an oil well from Petrobras based only on topside platform measurements. Both tasks are tackled by employing Echo State Networks (ESN) as an efficient technique for training Recurrent Neural Networks. We show that a single ESN is capable of robustly modeling both the slugging flow behavior and a steady state based only on a square wave input signal representing the production choke opening in the vertical riser. In addition, we compare the performance of a standard network to the performance of a multiple timescale hierarchical architecture in the second task and show that the latter architecture performs better in modeling both large irregular transients and more commonly occurring small oscillations. Copyright © 2016 Elsevier Ltd. All rights reserved.
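
    For readers unfamiliar with the technique, a minimal leaky echo state network with a ridge-regression readout looks roughly as follows (Python/NumPy). The reservoir size, spectral radius, and leak rate are illustrative defaults; the paper's multiple-timescale hierarchical variant stacks several such reservoirs.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    class ESN:
        """Minimal echo state network: fixed random reservoir, trained linear readout."""
        def __init__(self, n_in, n_res=200, rho=0.9, leak=0.3):
            self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
            W = rng.uniform(-0.5, 0.5, (n_res, n_res))
            self.W = W * (rho / max(abs(np.linalg.eigvals(W))))  # set spectral radius
            self.leak = leak

        def states(self, U):
            x, X = np.zeros(self.W.shape[0]), []
            for u in U:  # U: (time, n_in)
                x = (1 - self.leak) * x + self.leak * np.tanh(self.W_in @ u + self.W @ x)
                X.append(x.copy())
            return np.array(X)

        def fit(self, U, y, ridge=1e-6, washout=100):
            X = self.states(U)[washout:]  # discard the initial transient
            self.W_out = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]),
                                         X.T @ y[washout:])

        def predict(self, U):
            return self.states(U) @ self.W_out
    ```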

  15. Affordable non-traditional source data mining for context assessment to improve distributed fusion system robustness

    NASA Astrophysics Data System (ADS)

    Bowman, Christopher; Haith, Gary; Steinberg, Alan; Morefield, Charles; Morefield, Michael

    2013-05-01

    This paper describes methods to affordably improve the robustness of distributed fusion systems by opportunistically leveraging non-traditional data sources. Adaptive methods help find relevant data, create models, and characterize the model quality. These methods can also measure the conformity of this non-traditional data with fusion system products, including situation modeling and mission impact prediction. Non-traditional data can improve the quantity, quality, availability, timeliness, and diversity of the baseline fusion system sources and therefore can improve prediction and estimation accuracy and robustness at all levels of fusion. Techniques are described that automatically learn to characterize and search non-traditional contextual data to enable operators to integrate the data with high-level fusion systems and ontologies. These techniques apply the extension of the Data Fusion & Resource Management Dual Node Network (DNN) technical architecture at Level 4. The DNN architecture supports effective assessment and management of the expanded portfolio of data sources, entities of interest, models, and algorithms, including data pattern discovery and context conformity. Affordable model-driven and data-driven data mining methods to discover unknown models from non-traditional and "big data" sources are used to automatically learn entity behaviors and correlations with fusion products [14, 15]. This paper describes our context assessment software development and a demonstration in which context assessment of non-traditional data is compared to an intelligence, surveillance, and reconnaissance fusion product based upon an IED POI workflow.

  16. A new software-based architecture for quantum computer

    NASA Astrophysics Data System (ADS)

    Wu, Nan; Song, FangMin; Li, Xiangdong

    2010-04-01

    In this paper, we study a reliable architecture for a quantum computer, together with a new instruction set and machine language for the architecture, which can improve performance and reduce the cost of quantum computing. We also address in detail some key issues in software-driven universal quantum computers.

  17. Use of the Collaborative Optimization Architecture for Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Moore, A. A.; Kroo, I. M.

    1996-01-01

    Collaborative optimization is a new design architecture specifically created for large-scale distributed-analysis applications. In this approach, the problem is decomposed into a user-defined number of subspace optimization problems that are driven towards interdisciplinary compatibility and the appropriate solution by a system-level coordination process. This decentralized design strategy allows domain-specific issues to be accommodated by disciplinary analysts, while requiring interdisciplinary decisions to be reached by consensus. The present investigation focuses on application of the collaborative optimization architecture to the multidisciplinary design of a single-stage-to-orbit launch vehicle. Vehicle design, trajectory, and cost issues are directly modeled. Posed to suit the collaborative architecture, the design problem is characterized by 5 design variables and 16 constraints. Numerous collaborative solutions are obtained. Comparison of these solutions demonstrates the influence which an a priori ascent-abort criterion has on development cost. Similarly, objective-function selection is discussed, demonstrating the difference between minimum-weight and minimum-cost concepts. The operational advantages of the collaborative optimization architecture are also discussed.

  18. Optimal Design of Cable-Driven Manipulators Using Particle Swarm Optimization.

    PubMed

    Bryson, Joshua T; Jin, Xin; Agrawal, Sunil K

    2016-08-01

    The design of cable-driven manipulators is complicated by the unidirectional nature of the cables, which results in extra actuators and limited workspaces. Furthermore, the particular arrangement of the cables and the geometry of the robot pose have a significant effect on the cable tension required to effect a desired joint torque. For a sufficiently complex robot, the identification of a satisfactory cable architecture can be difficult and can result in multiply redundant actuators and performance limitations based on workspace size and cable tensions. This work leverages previous research into the workspace analysis of cable systems combined with stochastic optimization to develop a generalized methodology for designing optimized cable routings for a given robot and desired task. A cable-driven robot leg performing a walking-gait motion is used as a motivating example to illustrate the methodology application. The components of the methodology are described, and the process is applied to the example problem. An optimal cable routing is identified, which provides the necessary controllable workspace to perform the desired task and enables the robot to perform that task with minimal cable tensions. A robot leg is constructed according to this routing and used to validate the theoretical model and to demonstrate the effectiveness of the resulting cable architecture.
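
    A generic particle swarm optimizer of the kind used here is compact enough to sketch in Python/NumPy. The cost function in the usage line is a stand-in; in the paper it would evaluate a candidate cable routing against workspace coverage and peak cable tension over the gait.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        """Minimal particle swarm optimizer over box-constrained design variables."""
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, len(lo)))
        v = np.zeros_like(x)
        pbest, pval = x.copy(), np.array([cost(p) for p in x])
        g = pbest[pval.argmin()]
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            val = np.array([cost(p) for p in x])
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[pval.argmin()]
        return g, pval.min()

    # Hypothetical usage with a placeholder cost (a proxy for peak cable tension):
    best, best_val = pso(lambda p: np.sum(p ** 2),
                         (np.full(4, -1.0), np.full(4, 1.0)))
    ```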

  19. Macro-actor execution on multilevel data-driven architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaudiot, J.L.; Najjar, W.

    1988-12-31

    The data-flow model of computation brings high programmability to multiprocessors at the expense of increased overhead. Applying the model at a higher level leads to better performance but also introduces a loss of parallelism. We demonstrate here syntax-directed program decomposition methods for the creation of large macro-actors in numerical algorithms. In order to alleviate some of the problems introduced by the lower-resolution interpretation, we describe a multi-level resolution scheme and analyze the requirements for its actual hardware and software integration.

  20. A decade of experience in the development and implementation of tissue banking informatics tools for intra and inter-institutional translational research

    PubMed Central

    Amin, Waqas; Singh, Harpreet; Pople, Andre K.; Winters, Sharon; Dhir, Rajiv; Parwani, Anil V.; Becich, Michael J.

    2010-01-01

    Context: Tissue banking informatics deals with standardized annotation, collection and storage of biospecimens that can further be shared by researchers. Over the last decade, the Department of Biomedical Informatics (DBMI) at the University of Pittsburgh has developed various tissue banking informatics tools to expedite translational medicine research. In this review, we describe the technical approach and capabilities of these models. Design: Clinical annotation of biospecimens requires data retrieval from various clinical information systems and the de-identification of the data by an honest broker. Based upon these requirements, DBMI, with its collaborators, has developed both Oracle-based organ-specific data marts and a more generic, model-driven architecture for biorepositories. The organ-specific models are developed utilizing Oracle 9.2.0.1 server tools and software applications and the model-driven architecture is implemented in a J2EE framework. Result: The organ-specific biorepositories implemented by DBMI include the Cooperative Prostate Cancer Tissue Resource (http://www.cpctr.info/), Pennsylvania Cancer Alliance Bioinformatics Consortium (http://pcabc.upmc.edu/main.cfm), EDRN Colorectal and Pancreatic Neoplasm Database (http://edrn.nci.nih.gov/) and Specialized Programs of Research Excellence (SPORE) Head and Neck Neoplasm Database (http://spores.nci.nih.gov/current/hn/index.htm). The model-based architecture is represented by the National Mesothelioma Virtual Bank (http://mesotissue.org/). These biorepositories provide thousands of well annotated biospecimens for the researchers that are searchable through query interfaces available via the Internet. Conclusion: These systems, developed and supported by our institute, serve to form a common platform for cancer research to accelerate progress in clinical and translational research. In addition, they provide a tangible infrastructure and resource for exposing research resources and biospecimen services in collaboration with the clinical anatomic pathology laboratory information system (APLIS) and the cancer registry information systems. PMID:20922029

  1. A decade of experience in the development and implementation of tissue banking informatics tools for intra and inter-institutional translational research.

    PubMed

    Amin, Waqas; Singh, Harpreet; Pople, Andre K; Winters, Sharon; Dhir, Rajiv; Parwani, Anil V; Becich, Michael J

    2010-08-10

    Tissue banking informatics deals with standardized annotation, collection and storage of biospecimens that can further be shared by researchers. Over the last decade, the Department of Biomedical Informatics (DBMI) at the University of Pittsburgh has developed various tissue banking informatics tools to expedite translational medicine research. In this review, we describe the technical approach and capabilities of these models. Clinical annotation of biospecimens requires data retrieval from various clinical information systems and the de-identification of the data by an honest broker. Based upon these requirements, DBMI, with its collaborators, has developed both Oracle-based organ-specific data marts and a more generic, model-driven architecture for biorepositories. The organ-specific models are developed utilizing Oracle 9.2.0.1 server tools and software applications and the model-driven architecture is implemented in a J2EE framework. The organ-specific biorepositories implemented by DBMI include the Cooperative Prostate Cancer Tissue Resource (http://www.cpctr.info/), Pennsylvania Cancer Alliance Bioinformatics Consortium (http://pcabc.upmc.edu/main.cfm), EDRN Colorectal and Pancreatic Neoplasm Database (http://edrn.nci.nih.gov/) and Specialized Programs of Research Excellence (SPORE) Head and Neck Neoplasm Database (http://spores.nci.nih.gov/current/hn/index.htm). The model-based architecture is represented by the National Mesothelioma Virtual Bank (http://mesotissue.org/). These biorepositories provide thousands of well annotated biospecimens for the researchers that are searchable through query interfaces available via the Internet. These systems, developed and supported by our institute, serve to form a common platform for cancer research to accelerate progress in clinical and translational research. In addition, they provide a tangible infrastructure and resource for exposing research resources and biospecimen services in collaboration with the clinical anatomic pathology laboratory information system (APLIS) and the cancer registry information systems.

  2. Models for Experimental High Density Housing

    NASA Astrophysics Data System (ADS)

    Bradecki, Tomasz; Swoboda, Julia; Nowak, Katarzyna; Dziechciarz, Klaudia

    2017-10-01

    The article presents the results of research on models of high-density housing. The authors present urban projects for experimental high-density housing estates. The design was based on research performed on 38 examples of similar housing built in Poland after 2003. Some of the case studies show extreme density, which inspired the researchers to test individual virtual solutions that would answer the question: how far can we push the limits? The experimental housing projects show the strengths and weaknesses of design driven only by such indexes as FAR (floor area ratio) and DPH (dwellings per hectare). Although such projects are implemented, the authors believe that there are reasons for limits, since high index values may conflict with an optimal housing environment. The virtual models on virtual plots presented by the authors were oriented toward maximising the DPH index and the DAI (dwellings area index), which is very often the main driver for developers. The authors also raise the question of the sustainability of such solutions. The research was carried out in the URBAN model research group (Gliwice, Poland), which consists of academic researchers and architecture students. The models reflect architectural and urban regulations that are valid in Poland. The conclusions might be helpful for urban planners, urban designers, developers, architects and architecture students.

  3. An architecture for a continuous, user-driven, and data-driven application of clinical guidelines and its evaluation.

    PubMed

    Shalom, Erez; Shahar, Yuval; Lunenfeld, Eitan

    2016-02-01

    Design, implement, and evaluate a new architecture for realistic continuous guideline (GL)-based decision support, based on a series of requirements that we have identified, such as support for continuous care, for multiple task types, and for data-driven and user-driven modes. We designed and implemented a new continuous GL-based support architecture, PICARD, which accesses a temporal reasoning engine, and provides several different types of application interfaces. We present the new architecture in detail in the current paper. To evaluate the architecture, we first performed a technical evaluation of the PICARD architecture, using 19 simulated scenarios in the preeclampsia/toxemia domain. We then performed a functional evaluation with the help of two domain experts, by generating patient records that simulate 60 decision points from six clinical guideline-based scenarios, lasting from two days to four weeks. Finally, 36 clinicians made manual decisions in half of the scenarios, and had access to the automated GL-based support in the other half. The measures used in all three experiments were correctness and completeness of the decisions relative to the GL. Mean correctness and completeness in the technical evaluation were 1±0.0 and 0.96±0.03 respectively. The functional evaluation produced only several minor comments from the two experts, mostly regarding the output's style; otherwise the system's recommendations were validated. In the clinically oriented evaluation, the 36 clinicians applied manually approximately 41% of the GL's recommended actions. Completeness increased to approximately 93% when using PICARD. Manual correctness was approximately 94.5%, and remained similar when using PICARD; but while 68% of the manual decisions included correct but redundant actions, only 3% of the actions included in decisions made when using PICARD were redundant. The PICARD architecture is technically feasible and is functionally valid, and addresses the realistic continuous GL-based application requirements that we have defined; in particular, the requirement for care over significant time frames. The use of the PICARD architecture in the domain we examined resulted in enhanced completeness and in reduction of redundancies, and is potentially beneficial for general GL-based management of chronic patients. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Schematic driven layout of Reed Solomon encoders

    NASA Technical Reports Server (NTRS)

    Arave, Kari; Canaris, John; Miles, Lowell; Whitaker, Sterling

    1992-01-01

    Two Reed Solomon error correcting encoders are presented. Schematic driven layout tools were used to create the encoder layouts. Special consideration had to be given to the architecture and logic to provide scalability of the encoder designs. Knowledge gained from these projects was used to create a more flexible schematic driven layout system.

  5. Lifelong learning of human actions with deep neural network self-organization.

    PubMed

    Parisi, German I; Tani, Jun; Weber, Cornelius; Wermter, Stefan

    2017-12-01

    Lifelong learning is fundamental in autonomous robotics for the acquisition and fine-tuning of knowledge through experience. However, conventional deep neural models for action recognition from videos do not account for lifelong learning but rather learn a batch of training data with a predefined number of action classes and samples. Thus, there is the need to develop learning systems with the ability to incrementally process available perceptual cues and to adapt their responses over time. We propose a self-organizing neural architecture for incrementally learning to classify human actions from video sequences. The architecture comprises growing self-organizing networks equipped with recurrent neurons for processing time-varying patterns. We use a set of hierarchically arranged recurrent networks for the unsupervised learning of action representations with increasingly large spatiotemporal receptive fields. Lifelong learning is achieved in terms of prediction-driven neural dynamics in which the growth and the adaptation of the recurrent networks are driven by their capability to reconstruct temporally ordered input sequences. Experimental results on a classification task using two action benchmark datasets show that our model is competitive with state-of-the-art methods for batch learning, also when a significant number of sample labels are missing or corrupted during training sessions. Additional experiments show the ability of our model to adapt to non-stationary input while avoiding catastrophic interference. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  6. The emergence of overlapping scale-free genetic architecture in digital organisms.

    PubMed

    Gerlee, P; Lundh, T

    2008-01-01

    We have studied the evolution of genetic architecture in digital organisms and found that the gene overlap follows a scale-free distribution, which is commonly found in metabolic networks of many organisms. Our results show that the slope of the scale-free distribution depends on the mutation rate and that the gene development is driven by expansion of already existing genes, which is in direct correspondence to the preferential growth algorithm that gives rise to scale-free networks. To further validate our results we have constructed a simple model of gene development, which recapitulates the results from the evolutionary process and shows that the mutation rate affects the tendency of genes to cluster. In addition we could relate the slope of the scale-free distribution to the genetic complexity of the organisms and show that a high mutation rate gives rise to a more complex genetic architecture.
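
    The "expansion of already existing genes" mechanism the abstract relates to preferential growth is the classic rich-get-richer process. A toy Python version, not the digital-organism experiment itself, makes the connection to heavy-tailed (scale-free) size distributions explicit:

    ```python
    import random

    random.seed(0)

    def preferential_growth(n_genes=50, steps=2000):
        """Each new instruction joins a gene with probability proportional to the
        gene's current size, producing a heavy-tailed size distribution."""
        sizes = [1] * n_genes
        urn = list(range(n_genes))   # one entry per instruction already in a gene
        for _ in range(steps):
            g = random.choice(urn)   # sampling the urn = preferential selection
            sizes[g] += 1
            urn.append(g)
        return sorted(sizes, reverse=True)
    ```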

  7. Advanced and secure architectural EHR approaches.

    PubMed

    Blobel, Bernd

    2006-01-01

    Electronic Health Records (EHRs) provided as a lifelong patient record are advancing towards core applications of distributed and co-operating health information systems and health networks. For meeting the challenge of scalable, flexible, portable, secure EHR systems, the underlying EHR architecture must be based on the component paradigm and be model-driven, separating platform-independent and platform-specific models. To allow manageable models, real systems must be decomposed and simplified. The resulting modelling approach has to follow the ISO Reference Model - Open Distributed Processing (RM-ODP). The ISO RM-ODP describes any system component from different perspectives. Platform-independent perspectives contain the enterprise view (business process, policies, scenarios, use cases), the information view (classes and associations) and the computational view (composition and decomposition), whereas platform-specific perspectives concern the engineering view (physical distribution and realisation) and the technology view (implementation details from protocols up to education and training) on system components. Those views have to be established for components reflecting aspects of all domains involved in healthcare environments, including administrative, legal, medical, technical, etc. Thus, security-related component models reflecting all of the views mentioned have to be established for enabling both application and communication security services as an integral part of the system's architecture. Besides the decomposition and simplification of systems by viewing their components from different perspectives, different levels of granularity can be defined, hiding internals or focusing on properties of basic components to form a more complex structure. The resulting models describe both the structure and the behaviour of component-based systems. The described approach has been deployed in different projects defining EHR systems and their underlying architectural principles. In that context, the Australian GEHR project, the openEHR initiative, and the revision of CEN ENV 13606 "Electronic Health Record communication", all based on Archetypes, but also the HL7 version 3 activities are discussed in some detail. The latter include the HL7 RIM, the HL7 Development Framework, HL7's clinical document architecture (CDA), as well as the set of models from use cases, activity diagrams and sequence diagrams up to Domain Information Models (DMIMs) and their building blocks, Common Message Element Types (CMETs), constraining models to their underlying concepts. A future-proof EHR architecture has to follow advanced architectural paradigms to serve as an open, user-centric, user-friendly, flexible, scalable, portable core application in health information systems and health networks.

  8. A Spiking Neural Simulator Integrating Event-Driven and Time-Driven Computation Schemes Using Parallel CPU-GPU Co-Processing: A Case Study.

    PubMed

    Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo

    2015-07-01

    Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
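
    The distinction between the two schemes can be illustrated with a toy event-driven leaky integrate-and-fire neuron in Python: state is advanced analytically between input events instead of on a fixed time grid, which is why low-activity subnetworks are cheap to simulate this way. This is a sketch of the general principle only, not of the hybrid CPU-GPU simulator the brief describes.

    ```python
    import heapq
    import math

    def event_driven_lif(spike_times, weight=1.2, tau=20.0, v_th=1.0):
        """Event-driven LIF neuron: the membrane potential decays in closed form
        between input spikes; no computation happens between events."""
        events = list(spike_times)
        heapq.heapify(events)
        v, t_last, out = 0.0, 0.0, []
        while events:
            t = heapq.heappop(events)
            v *= math.exp(-(t - t_last) / tau)  # analytic decay since last event
            v += weight                          # instantaneous synaptic kick
            t_last = t
            if v >= v_th:
                out.append(t)                    # emit output spike and reset
                v = 0.0
        return out
    ```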

  9. Modeling the Sedimentary Infill of Lakes in the East African Rift: A Case Study of Multiple versus Single Rift Basin Segments

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Scholz, C. A.

    2016-12-01

    The sedimentary basins in the East African Rift are considered excellent modern examples for investigating sedimentary infilling and evolution of extensional systems. Some lakes in the western branch of the rift have formed within single-segment systems, and include Lake Albert and Lake Edward. The largest and oldest lakes developed within multi-segment systems, and these include Lake Tanganyika and Lake Malawi. This research aims to explore processes of erosion and sedimentary infilling of the catchment area in single-segment rift (SSR) and multi-segment rift (MSR) systems. We consider different conditions of regional precipitation and evaporation, and assess the resulting facies architecture through forward modeling, using state-of-the-art commercial basin modeling software. Dionisos is a three-dimensional numerical stratigraphic forward modeling software program, which simulates basin-scale sediment transport based on empirical water- and gravity-driven diffusion equations. It was classically used to quantify the sedimentary architecture and basin infilling of both marine siliciclastic and carbonate environments. However, we apply this approach to continental rift basin environments. In this research, two scenarios are developed, one for a MSR and the other for a SSR. The modeled systems simulate the ratio of drainage area and lake surface area observed in modern Lake Tanganyika and Lake Albert, which are examples of MSRs and SSRs, respectively. The main parameters, such as maximum subsidence rate, water- and gravity-driven diffusion coefficients, rainfall, and evaporation, are approximated using these real-world examples. The results of 5 million year model runs with 50,000 year time steps show that MSRs are characterized by a deep water lake with relatively modest sediment accumulation, while the SSRs are characterized by a nearly overfilled lake with shallow water depths and thick sediment accumulation. The preliminary modeling results conform to the features of sedimentary infills revealed by seismic reflection data acquired in Lake Tanganyika and Lake Albert. Future models will refine the parameters of rainfall and evaporation in these two scenarios to better evaluate detailed basin facies architecture.
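
    The water- and gravity-driven diffusion at the core of such stratigraphic forward models reduces, in one dimension, to slope-proportional sediment transport. The Python/NumPy sketch below is a deliberately simplified illustration, with unit grid spacing and a single diffusion coefficient rather than the calibrated, process-specific coefficients a tool like Dionisos uses.

    ```python
    import numpy as np

    def diffuse_infill(z, k, dt=1.0, steps=1000, supply=0.0):
        """Explicit 1-D diffusion of a topographic profile z: sediment flux is
        proportional to slope (stable for k * dt <= 0.5 with unit spacing)."""
        z = z.astype(float).copy()
        for _ in range(steps):
            flux = -k * np.diff(z)          # downslope sediment flux
            z[1:-1] -= dt * np.diff(flux)   # flux divergence updates elevation
            z[0] += supply * dt             # sediment supplied at the basin margin
        return z
    ```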

  10. Implementing partnership-driven clinical federated electronic health record data sharing networks.

    PubMed

    Stephens, Kari A; Anderson, Nicholas; Lin, Ching-Ping; Estiri, Hossein

    2016-09-01

    Building federated data sharing architectures requires supporting a range of data owners, effective and validated semantic alignment between data resources, and consistent focus on end-users. Establishing these resources requires development methodologies that support internal validation of data extraction and translation processes, sustaining meaningful partnerships, and delivering clear and measurable system utility. We describe findings from two federated data sharing case examples that detail critical factors, shared outcomes, and production environment results. Two federated data sharing pilot architectures developed to support network-based research associated with the University of Washington's Institute of Translational Health Sciences provided the basis for the findings. A spiral model for implementation and evaluation was used to structure iterations of development and to support knowledge sharing between the two network development teams, which cross-collaborated to support and manage common stages. We found that using a spiral model of software development and multiple cycles of iteration was effective in achieving early network design goals. Both networks required time- and resource-intensive efforts to establish a trusted environment to create the data sharing architectures. Both networks were challenged by the need for adaptive use cases to define and test utility. An iterative cyclical model of development provided a process for developing trust with data partners and refining the design, and supported measurable success in the development of new federated data sharing architectures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. A Multi-mission Event-Driven Component-Based System for Support of Flight Software Development, ATLO, and Operations first used by the Mars Science Laboratory (MSL) Project

    NASA Technical Reports Server (NTRS)

    Dehghani, Navid; Tankenson, Michael

    2006-01-01

    This paper details an architectural description of the Mission Data Processing and Control System (MPCS), an event-driven, multi-mission suite of ground data processing components providing uplink, downlink, and data management capabilities that will support the Mars Science Laboratory (MSL) project as its first target mission. MPCS is developed based on a set of small reusable components, implemented in Java, each designed with a specific function and well-defined interfaces. An industry standard messaging bus is used to transfer information among system components. Components generate standard messages, which are used both to capture system information and as triggers to support the event-driven architecture of the system. Event-driven systems are highly desirable for processing high-rate telemetry (science and engineering) data, and for supporting automation for many mission operations processes.

  12. A synchronized computational architecture for generalized bilateral control of robot arms

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.; Szakaly, Zoltan

    1987-01-01

    This paper describes a computational architecture for an interconnected high speed distributed computing system for generalized bilateral control of robot arms. The key method of the architecture is the use of fully synchronized, interrupt-driven software. Since an objective of the development is to utilize the processing resources efficiently, the synchronization is done at the hardware level to reduce system software overhead. The architecture also achieves a balanced load on the communication channel. The paper also describes some architectural relations to trading or sharing manual and automatic control.

  13. System on chip module configured for event-driven architecture

    DOEpatents

    Robbins, Kevin; Brady, Charles E.; Ashlock, Tad A.

    2017-10-17

    A system on chip (SoC) module is described herein, comprising a processor subsystem and a hardware logic subsystem. The two subsystems are in communication with one another and transmit event messages between one another. The processor subsystem executes software actors, while the hardware logic subsystem includes hardware actors. Both the software actors and the hardware actors conform to an event-driven architecture, such that each receives and generates event messages.
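
    The actor contract the patent describes (receive event messages, react, possibly emit new ones) can be mimicked in a few lines of Python. This is an illustrative software analogue, not the patented SoC design; the message bus here is just a dictionary of queues.

    ```python
    import queue
    import threading

    class Actor:
        """Minimal event-driven actor: reacts to messages on its inbox and may
        send new event messages to other actors via a shared bus."""
        def __init__(self, name, bus):
            self.name, self.bus, self.inbox = name, bus, queue.Queue()
            bus[name] = self.inbox
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, target, event):
            self.bus[target].put((self.name, event))

        def _run(self):
            while True:
                sender, event = self.inbox.get()
                self.handle(sender, event)

        def handle(self, sender, event):  # override per actor type
            print(f"{self.name} got {event!r} from {sender}")

    # Hypothetical usage:
    # bus = {}; a, b = Actor("a", bus), Actor("b", bus); a.send("b", {"type": "ping"})
    ```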

  14. Table-driven software architecture for a stitching system

    NASA Technical Reports Server (NTRS)

    Thrash, Patrick J. (Inventor); Miller, Jeffrey L. (Inventor); Pallas, Ken (Inventor); Trank, Robert C. (Inventor); Fox, Rhoda (Inventor); Korte, Mike (Inventor); Codos, Richard (Inventor); Korolev, Alexandre (Inventor); Collan, William (Inventor)

    2001-01-01

    Native code for a CNC stitching machine is generated by creating a geometry model of a preform; generating tool paths from the geometry model, the tool paths including stitching instructions for making stitches; and generating additional instructions indicating thickness values. The thickness values are obtained from a lookup table. When the stitching machine runs the native code, it accesses a lookup table to determine a thread tension value corresponding to the thickness value. The stitching machine accesses another lookup table to determine a thread path geometry value corresponding to the thickness value.
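
    The table-driven pattern itself is simple; a minimal Python sketch with hypothetical breakpoints (the real tables belong to the machine's configuration, not this patent text) shows the thickness-to-tension lookup:

    ```python
    import bisect

    # Hypothetical thickness breakpoints (mm) and matching thread tensions (N).
    THICKNESS = [2.0, 4.0, 6.0, 8.0]
    TENSION = [1.5, 2.2, 3.0, 4.1]

    def thread_tension(thickness_mm):
        """Table-driven control: map a thickness value to a tension setting."""
        i = min(bisect.bisect_left(THICKNESS, thickness_mm), len(TENSION) - 1)
        return TENSION[i]
    ```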

  15. Defining and using open architecture levels

    NASA Astrophysics Data System (ADS)

    Cramer, M. A.; Morrison, A. W.; Cordes, B.; Stack, J. R.

    2012-05-01

    Open architecture (OA) within military systems enables delivery of increased warfighter capabilities in a shorter time at a reduced cost. In fact, in today's standards-aware environment, solutions are often proposed to the government that include OA as one of their basic design tenets. Yet the ability to measure and assess OA in an objective manner, particularly at the subsystem/component level within a system, remains an elusive proposition. Furthermore, it is increasingly apparent that establishing an innovation ecosystem around an open business model that leverages third-party development requires more than just technical modifications that promote openness. This paper proposes a framework to migrate not only towards technical openness, but also towards enabling and facilitating an open business model, driven by third-party development, for military systems. This framework was developed originally for the U.S. Navy Littoral and Mine Warfare community; however, the principles and approach may be applied elsewhere within the Navy and Department of Defense.

  16. Method to suppress DDFS spurious signals in a frequency-hopping synthesizer with DDFS-driven PLL architecture.

    PubMed

    Kwon, Kun-Sup; Yoon, Won-Sang

    2010-01-01

    In this paper we propose a method of removing spurious signals, caused by quasi-amplitude modulation and superposition effects, from the output of a frequency-hopping synthesizer with direct digital frequency synthesizer (DDFS)-driven phase-locked loop (PLL) architecture, which has the advantages of high frequency resolution, fast transition time, and small size. These spurious signals depend on the normalized frequency of the DDFS, and they can be dominant if they occur within the PLL loop bandwidth. We suggest that such signals can be eliminated by purposefully creating frequency errors in the developed synthesizer.

  17. Reference architecture of application services for personal wellbeing information management.

    PubMed

    Tuomainen, Mika; Mykkänen, Juha

    2011-01-01

    Personal information management has been proposed as an important enabler for individual empowerment concerning citizens' wellbeing and health information. In the MyWellbeing project in Finland, a strictly citizen-driven concept of "Coper" and related architectural and functional guidelines have been specified. We present a reference architecture and a set of identified application services to support personal wellbeing information management. In addition, the related standards and developments are discussed.

  18. Reference architecture and interoperability model for data mining and fusion in scientific cross-domain infrastructures

    NASA Astrophysics Data System (ADS)

    Haener, Rainer; Waechter, Joachim; Grellet, Sylvain; Robida, Francois

    2017-04-01

    Interoperability is the key factor in establishing scientific research environments and infrastructures, as well as in bringing together heterogeneous, geographically distributed risk management, monitoring, and early warning systems. Based on developments within the European Plate Observing System (EPOS), a reference architecture has been devised that comprises architectural blueprints and interoperability models regarding the specification of business processes and logic as well as the encoding of data, metadata, and semantics. The architectural blueprint is developed on the basis of the so-called service-oriented architecture (SOA) 2.0 paradigm, which combines the intelligence and proactiveness of event-driven architectures with service-oriented architectures. SOA 2.0 supports analysing (Data Mining) both static and real-time data in order to find correlations of disparate information that do not at first appear to be intuitively obvious: analysed data (e.g., seismological monitoring) can be enhanced with relationships discovered by associating them (Data Fusion) with other data (e.g., creepmeter monitoring), with digital models of geological structures, or with the simulation of geological processes. The interoperability model describes the information, communication (conversations) and interactions (choreographies) of all participants involved, as well as the processes for registering, providing, and retrieving information. It is based on the principles of functional integration, implemented via dedicated services communicating over service-oriented and message-driven infrastructures. The services provide their functionality via standardised interfaces: instead of requesting data directly, users share data via services that are built upon specific adapters. This approach replaces tight coupling at the data level with a flexible dependency on loosely coupled services. The main component of the interoperability model is the comprehensive semantic description of the information, business logic and processes on the basis of a minimal set of well-known, established standards. It implements the representation of knowledge with the application of domain-controlled vocabularies to statements about resources, information, facts, and complex matters (ontologies). Seismic experts, for example, would be interested in geological models or borehole measurements at a certain depth, on the basis of which it is possible to correlate and verify seismic profiles. The entire model is built upon standards from the Open Geospatial Consortium (Dictionaries, Service Layer), the International Organisation for Standardisation (Registries, Metadata), and the World Wide Web Consortium (Resource Description Framework, Spatial Data on the Web Best Practices). It has to be emphasised that this approach is scalable to the greatest possible extent: all information necessary in the context of cross-domain infrastructures is referenced via vocabularies and knowledge bases containing statements that provide either the information itself or the resources (service endpoints) from which the information can be retrieved. The entire infrastructure communication is subject to a broker-based business logic integration platform where the information exchanged between the involved participants is managed on the basis of standardised dictionaries, repositories, and registries.
This approach also enables the development of Systems-of-Systems (SoS), which allow the collaboration of autonomous, large scale concurrent, and distributed systems, yet cooperatively interacting as a collective in a common environment.

  19. Prediction model of velocity field around circular cylinder over various Reynolds numbers by fusion convolutional neural networks based on pressure on the cylinder

    NASA Astrophysics Data System (ADS)

    Jin, Xiaowei; Cheng, Peng; Chen, Wen-Li; Li, Hui

    2018-04-01

    A data-driven model is proposed for the prediction of the velocity field around a cylinder by fusion convolutional neural networks (CNNs) using measurements of the pressure field on the cylinder. The model is based on the close relationship between the Reynolds stresses in the wake, the wake formation length, and the base pressure. Numerical simulations of flow around a cylinder at various Reynolds numbers are carried out to establish a dataset capturing the effect of the Reynolds number on various flow properties. The time series of pressure fluctuations on the cylinder is converted into a grid-like spatial-temporal topology to be handled as the input of a CNN. A CNN architecture composed of a fusion of paths with and without a pooling layer is designed. This architecture can capture both accurate spatial-temporal information and features that are invariant to small translations in the temporal dimension of the pressure fluctuations on the cylinder. The CNN is trained using the computational fluid dynamics (CFD) dataset to establish the mapping relationship between the pressure fluctuations on the cylinder and the velocity field around the cylinder. Adam (adaptive moment estimation), an efficient method for processing large-scale and high-dimensional machine learning problems, is employed to implement the optimization algorithm. The trained model is then tested over various Reynolds numbers. The predictions of this model are found to agree well with the CFD results, and the data-driven model successfully learns the underlying flow regimes, i.e., the relationship between the wake structure and the pressure experienced on the surface of the cylinder is well established.
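
    The fusion of a pooled and an unpooled path can be sketched in a few lines of PyTorch; the layer widths, kernel sizes, and input layout below are illustrative assumptions, not the architecture tuned in the paper.

    ```python
    import torch
    import torch.nn as nn

    class FusionCNN(nn.Module):
        """Two-path CNN: one path preserves full spatial-temporal resolution, the
        other pools for invariance to small temporal translations; the feature
        maps are fused before the regression head."""
        def __init__(self, n_out):
            super().__init__()
            self.direct = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
            self.pooled = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Upsample(scale_factor=2))
            self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_out))

        def forward(self, x):  # x: (batch, 1, sensors, time), even spatial dims
            fused = torch.cat([self.direct(x), self.pooled(x)], dim=1)
            return self.head(fused)
    ```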

  20. Data to Decisions: Creating a Culture of Model-Driven Drug Discovery.

    PubMed

    Brown, Frank K; Kopti, Farida; Chang, Charlie Zhenyu; Johnson, Scott A; Glick, Meir; Waller, Chris L

    2017-09-01

    Merck & Co., Inc., Kenilworth, NJ, USA, is undergoing a transformation in the way that it prosecutes R&D programs. Through the adoption of a "model-driven" culture, enhanced R&D productivity is anticipated, both in the form of decreased attrition at each stage of the process and by providing a rational framework for understanding and learning from the data generated along the way. This new approach focuses on the concept of a "Design Cycle" that makes use of all the data possible, internally and externally, to drive decision-making. These data can take the form of bioactivity, 3D structures, genomics, pathway, PK/PD, safety data, etc. Synthesis of high-quality data into models utilizing both well-established and cutting-edge methods has been shown to yield high confidence predictions to prioritize decision-making and efficiently reposition resources within R&D. The goal is to design an adaptive research operating plan that uses both modeled data and experiments, rather than just testing, to drive project decision-making. To support this emerging culture, an ambitious information management (IT) program has been initiated to implement a harmonized platform to facilitate the construction of cross-domain workflows to enable data-driven decision-making and the construction and validation of predictive models. These goals are achieved through depositing model-ready data, agile persona-driven access to data, a unified cross-domain predictive model lifecycle management platform, and support for flexible scientist-developed workflows that simplify data manipulation and consume model services. The end-to-end nature of the platform, in turn, not only supports but also drives the culture change by enabling scientists to apply predictive sciences throughout their work and over the lifetime of a project. This shift in mindset for both scientists and IT was driven by an early impactful demonstration of the potential benefits of the platform, in which expert-level early discovery predictive models were made available from familiar desktop tools, such as ChemDraw. This was built using a workflow-driven service-oriented architecture (SOA) on top of the rigorous registration of all underlying model entities.

  1. Porting plasma physics simulation codes to modern computing architectures using the libmrc framework

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Abbott, Stephen

    2015-11-01

    Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g., in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source libmrc framework that has been used to modularize and port three plasma physics codes: the extended MHD code MRCv3 with implicit time integration and curvilinear grids; the OpenGGCM global magnetosphere model; and the particle-in-cell code PSC. libmrc consolidates basic functionality needed for simulations based on structured grids (I/O, load balancing, time integrators), and also introduces a parallel object model that makes it possible to maintain multiple implementations of computational kernels, e.g., on conventional processors and GPUs. It handles data layout conversions and enables us to port performance-critical parts of a code to a new architecture step by step, while the rest of the code can remain unchanged. We will show examples of the performance gains and some physics applications.

  2. Can We Practically Bring Physics-based Modeling Into Operational Analytics Tools?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granderson, Jessica; Bonvini, Marco; Piette, Mary Ann

    Analytics software is increasingly used to improve and maintain operational efficiency in commercial buildings. Energy managers, owners, and operators are using a diversity of commercial offerings often referred to as Energy Information Systems, Fault Detection and Diagnostic (FDD) systems, or more broadly Energy Management and Information Systems, to cost-effectively enable savings on the order of ten to twenty percent. Most of these systems use data from meters and sensors, with rule-based and/or data-driven models to characterize system and building behavior. In contrast, physics-based modeling uses first-principles and engineering models (e.g., efficiency curves) to characterize system and building behavior. Historically, these physics-based approaches have been used in the design phase of the building life cycle or in retrofit analyses. Researchers have begun exploring the benefits of integrating physics-based models with operational data analytics tools, bridging the gap between design and operations. In this paper, we detail the development and operator use of a software tool that uses hybrid data-driven and physics-based approaches to cooling plant FDD and optimization. Specifically, we describe the system architecture, models, and FDD and optimization algorithms; advantages and disadvantages with respect to purely data-driven approaches; and practical implications for scaling and replicating these techniques. Finally, we conclude with an evaluation of the future potential for such tools and future research opportunities.

  3. The dynamics of architectural complexity on coral reefs under climate change.

    PubMed

    Bozec, Yves-Marie; Alvarez-Filip, Lorenzo; Mumby, Peter J

    2015-01-01

    One striking feature of coral reef ecosystems is the complex benthic architecture which supports diverse and abundant fauna, particularly of reef fish. Reef-building corals are in decline worldwide, with a corresponding loss of live coral cover resulting in a loss of architectural complexity. Understanding the dynamics of the reef architecture is therefore important to envision the ability of corals to maintain functional habitats in an era of climate change. Here, we develop a mechanistic model of reef topographical complexity for contemporary Caribbean reefs. The model describes the dynamics of corals and other benthic taxa under climate-driven disturbances (hurricanes and coral bleaching). Corals have a simplified shape with explicit diameter and height, allowing species-specific calculation of their colony surface and volume. Growth and the mechanical (hurricanes) and biological (parrotfish) erosion of carbonate skeletons drive the pace of extension or reduction of the upper reef surface, the net outcome being quantified by a simple surface roughness index (reef rugosity). The model accurately simulated the decadal changes of coral cover observed in Cozumel (Mexico) between 1984 and 2008, and provided a realistic hindcast of coral colony-scale (1-10 m) changes in rugosity over the same period. We then projected future changes of Caribbean reef rugosity in response to global warming. Under severe and frequent thermal stress, the model predicted a dramatic loss of rugosity over the next two or three decades. Critically, reefs with managed parrotfish populations were able to delay the general loss of architectural complexity, as the benefits of grazing in maintaining living coral outweighed the bioerosion of dead coral skeletons. Overall, this model provides the first explicit projections of reef rugosity in a warming climate, and highlights the need to combine local (protecting and restoring high grazing) and global (mitigation of greenhouse gas emissions) interventions for the persistence of functional reef habitats. © 2014 John Wiley & Sons Ltd.
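
    The roughness index itself is straightforward: rugosity is the contoured surface distance divided by the linear distance along a transect. A minimal Python version, with a uniform horizontal step as a simplifying assumption:

    ```python
    import numpy as np

    def rugosity(heights, dx=0.1):
        """Rugosity index: contoured length of a height transect divided by its
        linear length (1.0 = perfectly flat, larger = more complex)."""
        dz = np.diff(heights)
        contoured = np.sum(np.hypot(dx, dz))  # sum of segment lengths
        return contoured / (dx * (len(heights) - 1))
    ```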

  4. Modeling Adaptable Business Service for Enterprise Collaboration

    NASA Astrophysics Data System (ADS)

    Boukadi, Khouloud; Vincent, Lucien; Burlat, Patrick

    Nowadays, a Service Oriented Architecture (SOA) seems to be one of the most promising paradigms for leveraging enterprise information systems. SOA creates opportunities for enterprises to provide value-added services tailored for on-demand enterprise collaboration. With the emergence and rapid development of Web services technologies, SOA is being paid increasing attention and has become widespread. In spite of the popularity of SOA, a standardized framework for modeling and implementing business services is still in progress. For the purpose of supporting these service-oriented solutions, we adopt a model-driven development approach. This paper outlines the Contextual Service Oriented Modeling and Analysis (CSOMA) methodology and presents UML profiles for PIM-level service-oriented architectural modeling, as well as its corresponding meta-models. The proposed PIM (Platform Independent Model) describes the business SOA at a high level of abstraction, regardless of the techniques involved in the application deployment. In addition, all essential service-specific concerns required for delivering quality and context-aware services are covered. Among the advantages of this approach are that it is generic, and thus not closely tied to Web service technology, and that it explicitly treats service adaptability during the design stage.

  5. End-to-end observatory software modeling using domain specific languages

    NASA Astrophysics Data System (ADS)

    Filgueira, José M.; Bec, Matthieu; Liu, Ning; Peng, Chien; Soto, José

    2014-07-01

    The Giant Magellan Telescope (GMT) is a 25-meter extremely large telescope that is being built by an international consortium of universities and research institutions. Its software and control system is being developed using a set of Domain Specific Languages (DSLs) that supports a model-driven development methodology integrated with an Agile management process. This approach promotes the use of standardized models that capture the component architecture of the system, facilitate the construction of technical specifications in a uniform way, ease communication between developers and domain experts, and provide a framework to ensure the successful integration of the software subsystems developed by the GMT partner institutions.

  6. Evaluation of software maintainability with openEHR - a comparison of architectures.

    PubMed

    Atalag, Koray; Yang, Hong Yul; Tempero, Ewan; Warren, James R

    2014-11-01

    To assess whether it is easier to maintain a clinical information system developed using openEHR model-driven development versus mainstream methods. A new open source application (GastrOS) has been developed following openEHR's multi-level modelling approach using .Net/C#, based on the same requirements as an existing clinically used application developed using Microsoft Visual Basic and an Access database. In the latter, almost all of the domain knowledge was embedded in the software code and data model. The same domain knowledge has been expressed as a set of openEHR Archetypes in GastrOS. We then introduced eight real-world change requests that had accumulated during live clinical usage, and implemented these in both systems while measuring the time for various development tasks and the change in software size for each change request. Overall, it took half the time to implement changes in GastrOS. However, it was the more difficult application to modify for one change request, suggesting the nature of change is also important. It was not possible to implement changes by modelling only. Comparison of relative measures of time and software size change within each application highlights how architectural differences affected maintainability across change requests. The use of openEHR model-driven development can result in better software maintainability. The degree to which openEHR affects software maintainability depends on the extent and nature of domain knowledge involved in changes. Although we used relative measures for time and software size, confounding factors could not be totally excluded, as a controlled study design was not feasible. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. Real-Time MENTAT programming language and architecture

    NASA Technical Reports Server (NTRS)

    Grimshaw, Andrew S.; Silberman, Ami; Liu, Jane W. S.

    1989-01-01

    Real-time MENTAT, a programming environment designed to simplify the task of programming real-time applications in distributed and parallel environments, is described. It is based on the same data-driven computation model and object-oriented programming paradigm as MENTAT. It provides an easy-to-use mechanism to exploit parallelism, language constructs for the expression and enforcement of timing constraints, and run-time support for scheduling and executing real-time programs. The real-time MENTAT programming language is an extended C++. The extensions are added to facilitate automatic detection of data flow and generation of data flow graphs, to express the timing constraints of individual granules of computation, and to provide scheduling directives for the runtime system. A high-level view of the real-time MENTAT system architecture and programming language constructs is provided.

  8. Using Real and Simulated TNOs to Constrain the Outer Solar System

    NASA Astrophysics Data System (ADS)

    Kaib, Nathan

    2018-04-01

    Over the past 2-3 decades our understanding of the outer solar system’s history and current state has evolved dramatically. An explosion in the number of detected trans-Neptunian objects (TNOs) coupled with simultaneous advances in numerical models of orbital dynamics has driven this rapid evolution. However, successfully constraining the orbital architecture and evolution of the outer solar system requires accurately comparing simulation results with observational datasets. This process is challenging because observed datasets are influenced by orbital discovery biases as well as TNO size and albedo distributions. Meanwhile, such influences are generally absent from numerical results. Here I will review recent work I and others have undertaken using numerical simulations in concert with catalogs of observed TNOs to constrain the outer solar system’s current orbital architecture and past evolution.

  9. Failure analysis of woven and braided fabric reinforced composites

    NASA Technical Reports Server (NTRS)

    Naik, Rajiv A.

    1994-01-01

    A general purpose micromechanics analysis that discretely models the yarn architecture within the textile repeating unit cell was developed to predict overall, three dimensional, thermal and mechanical properties, damage initiation and progression, and strength. This analytical technique was implemented in a user-friendly, personal computer-based, menu-driven code called Textile Composite Analysis for Design (TEXCAD). TEXCAD was used to analyze plain weave and 2x2, 2-D triaxial braided composites. The calculated tension, compression, and shear strengths correlated well with available test data for both woven and braided composites. Parametric studies were performed on both woven and braided architectures to investigate the effects of parameters such as yarn size, yarn spacing, yarn crimp, braid angle, and overall fiber volume fraction on the strength properties of the textile composite.

  10. Educational JavaBeans: a Requirements Driven Architecture.

    ERIC Educational Resources Information Center

    Hall, Jon; Rapanotti, Lucia

    This paper investigates, through a case study, the development of a software architecture that is compatible with a system's high-level requirements. The case study is an example of an extended customer/supplier relationship (post-point of sale support) involved in e-universities and is representative of a class of enterprise without current…

  11. A Socio-Cognitive Approach to Knowledge Construction in Design Studio through Blended Learning

    ERIC Educational Resources Information Center

    Kocaturk, Tuba

    2017-01-01

    This paper results from an educational research project that was undertaken by the School of Architecture, at the University of Liverpool funded by the Higher Education Academy in UK. The research explored technology driven shifts in architectural design studio education, identified their cognitive effects on design learning and developed an…

  12. Fluid-driven origami-inspired artificial muscles.

    PubMed

    Li, Shuguang; Vogt, Daniel M; Rus, Daniela; Wood, Robert J

    2017-12-12

    Artificial muscles hold promise for safe and powerful actuation for myriad common machines and robots. However, the design, fabrication, and implementation of artificial muscles are often limited by their material costs, operating principle, scalability, and single-degree-of-freedom contractile actuation motions. Here we propose an architecture for fluid-driven origami-inspired artificial muscles. This concept requires only a compressible skeleton, a flexible skin, and a fluid medium. A mechanical model is developed to explain the interaction of the three components. A fabrication method is introduced to rapidly manufacture low-cost artificial muscles using various materials and at multiple scales. The artificial muscles can be programmed to achieve multiaxial motions including contraction, bending, and torsion. These motions can be aggregated into systems with multiple degrees of freedom, which are able to produce controllable motions at different rates. Our artificial muscles can be driven by fluids at negative pressures (relative to ambient). This feature makes actuation safer than most other fluidic artificial muscles that operate with positive pressures. Experiments reveal that these muscles can contract over 90% of their initial lengths, generate stresses of ∼600 kPa, and produce peak power densities over 2 kW/kg - all equal to, or in excess of, natural muscle. This architecture for artificial muscles opens the door to rapid design and low-cost fabrication of actuation systems for numerous applications at multiple scales, ranging from miniature medical devices to wearable robotic exoskeletons to large deployable structures for space exploration. Copyright © 2017 the Author(s). Published by PNAS.

  13. Fluid-driven origami-inspired artificial muscles

    PubMed Central

    Li, Shuguang; Vogt, Daniel M.; Rus, Daniela; Wood, Robert J.

    2017-01-01

    Artificial muscles hold promise for safe and powerful actuation for myriad common machines and robots. However, the design, fabrication, and implementation of artificial muscles are often limited by their material costs, operating principle, scalability, and single-degree-of-freedom contractile actuation motions. Here we propose an architecture for fluid-driven origami-inspired artificial muscles. This concept requires only a compressible skeleton, a flexible skin, and a fluid medium. A mechanical model is developed to explain the interaction of the three components. A fabrication method is introduced to rapidly manufacture low-cost artificial muscles using various materials and at multiple scales. The artificial muscles can be programed to achieve multiaxial motions including contraction, bending, and torsion. These motions can be aggregated into systems with multiple degrees of freedom, which are able to produce controllable motions at different rates. Our artificial muscles can be driven by fluids at negative pressures (relative to ambient). This feature makes actuation safer than most other fluidic artificial muscles that operate with positive pressures. Experiments reveal that these muscles can contract over 90% of their initial lengths, generate stresses of ∼600 kPa, and produce peak power densities over 2 kW/kg—all equal to, or in excess of, natural muscle. This architecture for artificial muscles opens the door to rapid design and low-cost fabrication of actuation systems for numerous applications at multiple scales, ranging from miniature medical devices to wearable robotic exoskeletons to large deployable structures for space exploration. PMID:29180416

  14. Fluid-driven origami-inspired artificial muscles

    NASA Astrophysics Data System (ADS)

    Li, Shuguang; Vogt, Daniel M.; Rus, Daniela; Wood, Robert J.

    2017-12-01

    Artificial muscles hold promise for safe and powerful actuation for myriad common machines and robots. However, the design, fabrication, and implementation of artificial muscles are often limited by their material costs, operating principle, scalability, and single-degree-of-freedom contractile actuation motions. Here we propose an architecture for fluid-driven origami-inspired artificial muscles. This concept requires only a compressible skeleton, a flexible skin, and a fluid medium. A mechanical model is developed to explain the interaction of the three components. A fabrication method is introduced to rapidly manufacture low-cost artificial muscles using various materials and at multiple scales. The artificial muscles can be programed to achieve multiaxial motions including contraction, bending, and torsion. These motions can be aggregated into systems with multiple degrees of freedom, which are able to produce controllable motions at different rates. Our artificial muscles can be driven by fluids at negative pressures (relative to ambient). This feature makes actuation safer than most other fluidic artificial muscles that operate with positive pressures. Experiments reveal that these muscles can contract over 90% of their initial lengths, generate stresses of ˜600 kPa, and produce peak power densities over 2 kW/kg—all equal to, or in excess of, natural muscle. This architecture for artificial muscles opens the door to rapid design and low-cost fabrication of actuation systems for numerous applications at multiple scales, ranging from miniature medical devices to wearable robotic exoskeletons to large deployable structures for space exploration.

  15. EVA/ORU model architecture using RAMCOST

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.; Park, Eui H.; Wang, Y. M.; Bretoi, R.

    1990-01-01

    A parametrically driven simulation model is presented in order to provide detailed insight into the effects of various input parameters in the life testing of a modular space suit. The RAMCOST model employed is a user-oriented simulation model for studying the life-cycle costs of designs under conditions of uncertainty. The results obtained from the EVA simulated model are used to assess various mission life testing parameters such as the number of joint motions per EVA cycle time, part availability, and number of inspection requirements. RAMCOST first simulates EVA completion for NASA application using a probabilistic PERT-like network. With the mission time heuristically determined, RAMCOST then models different orbital replacement unit policies with special application to the astronaut's space suit functional designs.

  16. A Roadmap for caGrid, an Enterprise Grid Architecture for Biomedical Research

    PubMed Central

    Saltz, Joel; Hastings, Shannon; Langella, Stephen; Oster, Scott; Kurc, Tahsin; Payne, Philip; Ferreira, Renato; Plale, Beth; Goble, Carole; Ervin, David; Sharma, Ashish; Pan, Tony; Permar, Justin; Brezany, Peter; Siebenlist, Frank; Madduri, Ravi; Foster, Ian; Shanbhag, Krishnakant; Mead, Charlie; Hong, Neil Chue

    2012-01-01

    caGrid is a middleware system which combines the Grid computing, service-oriented architecture, and model-driven architecture paradigms to support development of interoperable data and analytical resources and federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG™) program. This program is established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from biomedical research, Grid computing, and high performance computing communities. PMID:18560123

  17. A roadmap for caGrid, an enterprise Grid architecture for biomedical research.

    PubMed

    Saltz, Joel; Hastings, Shannon; Langella, Stephen; Oster, Scott; Kurc, Tahsin; Payne, Philip; Ferreira, Renato; Plale, Beth; Goble, Carole; Ervin, David; Sharma, Ashish; Pan, Tony; Permar, Justin; Brezany, Peter; Siebenlist, Frank; Madduri, Ravi; Foster, Ian; Shanbhag, Krishnakant; Mead, Charlie; Chue Hong, Neil

    2008-01-01

    caGrid is a middleware system which combines the Grid computing, service-oriented architecture, and model-driven architecture paradigms to support development of interoperable data and analytical resources and federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG) program. This program is established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from biomedical research, Grid computing, and high performance computing communities.

  18. Linking and Combining Distributed Operations Facilities using NASA's "GMSEC" Systems Architectures

    NASA Technical Reports Server (NTRS)

    Smith, Danford; Grubb, Thomas; Esper, Jaime

    2008-01-01

    NASA's Goddard Mission Services Evolution Center (GMSEC) ground system architecture has been in development since late 2001, has successfully supported eight orbiting satellites and is being applied to many of NASA's future missions. GMSEC can be considered an event-driven service-oriented architecture built around a publish/subscribe message bus middleware. This paper briefly discusses the GMSEC technical approaches which have led to significant cost savings and risk reduction for NASA missions operated at the Goddard Space Flight Center (GSFC). The paper then focuses on the development and operational impacts of extending the architecture across multiple mission operations facilities.
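
    A minimal sketch of the publish/subscribe pattern at the core of such an architecture (a toy bus, not the GMSEC API; the subject string is illustrative): components stay decoupled because producers publish to subjects while any number of consumers subscribe independently.

      from collections import defaultdict

      class MessageBus:
          """Toy publish/subscribe bus; real middleware adds queuing, QoS, security."""
          def __init__(self):
              self.subscribers = defaultdict(list)

          def subscribe(self, subject, handler):
              self.subscribers[subject].append(handler)

          def publish(self, subject, message):
              for handler in self.subscribers[subject]:
                  handler(message)

      bus = MessageBus()
      bus.subscribe("MISSION.SAT1.TLM", lambda m: print("archiver got", m))
      bus.subscribe("MISSION.SAT1.TLM", lambda m: print("display got", m))
      bus.publish("MISSION.SAT1.TLM", {"battery_v": 28.1})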

  19. Enabling Data-Driven Methodologies Across the Data Lifecycle and Ecosystem

    NASA Astrophysics Data System (ADS)

    Doyle, R. J.; Crichton, D.

    2017-12-01

    NASA has unlocked unprecedented scientific knowledge through exploration of the Earth, our solar system, and the larger universe. NASA is generating enormous amounts of data that are challenging traditional approaches to capturing, managing, analyzing and ultimately gaining scientific understanding from science data. New architectures, capabilities and methodologies are needed to span the entire observing system, from spacecraft to archive, while integrating data-driven discovery and analytic capabilities. NASA data have a definable lifecycle, from remote collection point to validated accessibility in multiple archives. Data challenges must be addressed across this lifecycle, to capture opportunities and avoid decisions that may limit or compromise what is achievable once data arrives at the archive. Data triage may be necessary when the collection capacity of the sensor or instrument overwhelms data transport or storage capacity. By migrating computational and analytic capability to the point of data collection, informed decisions can be made about which data to keep; in some cases, to close observational decision loops onboard, to enable attending to unexpected or transient phenomena. Along a different dimension than the data lifecycle, scientists and other end-users must work across an increasingly complex data ecosystem, where the range of relevant data is rarely owned by a single institution. To operate effectively, scalable data architectures and community-owned information models become essential. NASA's Planetary Data System is having success with this approach. Finally, there is the difficult challenge of reproducibility and trust. While data provenance techniques will be part of the solution, future interactive analytics environments must support an ability to provide a basis for a result: relevant data source and algorithms, uncertainty tracking, etc., to assure scientific integrity and to enable confident decision making. Advances in data science offer opportunities to gain new insights from space missions and their vast data collections. We are working to innovate new architectures, exploit emerging technologies, develop new data-driven methodologies, and transfer them across disciplines, while working across the dual dimensions of the data lifecycle and the data ecosystem.

  20. SATware: A Semantic Approach for Building Sentient Spaces

    NASA Astrophysics Data System (ADS)

    Massaguer, Daniel; Mehrotra, Sharad; Vaisenberg, Ronen; Venkatasubramanian, Nalini

    This chapter describes the architecture of a semantic-based middleware environment for building sensor-driven sentient spaces. The proposed middleware explicitly models sentient space semantics (i.e., entities, spaces, activities) and supports mechanisms to map sensor observations to the state of the sentient space. We argue that such a semantic approach provides a powerful programming environment for building sensor spaces. In addition, the approach provides natural ways to exploit semantics for a variety of purposes, including scheduling under resource constraints and sensor recalibration.
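
    A minimal sketch of the observation-to-state mapping such middleware performs (entity, space, and sensor names are invented for illustration):

      # Toy mapping from raw sensor observations to semantic sentient-space state.
      state = {"room_101": {"occupied": False, "temperature_c": None}}

      def on_observation(obs):
          """Interpret a raw observation and update the semantic state."""
          if obs["sensor"] == "motion" and obs["value"] > 0.5:
              state[obs["space"]]["occupied"] = True
          elif obs["sensor"] == "thermometer":
              state[obs["space"]]["temperature_c"] = obs["value"]

      on_observation({"sensor": "motion", "space": "room_101", "value": 0.9})
      on_observation({"sensor": "thermometer", "space": "room_101", "value": 22.5})
      print(state)   # applications query semantic state, not raw sensor feeds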

  1. Compiler-Driven Performance Optimization and Tuning for Multicore Architectures

    DTIC Science & Technology

    2015-04-10

    develop a powerful system for auto-tuning of library routines and compute-intensive kernels, driven by the Pluto system for multicores that we are developing. The work here is motivated by recent advances in two major areas of ... an automatic C-to-CUDA code generator using a polyhedral compiler transformation framework. We have used and adapted PLUTO (our state-of-the-art tool ...)

  2. Implementing a Dynamic Database-Driven Course Using LAMP

    ERIC Educational Resources Information Center

    Laverty, Joseph Packy; Wood, David; Turchek, John

    2011-01-01

    This paper documents the formulation of a database driven open source architecture web development course. The design of a web-based curriculum faces many challenges: a) relative emphasis of client and server-side technologies, b) choice of a server-side language, and c) the cost and efficient delivery of a dynamic web development, database-driven…

  3. Advantages of Brahms for Specifying and Implementing a Multiagent Human-Robotic Exploration System

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2003-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, all-terrain vehicles, robotic assistant, crew in a local habitat, and mission support team. Software processes ('agents'), implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a runtime system. Thus, Brahms provides a language, engine, and system builder's toolkit for specifying and implementing multiagent systems.

  4. LDRD project final report : hybrid AI/cognitive tactical behavior framework for LVC.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Djordjevich, Donna D.; Xavier, Patrick Gordon; Brannon, Nathan Gregory

    This Lab-Directed Research and Development (LDRD) sought to develop technology that enhances scenario construction speed, entity behavior robustness, and scalability in Live-Virtual-Constructive (LVC) simulation. We investigated issues in both simulation architecture and behavior modeling. We developed path-planning technology that improves the ability to express intent in the planning task while still permitting an efficient search algorithm. An LVC simulation demonstrated how this enables 'one-click' layout of squad tactical paths, as well as dynamic re-planning for simulated squads and for real and simulated mobile robots. We identified human response latencies that can be exploited in parallel/distributed architectures. We did an experimental study to determine where parallelization would be productive in Umbra-based force-on-force (FOF) simulations. We developed and implemented a data-driven simulation composition approach that solves entity class hierarchy issues and supports assurance of simulation fairness. Finally, we proposed a flexible framework to enable integration of multiple behavior modeling components that model working memory phenomena with different degrees of sophistication.

  5. Model-driven privacy assessment in the smart grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knirsch, Fabian; Engel, Dominik; Neureiter, Christian

    In a smart grid, data and information are transported, transmitted, stored, and processed with various stakeholders having to cooperate effectively. Furthermore, personal data is the key to many smart grid applications and therefore privacy impacts have to be taken into account. For an effective smart grid, well integrated solutions are crucial and for achieving a high degree of customer acceptance, privacy should already be considered at design time of the system. To assist system engineers in the early design phase, frameworks for the automated privacy evaluation of use cases are important. For evaluation, use cases for services and software architectures need to be formally captured in a standardized and commonly understood manner. In order to ensure this common understanding for all kinds of stakeholders, reference models have recently been developed. In this paper we present a model-driven approach for the automated assessment of such services and software architectures in the smart grid that builds on the standardized reference models. The focus of qualitative and quantitative evaluation is on privacy. For evaluation, the framework draws on use cases from the University of Southern California microgrid.
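
    A minimal sketch, under assumed model fields, of how an automated assessment can walk a formally captured use case and flag personal-data flows that leave a trusted domain (illustrative only, not the paper's framework):

      # Toy model-driven privacy check over a use-case model; all names invented.
      use_case = {
          "flows": [
              {"from": "smart_meter", "to": "utility_billing",
               "data": "consumption_15min", "personal": True},
              {"from": "smart_meter", "to": "grid_monitor",
               "data": "aggregate_load", "personal": False},
          ]
      }
      TRUSTED = {"smart_meter", "home_gateway"}   # customer-domain components

      def privacy_findings(model):
          """Flag personal data flowing to components outside the trusted domain."""
          return [f"{f['data']}: {f['from']} -> {f['to']}"
                  for f in model["flows"]
                  if f["personal"] and f["to"] not in TRUSTED]

      print(privacy_findings(use_case))   # flags the billing flow for review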

  6. Climate, weather, space weather: model development in an operational context

    NASA Astrophysics Data System (ADS)

    Folini, Doris

    2018-05-01

    Aspects of operational modeling for climate, weather, and space weather forecasts are contrasted, with a particular focus on the somewhat conflicting demands of "operational stability" versus "dynamic development" of the involved models. Some common key elements are identified, indicating potential for fruitful exchange across communities. Operational model development is compelling, driven by factors that broadly fall into four categories: model skill, basic physics, advances in computer architecture, and new aspects to be covered, from customer needs through physics to observational data. Evaluation of model skill as part of the operational chain goes beyond an automated skill score. Permanent interaction between "pure research" and "operational forecast" people is beneficial to both sides. This includes joint model development projects, although ultimate responsibility for the operational code remains with the forecast provider. The pace of model development reflects operational lead times. The points are illustrated with selected examples, many of which reflect the author's background and personal contacts, notably with the Swiss Weather Service and the Max Planck Institute for Meteorology, Hamburg, Germany. In view of current and future challenges, large collaborations covering a range of expertise are a must - within and across climate, weather, and space weather. To profit from and cope with the rapid progress of computer architectures, supercomputing centers must form part of the team.

  7. The PDS4 Information Model and its Role in Agile Science Data Curation

    NASA Astrophysics Data System (ADS)

    Hughes, J. S.; Crichton, D.

    2017-12-01

    PDS4 is an information model-driven service architecture supporting the capture, management, distribution and integration of massive planetary science data held in distributed data archives world-wide. The PDS4 Information Model (IM), the core element of the architecture, was developed using lessons learned from 20 years of archiving Planetary Science Data and best practices for information model development. The foundational principles were adopted from the Open Archival Information System (OAIS) Reference Model (ISO 14721), the Metadata Registry Specification (ISO/IEC 11179), and W3C XML (Extensible Markup Language) specifications. These provided, respectively, an object oriented model for archive information systems, a comprehensive schema for data dictionaries and hierarchical governance, and rules for encoding documents electronically. The PDS4 Information Model is unique in that it drives the PDS4 infrastructure by providing the representation of concepts and their relationships, constraints, rules, and operations; a sharable, stable, and organized set of information requirements; and machine parsable definitions that are suitable for configuring and generating code. This presentation will provide an overview of the PDS4 Information Model and how it is being leveraged to develop and evolve the PDS4 infrastructure and enable agile curation of over 30 years of science data collected by the international Planetary Science community.
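
    A minimal sketch of what "machine parsable definitions suitable for configuring and generating code" can mean in practice: generic validation code configured entirely by a model held as data. The attribute names below are a drastically simplified stand-in for the real PDS4 IM.

      import re

      # The "information model" is data; generic code is configured from it.
      MODEL = {
          "Product_Observational": {
              "required": ["logical_identifier", "version_id", "title"],
              "rules": {"version_id": r"^\d+\.\d+$"},
          }
      }

      def validate(label, product_class):
          """Check a label dict against the model's definition of its class."""
          spec = MODEL[product_class]
          errors = ["missing " + a for a in spec["required"] if a not in label]
          errors += [f"bad {a}: {label[a]!r}"
                     for a, rx in spec["rules"].items()
                     if a in label and not re.match(rx, str(label[a]))]
          return errors

      print(validate({"logical_identifier": "urn:nasa:pds:demo",
                      "version_id": "1.0"}, "Product_Observational"))
      # -> ['missing title']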

  8. Cooperating reduction machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kluge, W.E.

    1983-11-01

    This paper presents a concept and a system architecture for the concurrent execution of program expressions of a concrete reduction language based on lambda-expressions. If formulated appropriately, these expressions are well-suited for concurrent execution, following a demand-driven model of computation. In particular, recursive program expressions with nonlinear expansion may, at run time, recursively be partitioned into a hierarchy of independent subexpressions which can be reduced by a corresponding hierarchy of virtual reduction machines. This hierarchy unfolds and collapses dynamically, with virtual machines recursively assuming the role of masters that create and eventually terminate, or synchronize with, slaves. The paper also proposes a nonhierarchically organized system of reduction machines, each featuring a stack architecture, that effectively supports the allocation of virtual machines to the real machines of the system in compliance with their hierarchical order of creation and termination. 25 references.
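
    A minimal sketch of demand-driven reduction with a master/slave hierarchy, using Python futures in place of virtual reduction machines (illustrative only; a fixed-size pool can deadlock on deeper trees, which is one reason the paper's virtual machines are created and terminated recursively):

      from concurrent.futures import ThreadPoolExecutor

      def reduce_expr(expr, pool):
          """Reduce ('+', left, right) trees; integers are already normal forms."""
          if isinstance(expr, int):
              return expr
          _op, left, right = expr          # only '+' exists in this toy language
          # The master spawns two independent slaves and blocks until both reduce.
          f_left = pool.submit(reduce_expr, left, pool)
          f_right = pool.submit(reduce_expr, right, pool)
          return f_left.result() + f_right.result()

      with ThreadPoolExecutor(max_workers=4) as pool:
          print(reduce_expr(('+', ('+', 1, 2), ('+', 3, 4)), pool))   # 10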

  9. Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach

    PubMed Central

    Kong, Weiwei; Hu, Tianjiang; Zhang, Daibing; Shen, Lincheng; Zhang, Jianwei

    2017-01-01

    One of the greatest challenges for fixed-wing unmanned aircraft vehicles (UAVs) is safe landing. Hereafter, an on-ground deployed visual approach is developed in this paper. This approach is definitely suitable for landing within the global navigation satellite system (GNSS)-denied environments. As for applications, the deployed guidance system makes full use of the ground computing resource and feedbacks the aircraft’s real-time localization to its on-board autopilot. Under such circumstances, a separate long baseline stereo architecture is proposed to possess an extendable baseline and wide-angle field of view (FOV) against the traditional fixed baseline schemes. Furthermore, accuracy evaluation of the new type of architecture is conducted by theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach. PMID:28629189

  10. Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach.

    PubMed

    Kong, Weiwei; Hu, Tianjiang; Zhang, Daibing; Shen, Lincheng; Zhang, Jianwei

    2017-06-19

    One of the greatest challenges for fixed-wing unmanned aircraft vehicles (UAVs) is safe landing. Hereafter, an on-ground deployed visual approach is developed in this paper. This approach is definitely suitable for landing within the global navigation satellite system (GNSS)-denied environments. As for applications, the deployed guidance system makes full use of the ground computing resource and feedbacks the aircraft's real-time localization to its on-board autopilot. Under such circumstances, a separate long baseline stereo architecture is proposed to possess an extendable baseline and wide-angle field of view (FOV) against the traditional fixed baseline schemes. Furthermore, accuracy evaluation of the new type of architecture is conducted by theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach.

  11. HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition.

    PubMed

    Lagorce, Xavier; Orchard, Garrick; Galluppi, Francesco; Shi, Bertram E; Benosman, Ryad B

    2017-07-01

    This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model. First layer feature units operate on groups of pixels, while subsequent layer feature units operate on the output of lower level feature units. We report results on a previously published 36 class character recognition task and a four class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven class moving face recognition task, achieving 79 percent accuracy.
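
    A minimal sketch of the time-surface computation, assuming the exponential decay kernel used in the published model; sensor size, decay constant, and neighborhood radius are illustrative:

      import numpy as np

      H, W = 128, 128                       # sensor resolution (illustrative)
      TAU, R = 50e3, 2                      # decay constant (us) and radius
      last_t = np.full((H, W), -np.inf)     # last event timestamp per pixel

      def time_surface(x, y, t):
          """Time-surface context around an interior event (x, y, t)."""
          last_t[y, x] = t
          patch = last_t[y - R:y + R + 1, x - R:x + R + 1]
          return np.exp((patch - t) / TAU)  # recent activity -> values near 1

      # Feed a toy event stream; each event yields a feature for clustering.
      for x, y, t in [(10, 10, 0.0), (11, 10, 1e3), (12, 10, 2e3)]:
          ts = time_surface(x, y, t)
      print(np.round(ts, 2))                # older events decay toward 0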

  12. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1992-01-01

    The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  13. Architecture-driven reuse of code in KASE

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay

    1993-01-01

    In order to support the synthesis of large, complex software systems, we need to focus on issues pertaining to the architectural design of a system in addition to algorithm and data structure design. An approach that is based on abstracting the architectural design of a set of problems in the form of a generic architecture, and providing tools that can be used to instantiate the generic architecture for specific problem instances is presented. Such an approach also facilitates reuse of code between different systems belonging to the same problem class. An application of our approach on a realistic problem is described; the results of the exercise are presented; and how our approach compares to other work in this area is discussed.

  14. Decision Aids Using Heterogeneous Intelligence Analysis

    DTIC Science & Technology

    2010-08-20

    developing a Geocultural service, a software framework and inferencing engine for the Transparent Urban Structures program. The scope of the effort has evolved as the program has matured and now includes multiple data sources, as well as interfaces out to the ONR architectural framework.

  15. Optical, analog and digital domain architectural considerations for visual communications

    NASA Astrophysics Data System (ADS)

    Metz, W. A.

    2008-01-01

    The end of the performance entitlement historically achieved by classic scaling of CMOS devices is within sight, driven ultimately by fundamental limits. Performance entitlements predicted by classic CMOS scaling have progressively failed to be realized in recent process generations due to excessive leakage, increasing interconnect delays and scaling of gate dielectrics. Prior to reaching fundamental limits, trends in technology, architecture and economics will pressure the industry to adopt new paradigms. A likely response is to repartition system functions away from digital implementations and into new architectures. Future architectures for visual communications will require extending the implementation into the optical and analog processing domains. The fundamental properties of these domains will in turn give rise to new architectural concepts. The limits of CMOS scaling and impact on architectures will be briefly reviewed. Alternative approaches in the optical, electronic and analog domains will then be examined for advantages, architectural impact and drawbacks.

  16. Model-centric approaches for the development of health information systems.

    PubMed

    Tuomainen, Mika; Mykkänen, Juha; Luostarinen, Heli; Pöyhölä, Assi; Paakkanen, Esa

    2007-01-01

    Modeling is used increasingly in healthcare to increase shared knowledge, to improve the processes, and to document the requirements of the solutions related to health information systems (HIS). There are numerous modeling approaches intended to support these aims, but a careful assessment of their strengths, weaknesses and deficiencies is needed. In this paper, we compare three model-centric approaches in the context of HIS development: the Model-Driven Architecture, Business Process Modeling with BPMN and BPEL, and the HL7 Development Framework. The comparison reveals that all these approaches are viable candidates for the development of HIS. However, they have distinct strengths and abstraction levels, they require local and project-specific adaptation, and they offer varying levels of automation. In addition, illustration of the solutions to the end users must be improved.

  17. Modeling interdependencies between business and communication processes in hospitals.

    PubMed

    Brigl, Birgit; Wendt, Thomas; Winter, Alfred

    2003-01-01

    The optimization and redesign of business processes in hospitals is an important challenge for hospital information management, which has to design and implement a suitable HIS architecture. Nevertheless, there are no tools available that specialize in modeling information-driven business processes and their consequences for the communication between information processing tools. Therefore, we present an approach which facilitates the representation and analysis of business processes and of the resulting communication processes between application components, together with their interdependencies. This approach aims not only to visualize those processes, but also to evaluate whether there are weaknesses in the information processing infrastructure that hinder the smooth implementation of the business processes.

  18. Process-oriented integration and coordination of healthcare services across organizational boundaries.

    PubMed

    Tello-Leal, Edgar; Chiotti, Omar; Villarreal, Pablo David

    2012-12-01

    The paper presents a methodology that follows a top-down approach based on a Model-Driven Architecture for integrating and coordinating healthcare services through cross-organizational processes, enabling organizations to provide high-quality healthcare services and continuous process improvements. The methodology provides a modeling language that enables organizations to conceptualize an integration agreement and to identify and design cross-organizational process models. These models are used for the automatic generation of: the private view of processes each organization should perform to fulfill its role in cross-organizational processes, and Colored Petri Net specifications to implement these processes. A multi-agent system platform provides agents able to interpret Colored Petri Nets to enable the communication between the Healthcare Information Systems for executing the cross-organizational processes. Clinical documents are defined using the HL7 Clinical Document Architecture. This methodology guarantees that important requirements for healthcare services integration and coordination are fulfilled: interoperability between heterogeneous Healthcare Information Systems; ability to cope with changes in cross-organizational processes; guarantee of alignment between the integrated healthcare service solution defined at the organizational level and the solution defined at the technological level; and the distributed execution of cross-organizational processes keeping the organizations' autonomy.
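
    A minimal sketch of the kind of Petri-net interpretation an agent performs to drive a process (an uncolored toy net with invented places and transitions, far simpler than the generated Colored Petri Net specifications):

      # Places hold token counts; a transition fires when all inputs have tokens.
      net = {
          "places": {"referral_sent": 1, "referral_received": 0, "patient_admitted": 0},
          "transitions": [
              {"name": "receive_referral", "in": ["referral_sent"], "out": ["referral_received"]},
              {"name": "admit_patient", "in": ["referral_received"], "out": ["patient_admitted"]},
          ],
      }

      def enabled(net):
          return [t for t in net["transitions"]
                  if all(net["places"][p] > 0 for p in t["in"])]

      def fire(net, t):
          for p in t["in"]:  net["places"][p] -= 1
          for p in t["out"]: net["places"][p] += 1

      while (ts := enabled(net)):           # run the process to completion
          fire(net, ts[0])
          print("fired", ts[0]["name"], "->", net["places"])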

  19. Model-driven requirements engineering (MDRE) for real-time ultra-wide instantaneous bandwidth signal simulation

    NASA Astrophysics Data System (ADS)

    Chang, Daniel Y.; Rowe, Neil C.

    2013-05-01

    While conducting cutting-edge research in a specific domain, we realize that (1) requirements clarity and correctness are crucial to our success [1], (2) hardware is hard to change, so most work is in software requirements development, coding and testing [2], (3) requirements are constantly changing, so that configurability, reusability, scalability, adaptability, modularity and testability are important non-functional attributes [3], (4) cross-domain knowledge is necessary for complex systems [4], and (5) if our research is successful, the results could be applied to other domains with similar problems. In this paper, we propose to use model-driven requirements engineering (MDRE) to model and guide our requirements development, since models are easy to understand, execute, and modify. The domain for our research is Electronic Warfare (EW) real-time ultra-wide instantaneous bandwidth (IBW) signal simulation. The proposed four MDRE models are (1) Switch-and-Filter architecture, (2) multiple parallel data bit streams alignment, (3) post-ADC and pre-DAC bits re-mapping, and (4) Discrete Fourier Transform (DFT) filter bank. This research is unique since the instantaneous bandwidth we are dealing with is in the gigahertz range instead of the conventional megahertz.
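
    Model (4) can be made concrete with a short numerical sketch: an N-point DFT applied to successive N-sample frames splits a wideband input into N uniformly spaced sub-bands (a real channelizer would add a polyphase prototype filter; the parameters here are illustrative):

      import numpy as np

      N = 8                                    # number of channels
      t = np.arange(64)
      x = np.exp(2j * np.pi * (3 / N) * t)     # tone centred in channel 3

      frames = x.reshape(-1, N)                # non-overlapping N-sample frames
      channels = np.fft.fft(frames, axis=1)    # one DFT per frame -> N sub-bands
      power = np.mean(np.abs(channels) ** 2, axis=0)
      print(np.argmax(power))                  # -> 3: the tone lands in channel 3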

  20. Generic Software Architecture for Launchers

    NASA Astrophysics Data System (ADS)

    Carre, Emilien; Gast, Philippe; Hiron, Emmanuel; Leblanc, Alain; Lesens, David; Mescam, Emmanuelle; Moro, Pierre

    2015-09-01

    The definition and reuse of generic software architecture for launchers is not common, for several reasons: the number of European launcher families is very small (Ariane 5 and Vega for these last decades); the real-time constraints (reactivity and determinism needs) are very hard; and low levels of versatility are required (often implying an ad hoc development of the launcher mission). In comparison, satellites are often built on a generic platform made up of reusable hardware building blocks (processors, star-trackers, gyroscopes, etc.) and reusable software building blocks (middleware, TM/TC, On Board Control Procedure, etc.). While some of these reasons are still valid (e.g. the limited number of developments), the increase in available CPU power today makes achievable an approach based on a generic time-triggered middleware (ensuring the full determinism of the system) and a centralised mission and vehicle management (offering more flexibility in the design and facilitating long-term maintenance). This paper presents an example of generic software architecture which could be envisaged for future launchers, based on the previously described principles and supported by model-driven engineering and automatic code generation.
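
    A minimal sketch of the time-triggered idea behind such a middleware (frame length and task names are invented): every task runs in a fixed slot of a fixed-length frame, so the timeline is fully deterministic and an overrun is detected immediately.

      import time

      FRAME_S = 0.010                          # 10 ms minor frame (illustrative)

      def read_sensors(): pass                 # placeholder tasks
      def navigation():   pass
      def control():      pass
      def telemetry():    pass

      SCHEDULE = [read_sensors, navigation, control, telemetry]

      def cyclic_executive(frames):
          """Run each task in its fixed slot; no preemption, no dynamic dispatch."""
          next_deadline = time.monotonic()
          for i in range(frames):
              SCHEDULE[i % len(SCHEDULE)]()            # the only task this frame
              next_deadline += FRAME_S
              delay = next_deadline - time.monotonic()
              if delay < 0:
                  raise RuntimeError("frame overrun")  # determinism violated
              time.sleep(delay)

      cyclic_executive(8)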

  1. Sirepo - Warp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagler, Robert; Moeller, Paul

    Sirepo is an open source framework for cloud computing. The graphical user interface (GUI) for Sirepo, also known as the client, executes in any HTML5 compliant web browser on any computing platform, including tablets. The client is built in JavaScript, making use of the following open source libraries: Bootstrap, which is fundamental for cross-platform web applications; AngularJS, which provides a model–view–controller (MVC) architecture and GUI components; and D3.js, which provides interactive plots and data-driven transformations. The Sirepo server is built on the following Python technologies: Flask, which is a lightweight framework for web development; Jinja, which is a secure and widely used templating language; and Werkzeug, a utility library that is compliant with the WSGI standard. We use Nginx as the HTTP server and proxy, which provides a scalable event-driven architecture. The physics codes supported by Sirepo execute inside a Docker container. One of the codes supported by Sirepo is Warp. Warp is a particle-in-cell (PIC) code designed to simulate high-intensity charged particle beams and plasmas in both the electrostatic and electromagnetic regimes, with a wide variety of integrated physics models and diagnostics. At present, Sirepo supports a small subset of Warp's capabilities. Warp is open source and is part of the Berkeley Lab Accelerator Simulation Toolkit.

  2. A disassembly-driven mechanism explains F-actin-mediated chromosome transport in starfish oocytes

    PubMed Central

    Bun, Philippe; Dmitrieff, Serge; Belmonte, Julio M

    2018-01-01

    While contraction of sarcomeric actomyosin assemblies is well understood, this is not the case for disordered networks of actin filaments (F-actin) driving diverse essential processes in animal cells. For example, at the onset of meiosis in starfish oocytes a contractile F-actin network forms in the nuclear region transporting embedded chromosomes to the assembling microtubule spindle. Here, we addressed the mechanism driving contraction of this 3D disordered F-actin network by comparing quantitative observations to computational models. We analyzed 3D chromosome trajectories and imaged filament dynamics to monitor network behavior under various physical and chemical perturbations. We found no evidence of myosin activity driving network contractility. Instead, our observations are well explained by models based on a disassembly-driven contractile mechanism. We reconstitute this disassembly-based contractile system in silico revealing a simple architecture that robustly drives chromosome transport to prevent aneuploidy in the large oocyte, a prerequisite for normal embryonic development. PMID:29350616

  3. Advanced Mirror Technology Development (AMTD) Project: 3.0 Year Status

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip

    2015-01-01

    Advanced Mirror Technology Development (AMTD) is a funded NASA Strategic Astrophysics Technology project. Begun in 2011, we are in Phase 2 of a multi-year effort. Our objective is to mature toward TRL-6 the critical technologies needed to produce 4-m or larger flight-qualified UVOIR mirrors by 2018 so that a viable astronomy mission can be considered by the 2020 Decadal Review. The developed technology must enable missions capable of both general astrophysics and ultra-high contrast observations of exoplanets. Just as JWST's architecture was driven by its launch vehicle, a future UVOIR mission's architecture (monolithic, segmented or interferometric) will depend on the capacities of future launch vehicles (and budget). Since we cannot predict the future, we must prepare for all potential futures. Therefore, we are pursuing multiple technology paths. AMTD uses a science-driven systems engineering approach. We mature technologies required to enable the highest priority science AND result in a high-performance low-cost low-risk system. One of our key accomplishments is that we have derived engineering specifications for advanced normal-incidence monolithic and segmented mirror systems needed to enable both general astrophysics and ultra-high contrast observations of exoplanets missions as a function of potential launch vehicle and its inherent mass and volume constraints. Another key accomplishment is that we have matured our technology by building and testing hardware. To demonstrate stacked core technology, we built a 400 mm thick mirror. Currently, to demonstrate lateral scalability, we are manufacturing a 1.5 meter mirror. To assist in architecture trade studies, the Engineering team develops Structural, Thermal and Optical Performance (STOP) models of candidate mirror assembly systems including substrates, structures, and mechanisms. These models are validated by tests of full- and subscale components in relevant thermo-vacuum environments. Specific analyses include: maximum mirror substrate size, first fundamental mode frequency (i.e., stiffness) and mass required to fabricate without quilting, survive launch, and achieve stable pointing and maximum thermal time constant.

  4. Security in the Cache and Forward Architecture for the Next Generation Internet

    NASA Astrophysics Data System (ADS)

    Hadjichristofi, G. C.; Hadjicostis, C. N.; Raychaudhuri, D.

    The future Internet architecture will consist predominantly of wireless devices. It is evident at this stage that the TCP/IP protocol that was developed decades ago will not properly support the required network functionalities since contemporary communication profiles tend to be data-driven rather than host-based. To address this paradigm shift in data propagation, a next generation architecture has been proposed, the Cache and Forward (CNF) architecture. This research investigates security aspects of this new Internet architecture. More specifically, we discuss content privacy, secure routing, key management and trust management. We identify security weaknesses of this architecture that need to be addressed and we derive security requirements that should guide future research directions. Aspects of the research can be adopted as a stepping stone as we build the future Internet.

  5. A Distributed Architecture for Tsunami Early Warning and Collaborative Decision-support in Crises

    NASA Astrophysics Data System (ADS)

    Moßgraber, J.; Middleton, S.; Hammitzsch, M.; Poslad, S.

    2012-04-01

    The presentation will describe work on the system architecture that is being developed in the EU FP7 project TRIDEC on "Collaborative, Complex and Critical Decision-Support in Evolving Crises". The challenges for a Tsunami Early Warning System (TEWS) are manifold and the success of a system depends crucially on the system's architecture. A modern warning system following a system-of-systems approach has to integrate various components and sub-systems such as different information sources, services and simulation systems. Furthermore, it has to take into account the distributed and collaborative nature of warning systems. In order to create an architecture that supports the whole spectrum of a modern, distributed and collaborative warning system one must deal with multiple challenges. Obviously, one cannot expect to tackle these challenges adequately with a monolithic system or with a single technology. Therefore, a system architecture providing the blueprints to implement the system-of-systems approach has to combine multiple technologies and architectural styles. At the bottom layer it has to reliably integrate a large set of conventional sensors, such as seismic sensors and sensor networks, buoys and tide gauges, and also innovative and unconventional sensors, such as streams of messages from social media services. At the top layer it has to support collaboration on high-level decision processes and facilitates information sharing between organizations. In between, the system has to process all data and integrate information on a semantic level in a timely manner. This complex communication follows an event-driven mechanism allowing events to be published, detected and consumed by various applications within the architecture. Therefore, at the upper layer the event-driven architecture (EDA) aspects are combined with principles of service-oriented architectures (SOA) using standards for communication and data exchange. The most prominent challenges on this layer include providing a framework for information integration on a syntactic and semantic level, leveraging distributed processing resources for a scalable data processing platform, and automating data processing and decision support workflows.

  6. What Can Causal Networks Tell Us about Metabolic Pathways?

    PubMed Central

    Blair, Rachael Hageman; Kliebenstein, Daniel J.; Churchill, Gary A.

    2012-01-01

    Graphical models describe the linear correlation structure of data and have been used to establish causal relationships among phenotypes in genetic mapping populations. Data are typically collected at a single point in time. Biological processes on the other hand are often non-linear and display time varying dynamics. The extent to which graphical models can recapitulate the architecture of an underlying biological processes is not well understood. We consider metabolic networks with known stoichiometry to address the fundamental question: “What can causal networks tell us about metabolic pathways?”. Using data from an Arabidopsis BaySha population and simulated data from dynamic models of pathway motifs, we assess our ability to reconstruct metabolic pathways using graphical models. Our results highlight the necessity of non-genetic residual biological variation for reliable inference. Recovery of the ordering within a pathway is possible, but should not be expected. Causal inference is sensitive to subtle patterns in the correlation structure that may be driven by a variety of factors, which may not emphasize the substrate-product relationship. We illustrate the effects of metabolic pathway architecture, epistasis and stochastic variation on correlation structure and graphical model-derived networks. We conclude that graphical models should be interpreted cautiously, especially if the implied causal relationships are to be used in the design of intervention strategies. PMID:22496633

  7. A Multi-mission Event-Driven Component-Based System for Support of Flight Software Development, ATLO, and Operations first used by the Mars Science Laboratory (MSL) Project

    NASA Technical Reports Server (NTRS)

    Dehghani, Navid; Tankenson, Michael

    2006-01-01

    This viewgraph presentation reviews the architectural description of the Mission Data Processing and Control System (MPCS). MPCS is an event-driven, multi-mission set of ground data processing components providing uplink, downlink, and data management capabilities, which will support the Mars Science Laboratory (MSL) project as its first target mission. MPCS is designed around these factors: (1) an enabling plug-and-play architecture; (2) strong inheritance from GDS components that have been developed for other flight projects (MER, MRO, DAWN, MSAP) and are currently being used in operations and ATLO; and (3) MPCS components that are Java-based, platform independent, and designed to consume and produce XML-formatted data.

  8. Space Missions Trade Space Generation and Assessment Using JPL Rapid Mission Architecture (RMA) Team Approach

    NASA Technical Reports Server (NTRS)

    Moeller, Robert C.; Borden, Chester; Spilker, Thomas; Smythe, William; Lock, Robert

    2011-01-01

    The JPL Rapid Mission Architecture (RMA) capability is a novel collaborative team-based approach to generate new mission architectures, explore broad trade space options, and conduct architecture-level analyses. RMA studies address feasibility and identify best candidates to proceed to further detailed design studies. Development of RMA first began at JPL in 2007 and has evolved to address the need for rapid, effective early mission architectural development and trade space exploration as a precursor to traditional point design evaluations. The RMA approach integrates a small team of architecture-level experts (typically 6-10 people) to generate and explore a wide-ranging trade space of mission architectures driven by the mission science (or technology) objectives. Group brainstorming and trade space analyses are conducted at a higher level of assessment across multiple mission architectures and systems to enable rapid assessment of a set of diverse, innovative concepts. This paper describes the overall JPL RMA team, process, and high-level approach. Some illustrative results from previous JPL RMA studies are discussed.

  9. Impact of Material and Architecture Model Parameters on the Failure of Woven Ceramic Matrix Composites (CMCs) via the Multiscale Generalized Method of Cells

    NASA Technical Reports Server (NTRS)

    Liu, Kuang C.; Arnold, Steven M.

    2011-01-01

    It is well known that failure of a material is a locally driven event. In the case of ceramic matrix composites (CMCs), significant variations in the microstructure of the composite exist, and their significance for both deformation and life response needs to be assessed. Examples of these variations include changes in the fiber tow shape, tow shifting/nesting, and voids within and between tows. In the present work, the effects of many of these architectural parameters, together with material scatter, on woven ceramic composite properties at the macroscale (woven RUC) are studied to assess their sensitivity. The recently developed Multiscale Generalized Method of Cells methodology is used to determine the overall deformation response, proportional elastic limit (first matrix cracking), and failure under tensile loading conditions. The macroscale responses investigated illustrate the effect of architectural and material parameters on a single RUC representing a five-harness satin weave fabric. Results show that the most critical architectural parameters are the weave void shape and content, with the other parameters being less significant. Variation of the matrix material properties was also studied to illustrate the influence of material variability on the overall features of the composite stress-strain response.

  10. Dshell++: A Component Based, Reusable Space System Simulation Framework

    NASA Technical Reports Server (NTRS)

    Lim, Christopher S.; Jain, Abhinandan

    2009-01-01

    This paper describes the multi-mission Dshell++ simulation framework for high fidelity, physics-based simulation of spacecraft, robotic manipulation and mobility systems. Dshell++ is a C++/Python library which uses modern script-driven object-oriented techniques to allow component reuse and a dynamic run-time interface for complex, high-fidelity simulation of spacecraft and robotic systems. The goal of the Dshell++ architecture is to manage the inherent complexity of physics-based simulations while supporting component model reuse across missions. The framework provides several features that support a large degree of simulation configurability and usability.
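
    As a rough illustration of how a script-driven, component-reuse architecture of this kind works, the sketch below registers simulation model classes in a library and assembles them into a simulation at run time. The names (ModelRegistry, ReactionWheel) are invented for illustration and are not the Dshell++ API; Python is a natural choice here since Dshell++ itself exposes a Python scripting layer.

      # Sketch of script-driven component reuse in a simulation framework.
      # Names (ModelRegistry, ReactionWheel, ...) are illustrative, not Dshell++ APIs.
      class ModelRegistry:
          _models = {}

          @classmethod
          def register(cls, name):
              def deco(model_cls):
                  cls._models[name] = model_cls
                  return model_cls
              return deco

          @classmethod
          def create(cls, name, **params):
              return cls._models[name](**params)

      @ModelRegistry.register("wheel")
      class ReactionWheel:
          def __init__(self, max_torque=0.1):
              self.max_torque = max_torque
          def step(self, dt, command):
              # Clip the commanded torque to the actuator's physical limit.
              return max(-self.max_torque, min(self.max_torque, command))

      # A "mission script" assembles a simulation from reusable parts at run time.
      sim = [ModelRegistry.create("wheel", max_torque=0.2) for _ in range(3)]
      torques = [w.step(0.01, 0.5) for w in sim]
      print(torques)   # each command clipped to the wheel's torque limit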

  11. Reconfiguration of Brain Network Architectures between Resting-State and Complexity-Dependent Cognitive Reasoning.

    PubMed

    Hearne, Luke J; Cocchi, Luca; Zalesky, Andrew; Mattingley, Jason B

    2017-08-30

    Our capacity for higher cognitive reasoning has a measurable limit. This limit is thought to arise from the brain's capacity to flexibly reconfigure interactions between spatially distributed networks. Recent work, however, has suggested that reconfigurations of task-related networks are modest when compared with intrinsic "resting-state" network architecture. Here we combined resting-state and task-driven functional magnetic resonance imaging to examine how flexible, task-specific reconfigurations associated with increasing reasoning demands are integrated within a stable intrinsic brain topology. Human participants (21 males and 28 females) underwent an initial resting-state scan, followed by a cognitive reasoning task involving different levels of complexity, followed by a second resting-state scan. The reasoning task required participants to deduce the identity of a missing element in a 4 × 4 matrix, and item difficulty was scaled parametrically as determined by relational complexity theory. Analyses revealed that external task engagement was characterized by a significant change in functional brain modules. Specifically, resting-state and null-task demand conditions were associated with more segregated brain-network topology, whereas increases in reasoning complexity resulted in merging of resting-state modules. Further increments in task complexity did not change the established modular architecture, but affected selective patterns of connectivity between frontoparietal, subcortical, cingulo-opercular, and default-mode networks. Larger increases in network efficiency within the newly established task modules were associated with higher reasoning accuracy. Our results shed light on the network architectures that underlie external task engagement, and highlight selective changes in brain connectivity supporting increases in task complexity. SIGNIFICANCE STATEMENT Humans have clear limits in their ability to solve complex reasoning problems. It is thought that such limitations arise from flexible, moment-to-moment reconfigurations of functional brain networks. It is less clear how such task-driven adaptive changes in connectivity relate to stable, intrinsic networks of the brain and behavioral performance. We found that increased reasoning demands rely on selective patterns of connectivity within cortical networks that emerged in addition to a more general, task-induced modular architecture. This task-driven architecture reverted to a more segregated resting-state architecture both immediately before and after the task. These findings reveal how flexibility in human brain networks is integral to achieving successful reasoning performance across different levels of cognitive demand. Copyright © 2017 the authors.
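
    For readers unfamiliar with the graph measures involved, the sketch below illustrates on synthetic graphs the kind of modularity comparison the study performs: a more segregated network yields a higher modularity Q and more modules. This is an illustrative reconstruction with networkx, not the authors' analysis pipeline, and the synthetic graph parameters are arbitrary.

      # Sketch: comparing modular segregation of "rest" vs. "task" connectivity
      # graphs (illustrative synthetic graphs, not the authors' fMRI data).
      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities, modularity

      def segregation(graph):
          """Higher modularity Q indicates a more segregated network topology."""
          communities = greedy_modularity_communities(graph)
          return modularity(graph, communities), len(communities)

      rest = nx.planted_partition_graph(4, 25, p_in=0.6, p_out=0.02, seed=1)
      task = nx.planted_partition_graph(2, 50, p_in=0.4, p_out=0.10, seed=1)

      print("rest  Q=%.2f, modules=%d" % segregation(rest))   # more segregated
      print("task  Q=%.2f, modules=%d" % segregation(task))   # modules merged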

  12. Fault tolerant architectures for integrated aircraft electronics systems, task 2

    NASA Technical Reports Server (NTRS)

    Levitt, K. N.; Melliar-Smith, P. M.; Schwartz, R. L.

    1984-01-01

    The architectural basis for an advanced fault tolerant on-board computer to succeed the current generation of fault tolerant computers is examined. The network error tolerant system architecture is studied with particular attention to intercluster configurations and communication protocols, and to refined reliability estimates. The diagnosis of faults, so that appropriate choices for reconfiguration can be made, is discussed. The analysis relates particularly to the recognition of transient faults in a system with tasks at many levels of priority. The demand-driven data-flow architecture, which appears to have possible application in fault tolerant systems, is described, and work investigating the feasibility of automatic generation of aircraft flight control programs from abstract specifications is reported.

  13. Keep it on the edge: The post-mitotic midbody as a polarity signal unit

    PubMed Central

    Lujan, Pablo; Rubio, Teresa; Varsano, Giulia; Köhn, Maja

    2017-01-01

    The maintenance of the epithelial architecture during tissue proliferation is achieved by apical positioning of the midbody after cell division. Consequently, midbody mislocalization contributes to epithelial architecture disruption, a fundamental event during epithelial tumorigenesis. Studies in 3D polarized epithelial MDCK or Caco2 cell models, where midbody misplacement leads to multiple ectopic but fully polarized lumen-containing cysts, revealed that this phenotype can be caused by 2 different scenarios: the loss of mitotic spindle orientation or the loss of asymmetric abscission. In addition, we have recently proposed a third cellular mechanism where the midbody mislocalization is achieved through cytokinesis acceleration driven by the cancer-promoting phosphatase of regenerating liver (PRL)-3. Here we critically review these findings, and we furthermore present new data indicating that midbodies themselves might act as a signal unit for polarization, since they can confer apical characteristics to a basal membrane. PMID:28919938

  14. AMP: a science-driven web-based application for the TeraGrid

    NASA Astrophysics Data System (ADS)

    Woitaszek, M.; Metcalfe, T.; Shorrock, I.

    The Asteroseismic Modeling Portal (AMP) provides a web-based interface for astronomers to run and view simulations that derive the properties of Sun-like stars from observations of their pulsation frequencies. In this paper, we describe the architecture and implementation of AMP, highlighting the lightweight design principles and tools used to produce a functional fully-custom web-based science application in less than a year. Targeted as a TeraGrid science gateway, AMP's architecture and implementation are intended to simplify its orchestration of TeraGrid computational resources. AMP's web-based interface was developed as a traditional standalone database-backed web application using the Python-based Django web development framework, allowing us to leverage the Django framework's capabilities while cleanly separating the user interface development from the grid interface development. We have found this combination of tools flexible and effective for rapid gateway development and deployment.

  15. Model of rhythmic ball bouncing using a visually controlled neural oscillator.

    PubMed

    Avrin, Guillaume; Siegler, Isabelle A; Makarov, Maria; Rodriguez-Ayerbe, Pedro

    2017-10-01

    The present paper investigates the sensory-driven modulations of central pattern generator dynamics that can be expected to reproduce human behavior during rhythmic hybrid tasks. We propose a theoretical model of human sensorimotor behavior able to account for the observed data from the ball-bouncing task. The novel control architecture is composed of a Matsuoka neural oscillator coupled with the environment through visual sensory feedback. The architecture's ability to reproduce human-like performance during the ball-bouncing task in the presence of perturbations is quantified by comparison of simulated and recorded trials. The results suggest that human visual control of the task is achieved online. The adaptive behavior is made possible by a parametric and state control of the limit cycle emerging from the interaction of the rhythmic pattern generator, the musculoskeletal system, and the environment. NEW & NOTEWORTHY The study demonstrates that a behavioral model based on a neural oscillator controlled by visual information is able to accurately reproduce human modulations in a motor action with respect to sensory information during the rhythmic ball-bouncing task. The model attractor dynamics emerging from the interaction between the neuromusculoskeletal system and the environment met task requirements, environmental constraints, and human behavioral choices without relying on movement planning and explicit internal models of the environment. Copyright © 2017 the American Physiological Society.
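
    Since the abstract does not reproduce the oscillator equations, here is a minimal numerical sketch of a two-neuron Matsuoka oscillator of the kind used as the rhythmic pattern generator. The parameter values are illustrative only, not those fitted in the paper, and the visual coupling loop is omitted.

      # Minimal Euler integration of a two-neuron Matsuoka oscillator
      # (illustrative parameters; the paper's visually coupled loop is omitted).
      tau_r, tau_a = 0.05, 0.12   # rise and adaptation time constants (s)
      beta, a, s = 2.5, 2.5, 1.0  # adaptation gain, mutual inhibition, tonic drive
      x = [0.1, 0.0]              # membrane states
      f = [0.0, 0.0]              # fatigue (adaptation) states

      dt, out = 0.001, []
      for step in range(5000):
          y = [max(0.0, v) for v in x]            # firing rates (rectified)
          for i, j in ((0, 1), (1, 0)):
              dx = (-x[i] - beta * f[i] - a * y[j] + s) / tau_r
              df = (-f[i] + y[i]) / tau_a
              x[i] += dt * dx
              f[i] += dt * df
          out.append(y[0] - y[1])                 # alternating oscillator output

      print(min(out), max(out))  # nonzero swing -> sustained limit-cycle oscillation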

  16. The Orion GN and C Data-Driven Flight Software Architecture for Automated Sequencing and Fault Recovery

    NASA Technical Reports Server (NTRS)

    King, Ellis; Hart, Jeremy; Odegard, Ryan

    2010-01-01

    The Orion Crew Exploration Vehicle (CEV) is being designed to include significantly more automation capability than either the Space Shuttle or the International Space Station (ISS). In particular, the vehicle flight software has requirements to accommodate increasingly automated missions throughout all phases of flight. A data-driven flight software architecture will provide an evolvable automation capability to sequence through Guidance, Navigation & Control (GN&C) flight software modes and configurations while maintaining the required flexibility and human control over the automation. This flexibility is a key aspect needed to address the maturation of operational concepts, to permit ground and crew operators to gain trust in the system, and to mitigate unpredictability in human spaceflight. To allow for mission flexibility and reconfigurability, a data-driven approach is being taken to load the mission event plan as well as the flight software artifacts associated with the GN&C subsystem. A database of GN&C-level sequencing data is presented which manages and tracks the mission-specific and algorithm parameters to provide a capability to schedule GN&C events within mission segments. The flight software data schema for performing automated mission sequencing is presented with a concept of operations for interactions with ground and onboard crew members. A prototype architecture for fault identification, isolation and recovery interactions with the automation software is presented and discussed as a forward work item.
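
    A data-driven sequencer of the kind described can be pictured as a table of mission segments and terminating events that the flight software walks through, rather than hard-coded mode logic. The sketch below is a toy illustration of that idea; the segment names, modes, and events are invented and are not Orion GN&C data.

      # Sketch of data-driven mission sequencing: the sequence lives in data,
      # not code. Table contents are invented for illustration.
      MISSION_SEQUENCE = {
          # segment:         (GN&C mode,         event that ends the segment)
          "ascent":          ("ASCENT_GUIDANCE", "MECO"),
          "orbit_insertion": ("OPS_BURN",        "BURN_COMPLETE"),
          "coast":           ("ATTITUDE_HOLD",   "DEORBIT_CMD"),
          "entry":           ("ENTRY_GUIDANCE",  "CHUTE_DEPLOY"),
      }
      SEGMENT_ORDER = ["ascent", "orbit_insertion", "coast", "entry"]

      def run_sequencer(event_stream):
          """Advance through mission segments as terminating events arrive."""
          for segment in SEGMENT_ORDER:
              mode, end_event = MISSION_SEQUENCE[segment]
              print(f"segment={segment}: commanding GN&C mode {mode}")
              while next(event_stream) != end_event:
                  pass  # crew/ground could override here, preserving human control

      run_sequencer(iter(["MECO", "BURN_COMPLETE", "DEORBIT_CMD", "CHUTE_DEPLOY"]))

    Changing the mission then means editing the table, not the flight code, which is the evolvable-automation property the abstract stresses.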

  17. Federated ontology-based queries over cancer data

    PubMed Central

    2012-01-01

    Background Personalised medicine provides patients with treatments that are specific to their genetic profiles. It requires efficient data sharing of disparate data types across a variety of scientific disciplines, such as molecular biology, pathology, radiology and clinical practice. Personalised medicine aims to offer the safest and most effective therapeutic strategy based on the gene variations of each subject. In particular, this is valid in oncology, where knowledge about genetic mutations has already led to new therapies. Current molecular biology techniques (microarrays, proteomics, epigenetic technology and improved DNA sequencing technology) enable better characterisation of cancer tumours. The vast amounts of data, however, coupled with the use of different terms - or semantic heterogeneity - in each discipline makes the retrieval and integration of information difficult. Results Existing software infrastructures for data-sharing in the cancer domain, such as caGrid, support access to distributed information. caGrid follows a service-oriented model-driven architecture. Each data source in caGrid is associated with metadata at increasing levels of abstraction, including syntactic, structural, reference and domain metadata. The domain metadata consists of ontology-based annotations associated with the structural information of each data source. However, caGrid's current querying functionality is given at the structural metadata level, without capitalising on the ontology-based annotations. This paper presents the design of and theoretical foundations for distributed ontology-based queries over cancer research data. Concept-based queries are reformulated to the target query language, where join conditions between multiple data sources are found by exploiting the semantic annotations. The system has been implemented, as a proof of concept, over the caGrid infrastructure. The approach is applicable to other model-driven architectures. A graphical user interface has been developed, supporting ontology-based queries over caGrid data sources. An extensive evaluation of the query reformulation technique is included. Conclusions To support personalised medicine in oncology, it is crucial to retrieve and integrate molecular, pathology, radiology and clinical data in an efficient manner. The semantic heterogeneity of the data makes this a challenging task. Ontologies provide a formal framework to support querying and integration. This paper provides an ontology-based solution for querying distributed databases over service-oriented, model-driven infrastructures. PMID:22373043
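
    The core reformulation idea, finding join conditions between data sources by exploiting shared semantic annotations, can be illustrated in a few lines. The sketch below is a toy reconstruction; the annotation tuples and concept names are invented and do not reflect caGrid metadata structures.

      # Toy concept-based query reformulation: columns annotated with the same
      # ontology concept yield a join condition (mappings invented, not caGrid).
      ANNOTATIONS = {
          # (source, table, column) -> ontology concept
          ("pathology", "specimen", "histology_code"): "NCIt:Histology",
          ("genomics",  "sample",   "histology"):      "NCIt:Histology",
          ("genomics",  "sample",   "egfr_status"):    "NCIt:EGFR_Mutation",
      }

      def reformulate(concept_filter):
          """Find columns annotated with the concept and derive join conditions."""
          hits = [k for k, v in ANNOTATIONS.items() if v == concept_filter]
          joins = [
              f"{a[0]}.{a[1]}.{a[2]} = {b[0]}.{b[1]}.{b[2]}"
              for i, a in enumerate(hits) for b in hits[i + 1:]
          ]
          return hits, joins

      cols, joins = reformulate("NCIt:Histology")
      print(cols)   # columns in both sources that carry the concept
      print(joins)  # join condition discovered via the shared annotation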

  18. The neurobiology of syntax: beyond string sets.

    PubMed

    Petersson, Karl Magnus; Hagoort, Peter

    2012-07-19

    The human capacity to acquire language is an outstanding scientific challenge to understand. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty.

  19. The neurobiology of syntax: beyond string sets

    PubMed Central

    Petersson, Karl Magnus; Hagoort, Peter

    2012-01-01

    The human capacity to acquire language is an outstanding scientific challenge to understand. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty. PMID:22688633

  20. Plant growth modelling and applications: the increasing importance of plant architecture in growth models.

    PubMed

    Fourcaud, Thierry; Zhang, Xiaopeng; Stokes, Alexia; Lambers, Hans; Körner, Christian

    2008-05-01

    Modelling plant growth allows us to test hypotheses and carry out virtual experiments concerning plant growth processes that could otherwise take years in field conditions. The visualization of growth simulations allows us to see directly and vividly the outcome of a given model and provides us with an instructive tool useful for agronomists and foresters, as well as for teaching. Functional-structural (FS) plant growth models are nowadays particularly important for integrating biological processes with environmental conditions in 3-D virtual plants, and provide the basis for more advanced research in plant sciences. In this viewpoint paper, we ask the following questions. Are we modelling the correct processes that drive plant growth, and is growth driven mostly by sink or source activity? In current models, is the importance of soil resources (nutrients, water, temperature and their interaction with meristematic activity) considered adequately? Do classic models account for architectural adjustment as well as integrating the fundamental principles of development? Whilst answering these questions with the available data in the literature, we put forward the opinion that plant architecture and sink activity must be pushed to the centre of plant growth models. In natural conditions, sinks will more often drive growth than source activity, because sink activity is often controlled by finite soil resources or developmental constraints. This viewpoint paper also serves as an introduction to this Special Issue devoted to plant growth modelling, which includes new research covering areas stretching from cell growth to biomechanics. All papers were presented at the Second International Symposium on Plant Growth Modeling, Simulation, Visualization and Applications (PMA06), held in Beijing, China, from 13-17 November, 2006. Although a large number of papers are devoted to FS models of agricultural and forest crop species, physiological and genetic processes have recently been included and point the way to a new direction in plant modelling research.

  1. Integrating cortico-limbic-basal ganglia architectures for learning model-based and model-free navigation strategies

    PubMed Central

    Khamassi, Mehdi; Humphries, Mark D.

    2012-01-01

    Behavior in spatial navigation is often organized into map-based (place-driven) vs. map-free (cue-driven) strategies; behavior in operant conditioning research is often organized into goal-directed vs. habitual strategies. Here we attempt to unify the two. We review one powerful theory for distinct forms of learning during instrumental conditioning, namely model-based (maintaining a representation of the world) and model-free (reacting to immediate stimuli) learning algorithms. We extend these lines of argument to propose an alternative taxonomy for spatial navigation, showing how various previously identified strategies can be distinguished as “model-based” or “model-free” depending on the usage of information and not on the type of information (e.g., cue vs. place). We argue that identifying “model-free” learning with dorsolateral striatum and “model-based” learning with dorsomedial striatum could reconcile numerous conflicting results in the spatial navigation literature. From this perspective, we further propose that the ventral striatum plays key roles in the model-building process. We propose that the core of the ventral striatum is positioned to learn the probability of action selection for every transition between states of the world. We further review suggestions that the ventral striatal core and shell are positioned to act as “critics” contributing to the computation of a reward prediction error for model-free and model-based systems, respectively. PMID:23205006
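
    The model-based/model-free distinction the authors build on can be made concrete with a toy example: a model-free learner caches action values from individual experiences, while a model-based learner plans over an internal representation of the transitions. The following sketch on a four-state chain world is illustrative only and is not the authors' model.

      # Model-free (cached Q-values) vs. model-based (planning) on a chain world.
      import random
      random.seed(0)

      N_STATES, GOAL, ACTIONS = 4, 3, (-1, +1)
      def step(s, a):
          s2 = min(max(s + a, 0), N_STATES - 1)
          return s2, (1.0 if s2 == GOAL else 0.0)

      # Model-free: values updated from immediate experience only.
      Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
      for _ in range(2000):
          s = random.randrange(N_STATES)
          a = random.choice(ACTIONS)
          s2, r = step(s, a)
          best_next = max(Q[(s2, b)] for b in ACTIONS)
          Q[(s, a)] += 0.1 * (r + 0.9 * best_next - Q[(s, a)])

      # Model-based: plan over an internal representation of the transitions.
      V = [0.0] * N_STATES
      for _ in range(50):
          V = [max(step(s, a)[1] + 0.9 * V[step(s, a)[0]] for a in ACTIONS)
               for s in range(N_STATES)]

      print([round(max(Q[(s, a)] for a in ACTIONS), 2) for s in range(N_STATES)])
      print([round(v, 2) for v in V])  # both converge toward similar values

    The two systems arrive at similar values here, but the model-based planner adapts immediately if the world changes, whereas the cached learner must re-experience transitions, which is the behavioral signature used to dissociate the two strategies.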

  2. Automation Hooks Architecture for Flexible Test Orchestration - Concept Development and Validation

    NASA Technical Reports Server (NTRS)

    Lansdowne, C. A.; Maclean, John R.; Winton, Chris; McCartney, Pat

    2011-01-01

    The Automation Hooks Architecture Trade Study for Flexible Test Orchestration sought a standardized data-driven alternative to conventional automated test programming interfaces. The study recommended composing the interface using multicast DNS (mDNS/SD) service discovery, Representational State Transfer (RESTful) Web Services, and Automatic Test Markup Language (ATML). We describe additional efforts to rapidly mature the Automation Hooks Architecture candidate interface definition by validating it in a broad spectrum of applications. These activities have allowed us to further refine our concepts and provide observations directed toward objectives of economy, scalability, versatility, performance, severability, maintainability, scriptability and others.

  3. Algorithms and architecture for multiprocessor based circuit simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deutsch, J.T.

    Accurate electrical simulation is critical to the design of high performance integrated circuits. Logic simulators can verify function and give first-order timing information. Switch level simulators are more effective at dealing with charge sharing than standard logic simulators, but cannot provide accurate timing information or discover DC problems. Delay estimation techniques and cell level simulation can be used in constrained design methods, but must be tuned for each application, and circuit simulation must still be used to generate the cell models. None of these methods has the guaranteed accuracy that many circuit designers desire, and none can provide detailed waveform information. Detailed electrical-level simulation can predict circuit performance if devices and parasitics are modeled accurately. However, the computational requirements of conventional circuit simulators make it impractical to simulate current large circuits. In this dissertation, the implementation of Iterated Timing Analysis (ITA), a relaxation-based technique for accurate circuit simulation, on a special-purpose multiprocessor is presented. The ITA method is an SOR-Newton, relaxation-based method which uses event-driven analysis and selective trace to exploit the temporal sparsity of the electrical network. Because event-driven selective trace techniques are employed, this algorithm lends itself to implementation on a data-driven computer.
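
    The event-driven selective-trace idea central to ITA can be sketched compactly: solutions are propagated through the network only where a value actually changed, so quiescent parts of the circuit cost nothing. The code below is a toy signal-flow illustration of that scheduling discipline, not the relaxation-based circuit simulator itself.

      # Sketch of event-driven selective trace: only nodes whose inputs changed
      # are re-evaluated, exploiting temporal sparsity (illustrative only).
      import heapq

      fanout = {"a": ["b"], "b": ["c"], "c": []}     # tiny signal-flow network
      gain   = {"b": 0.5, "c": 0.5}
      value  = {"a": 1.0, "b": 0.0, "c": 0.0}

      events = [(0.0, "a")]                          # (time, node to evaluate)
      while events:
          t, node = heapq.heappop(events)
          for succ in fanout[node]:
              new = gain[succ] * value[node]
              if abs(new - value[succ]) > 1e-6:      # schedule only on real change
                  value[succ] = new
                  heapq.heappush(events, (t + 0.1, succ))

      print(value)  # {'a': 1.0, 'b': 0.5, 'c': 0.25}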

  4. Model-Driven Useware Engineering

    NASA Astrophysics Data System (ADS)

    Meixner, Gerrit; Seissler, Marc; Breiner, Kai

    User-oriented hardware and software development relies on a systematic development process based on a comprehensive analysis focusing on the users' requirements and preferences. Such a development process calls for the integration of numerous disciplines, from psychology and ergonomics to computer sciences and mechanical engineering. Hence, a correspondingly interdisciplinary team must be equipped with suitable software tools to allow it to handle the complexity of a multimodal and multi-device user interface development approach. An abstract, model-based development approach seems to be adequate for handling this complexity. This approach comprises different levels of abstraction requiring adequate tool support. Thus, in this chapter, we present the current state of our model-based software tool chain. We introduce the use model as the core model of our model-based process, transformation processes, and a model-based architecture, and we present different software tools that provide support for creating and maintaining the models or performing the necessary model transformations.

  5. Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach

    NASA Astrophysics Data System (ADS)

    Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.

    Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.
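
    The essence of the transformation, projecting a global collaborative process onto one enterprise's interface process, can be shown with a toy model: keep, in global order, exactly the interactions in which the role participates. The sketch below is a simplification for illustration; the real method operates on process model elements under the MDA, not on tuples.

      # Toy projection of a global collaborative process onto one partner's
      # interface process (role names and messages invented for illustration).
      COLLABORATIVE_PROCESS = [
          ("Buyer",   "Seller",  "PurchaseOrder"),
          ("Seller",  "Shipper", "ShippingRequest"),
          ("Shipper", "Buyer",   "DeliveryNotice"),
          ("Seller",  "Buyer",   "Invoice"),
      ]

      def interface_process(role):
          """Each send/receive the role takes part in, kept in global order."""
          steps = []
          for sender, receiver, message in COLLABORATIVE_PROCESS:
              if sender == role:
                  steps.append(("send", message, receiver))
              elif receiver == role:
                  steps.append(("receive", message, sender))
          return steps

      for s in interface_process("Seller"):
          print(s)
      # ('receive', 'PurchaseOrder', 'Buyer'), ('send', 'ShippingRequest', ...), ...

    Because every partner's interface process is derived from the same global model, the generated processes are complementary by construction, which is the interoperability guarantee the abstract mentions.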

  6. Molecular graph convolutions: moving beyond fingerprints

    NASA Astrophysics Data System (ADS)

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
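
    A minimal flavor of a graph convolution, aggregating each atom's features with those of its bonded neighbors before a learned projection, is sketched below with numpy. This is a generic simplified operator for illustration, not the exact architecture of the paper.

      # One simplified graph-convolution layer over a toy molecular graph.
      import numpy as np

      # Toy 3-atom molecule: bonds 0-1 and 1-2 (adjacency matrix A).
      A = np.array([[0, 1, 0],
                    [1, 0, 1],
                    [0, 1, 0]], dtype=float)
      X = np.array([[6, 4],      # per-atom features, e.g. atomic number, valence
                    [6, 4],
                    [8, 2]], dtype=float)

      A_hat = A + np.eye(3)                       # keep each atom's own features
      D_inv = np.diag(1.0 / A_hat.sum(axis=1))    # normalize by neighborhood size
      W = np.random.default_rng(0).normal(size=(2, 8))   # learned weights (random here)

      H = np.maximum(0, D_inv @ A_hat @ X @ W)    # aggregate neighbors, then ReLU
      mol_descriptor = H.sum(axis=0)              # order-invariant molecule readout
      print(mol_descriptor.shape)                 # (8,) learned-style fingerprint

    Unlike a fixed fingerprint, the weights W are learned from data, which is the data-driven property the abstract contrasts with hand-designed representations.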

  7. Molecular graph convolutions: moving beyond fingerprints.

    PubMed

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph (atoms, bonds, distances, etc.) which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  8. A practical approach for active camera coordination based on a fusion-driven multi-agent system

    NASA Astrophysics Data System (ADS)

    Bustamante, Alvaro Luis; Molina, José M.; Patricio, Miguel A.

    2014-04-01

    In this paper, we propose a multi-agent system architecture to manage spatially distributed active (or pan-tilt-zoom) cameras. Traditional video surveillance algorithms are of no use for active cameras, and we have to look at different approaches. Such multi-sensor surveillance systems have to be designed to solve two related problems: data fusion and coordinated sensor-task management. Generally, architectures proposed for the coordinated operation of multiple cameras are based on the centralisation of management decisions at the fusion centre. However, the existence of intelligent sensors capable of decision making brings with it the possibility of conceiving alternative decentralised architectures. This problem is approached by means of a MAS, integrating data fusion as an integral part of the architecture for distributed coordination purposes. This paper presents the MAS architecture and system agents.

  9. A Demand-Driven Approach for a Multi-Agent System in Supply Chain Management

    NASA Astrophysics Data System (ADS)

    Kovalchuk, Yevgeniya; Fasli, Maria

    This paper presents the architecture of a multi-agent decision support system for Supply Chain Management (SCM) which has been designed to compete in the TAC SCM game. The behaviour of the system is demand-driven and the agents plan, predict, and react dynamically to changes in the market. The main strength of the system lies in the ability of the Demand agent to predict customer winning bid prices - the highest prices the agent can offer customers and still obtain their orders. This paper investigates the effect of the ability to predict customer order prices on the overall performance of the system. Four strategies are proposed and compared for predicting such prices. The experimental results reveal which strategies are better and show that there is a correlation between the accuracy of the models' predictions and the overall system performance: the more accurate the prediction of customer order prices, the higher the profit.

  10. Metadata-Driven SOA-Based Application for Facilitation of Real-Time Data Warehousing

    NASA Astrophysics Data System (ADS)

    Pintar, Damir; Vranić, Mihaela; Skočir, Zoran

    Service-oriented architecture (SOA) has already been widely recognized as an effective paradigm for achieving integration of diverse information systems. SOA-based applications can cross boundaries of platforms, operating systems and proprietary data standards, commonly through the usage of Web Services technology. On the other hand, metadata is also commonly referred to as a potential integration tool, given that standardized metadata objects can provide useful information about the specifics of unknown information systems with which one is interested in communicating, using an approach commonly called "model-based integration". This paper presents the result of research regarding possible synergy between those two integration facilitators. This is accomplished with a vertical example of a metadata-driven SOA-based business process that provides ETL (Extraction, Transformation and Loading) and metadata services to a data warehousing system in need of real-time ETL support.

  11. Real-time computing platform for spiking neurons (RT-spike).

    PubMed

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
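
    The modeling point, that each input spike opens a conductance which decays with a synaptic time constant and therefore injects charge gradually, can be sketched in a few lines of time-stepped Python. The parameters and single-neuron scope are illustrative; the platform described above implements this pipeline in parallel hardware.

      # Time-stepped sketch of a neuron with an input-driven conductance synapse
      # (illustrative parameters; not the RT-Spike hardware pipeline).
      import math

      dt, tau_syn, tau_mem = 0.1, 2.0, 10.0   # ms
      v, g, v_thresh = 0.0, 0.0, 1.0
      decay_g = math.exp(-dt / tau_syn)
      decay_v = math.exp(-dt / tau_mem)

      input_spikes = {5.0, 6.0, 7.0, 8.0}     # ms, a short input burst
      out_spikes = []
      for step in range(400):
          t = step * dt
          if round(t, 1) in input_spikes:
              g += 0.5                        # spike opens synaptic conductance
          g *= decay_g                        # charge injected gradually, not at once
          v = v * decay_v + g * dt            # leaky membrane integration
          if v >= v_thresh:
              out_spikes.append(t)
              v = 0.0                         # reset after firing

      print(out_spikes)                       # fires once the burst integrates to threshold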

  12. Integration of Enzymes in Polyaniline-Sensitized 3D Inverse Opal TiO2 Architectures for Light-Driven Biocatalysis and Light-to-Current Conversion.

    PubMed

    Riedel, Marc; Lisdat, Fred

    2018-01-10

    Inspired by natural photosynthesis, coupling of artificial light-sensitive entities with biocatalysts in a biohybrid format can result in advanced photobioelectronic systems. Herein, we report on the integration of sulfonated polyanilines (PMSA1) and PQQ-dependent glucose dehydrogenase (PQQ-GDH) into inverse opal TiO2 (IO-TiO2) electrodes. While PMSA1 introduces sensitivity for visible light into the biohybrid architecture and ensures the efficient wiring between the IO-TiO2 electrode and the biocatalytic entity, PQQ-GDH provides the catalytic activity for the glucose oxidation and therefore feeds the light-driven reaction with electrons for an enhanced light-to-current conversion. Here, the IO-TiO2 electrodes with pores of around 650 nm provide a suitable interface and morphology needed for the stable and functional assembly of polymer and enzyme. The IO-TiO2 electrodes have been prepared by a template approach applying spin coating, allowing an easy scalability of the electrode height and surface area. The successful integration of the polymer and the enzyme is confirmed by the generation of an anodic photocurrent, showing an enhanced magnitude with increasing glucose concentrations. Compared to flat and nanostructured TiO2 electrodes, the three-layered IO-TiO2 electrodes give access to a 24-fold and 29-fold higher glucose-dependent photocurrent due to the higher polymer and enzyme loading in IO films. The three-dimensional IO-TiO2|PMSA1|PQQ-GDH architecture reaches maximum photocurrent densities of 44.7 ± 6.5 μA cm⁻² at low potentials in the presence of glucose (for a three TiO2 layer arrangement). The onset potential for the light-driven substrate oxidation is found to be at -0.315 V vs Ag/AgCl (1 M KCl) under illumination with 100 mW cm⁻², which is more negative than the redox potential of the enzyme. The results demonstrate the advantageous properties of IO-TiO2|PMSA1|PQQ-GDH biohybrid architectures for the light-driven glucose conversion with improved performance.

  13. Archetype Model-Driven Development Framework for EHR Web System.

    PubMed

    Kobayashi, Shinji; Kimura, Eizen; Ishihara, Ken

    2013-12-01

    This article describes the Web application framework for Electronic Health Records (EHRs) we have developed to reduce construction costs for EHR systems. The openEHR project has developed a clinical model driven architecture for future-proof interoperable EHR systems. This project provides the specifications to standardize clinical domain model implementations, upon which the ISO/CEN 13606 standards are based. The reference implementation has been formally described in Eiffel, and C# and Java reference implementations have also been developed. Although scripting languages have become more popular in recent years because of their development efficiency and speed, they had not been involved in the openEHR implementations. Since 2007, we have used the Ruby language and Ruby on Rails (RoR) as an agile development platform to implement EHR systems in conformity with the openEHR specifications. We implemented almost all of the specifications, the Archetype Definition Language parser, and an RoR scaffold generator from archetypes. Although some problems have emerged, most of them have been resolved. We have provided an agile EHR Web framework which can build up Web systems from archetype models using RoR. The feasibility of the archetype model to provide semantic interoperability of EHRs has been demonstrated, and we have verified that it is suitable for the construction of EHR systems.

  14. Retrosynthetic Reaction Prediction Using Neural Sequence-to-Sequence Models

    PubMed Central

    2017-01-01

    We describe a fully data driven model that learns to perform a retrosynthetic reaction prediction task, which is treated as a sequence-to-sequence mapping problem. The end-to-end trained model has an encoder–decoder architecture that consists of two recurrent neural networks, which has previously shown great success in solving other sequence-to-sequence prediction tasks such as machine translation. The model is trained on 50,000 experimental reaction examples from the United States patent literature, which span 10 broad reaction types that are commonly used by medicinal chemists. We find that our model performs comparably with a rule-based expert system baseline model, and also overcomes certain limitations associated with rule-based expert systems and with any machine learning approach that contains a rule-based expert system component. Our model provides an important first step toward solving the challenging problem of computational retrosynthetic analysis. PMID:29104927
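
    Structurally, the model described is an encoder RNN that summarizes the product sequence into a hidden state, and a decoder RNN that emits reactant tokens conditioned on it. The PyTorch skeleton below shows just that shape; the vocabulary size and dimensions are placeholders, and tokenization, attention, and the training loop are omitted, so this is a sketch rather than the paper's model.

      # Bare encoder-decoder skeleton for sequence-to-sequence prediction.
      import torch
      import torch.nn as nn

      VOCAB, EMB, HID = 64, 32, 128   # placeholder sizes (e.g. SMILES characters)

      class Seq2Seq(nn.Module):
          def __init__(self):
              super().__init__()
              self.embed = nn.Embedding(VOCAB, EMB)
              self.encoder = nn.GRU(EMB, HID, batch_first=True)
              self.decoder = nn.GRU(EMB, HID, batch_first=True)
              self.out = nn.Linear(HID, VOCAB)

          def forward(self, product_tokens, reactant_tokens):
              _, h = self.encoder(self.embed(product_tokens))   # summarize product
              dec, _ = self.decoder(self.embed(reactant_tokens), h)
              return self.out(dec)          # per-position reactant-token logits

      model = Seq2Seq()
      product = torch.randint(0, VOCAB, (8, 40))    # batch of 8, length 40
      reactants = torch.randint(0, VOCAB, (8, 50))
      logits = model(product, reactants)
      print(logits.shape)                  # torch.Size([8, 50, 64])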

  15. Nanomechanical architecture of semiconductor nanomembranes.

    PubMed

    Huang, Minghuang; Cavallo, Francesca; Liu, Feng; Lagally, Max G

    2011-01-01

    Semiconductor nanomembranes are single-crystal sheets with thickness ranging from 5 to 500 nm. They are flexible, bondable, and mechanically ultra-compliant. They present a new platform to combine bottom-up and top-down semiconductor processing to fabricate various three-dimensional (3D) nanomechanical architectures, with an unprecedented level of control. The bottom-up part is the self-assembly, via folding, rolling, bending, curling, or other forms of shape change of the nanomembranes, with top-down patterning providing the starting point for these processes. The self-assembly to form 3D structures is driven by elastic strain relaxation. A variety of structures, including tubes, rings, coils, rolled-up "rugs", and periodic wrinkles, has been made by such self-assembly. Their geometry and unique properties suggest many potential applications. In this review, we describe the design of desired nanostructures based on continuum mechanics modelling, definition and fabrication of 2D strained nanomembranes according to the established design, and release of the 2D strained sheet into a 3D or quasi-3D object. We also describe several materials properties of nanomechanical architectures. We discuss potential applications of nanomembrane technology to implement simple and hybrid functionalities.

  16. Concept of operations for knowledge discovery from Big Data across enterprise data warehouses

    NASA Astrophysics Data System (ADS)

    Sukumar, Sreenivas R.; Olama, Mohammed M.; McNair, Allen W.; Nutaro, James J.

    2013-05-01

    The success of data-driven business in government, science, and private industry is driving the need for seamless integration of intra and inter-enterprise data sources to extract knowledge nuggets in the form of correlations, trends, patterns and behaviors previously not discovered due to physical and logical separation of datasets. Today, as volume, velocity, variety and complexity of enterprise data keeps increasing, the next generation analysts are facing several challenges in the knowledge extraction process. Towards addressing these challenges, data-driven organizations that rely on the success of their analysts have to make investment decisions for sustainable data/information systems and knowledge discovery. Options that organizations are considering are newer storage/analysis architectures, better analysis machines, redesigned analysis algorithms, collaborative knowledge management tools, and query builders amongst many others. In this paper, we present a concept of operations for enabling knowledge discovery that data-driven organizations can leverage towards making their investment decisions. We base our recommendations on the experience gained from integrating multi-agency enterprise data warehouses at the Oak Ridge National Laboratory to design the foundation of future knowledge nurturing data-system architectures.

  17. Achieving High Performance With TCP Over 40 GbE on NUMA Architectures for CMS Data Acquisition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bawej, Tomasz; et al.

    2014-01-01

    TCP and the socket abstraction have barely changed over the last two decades, but at the network layer there has been a giant leap from a few megabits to 100 gigabits in bandwidth. At the same time, CPU architectures have evolved into the multicore era and applications are expected to make full use of all available resources. Applications in the data acquisition domain based on the standard socket library running in a Non-Uniform Memory Access (NUMA) architecture are unable to reach full efficiency and scalability without the software being adequately aware of the IRQ (Interrupt Request), CPU and memory affinities. During the first long shutdown of LHC, the CMS DAQ system is going to be upgraded for operation from 2015 onwards and a new software component has been designed and developed in the CMS online framework for transferring data with sockets. This software attempts to wrap the low-level socket library to ease higher-level programming with an API based on an asynchronous event driven model similar to the DAT uDAPL API. It is an event-based application with NUMA optimizations, that allows for a high throughput of data across a large distributed system. This paper describes the architecture, the technologies involved and the performance measurements of the software in the context of the CMS distributed event building.
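
    The affinity point generalizes beyond this framework: on a NUMA host, the receiving process should be pinned to cores on the NUMA node that owns the NIC and its buffers. A minimal Linux-only sketch follows; the core numbers are hypothetical, and a real system would also steer IRQ affinities and tune further socket options.

      # Sketch: pin the receiver to the NIC's NUMA node before serving the socket.
      # Linux-only (os.sched_setaffinity); core numbers are illustrative.
      import os
      import socket

      NIC_NUMA_CORES = {0, 2, 4, 6}           # cores local to the NIC, hypothetically

      os.sched_setaffinity(0, NIC_NUMA_CORES) # keep buffers and handlers node-local

      srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 16 * 1024 * 1024)
      srv.bind(("0.0.0.0", 9000))
      srv.listen(1)
      print("affinity:", os.sched_getaffinity(0))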

  18. Space Launch System Ascent Flight Control Design

    NASA Technical Reports Server (NTRS)

    VanZwieten, Tannen S.; Orr, Jeb S.; Wall, John H.; Hall, Charles E.

    2014-01-01

    A robust and flexible autopilot architecture for NASA's Space Launch System (SLS) family of launch vehicles is presented. As the SLS configurations represent a potentially significant increase in complexity and performance capability of the integrated flight vehicle, it was recognized early in the program that a new, generalized autopilot design should be formulated to fulfill the needs of this new space launch architecture. The present design concept is intended to leverage existing NASA and industry launch vehicle design experience and maintain the extensibility and modularity necessary to accommodate multiple vehicle configurations while relying on proven and flight-tested control design principles for large boost vehicles. The SLS flight control architecture combines a digital three-axis autopilot with traditional bending filters to support robust active or passive stabilization of the vehicle's bending and sloshing dynamics using optimally blended measurements from multiple rate gyros on the vehicle structure. The algorithm also relies on a pseudo-optimal control allocation scheme to maximize the performance capability of multiple vectored engines while accommodating throttling and engine failure contingencies in real time with negligible impact to stability characteristics. The architecture supports active in-flight load relief through the use of a nonlinear observer driven by acceleration measurements, and envelope expansion and robustness enhancement is obtained through the use of a multiplicative forward gain modulation law based upon a simple model reference adaptive control scheme.

  19. Space Launch System Ascent Flight Control Design

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Wall, John H.; VanZwieten, Tannen S.; Hall, Charles E.

    2014-01-01

    A robust and flexible autopilot architecture for NASA's Space Launch System (SLS) family of launch vehicles is presented. The SLS configurations represent a potentially significant increase in complexity and performance capability when compared with other manned launch vehicles. It was recognized early in the program that a new, generalized autopilot design should be formulated to fulfill the needs of this new space launch architecture. The present design concept is intended to leverage existing NASA and industry launch vehicle design experience and maintain the extensibility and modularity necessary to accommodate multiple vehicle configurations while relying on proven and flight-tested control design principles for large boost vehicles. The SLS flight control architecture combines a digital three-axis autopilot with traditional bending filters to support robust active or passive stabilization of the vehicle's bending and sloshing dynamics using optimally blended measurements from multiple rate gyros on the vehicle structure. The algorithm also relies on a pseudo-optimal control allocation scheme to maximize the performance capability of multiple vectored engines while accommodating throttling and engine failure contingencies in real time with negligible impact to stability characteristics. The architecture supports active in-flight disturbance compensation through the use of nonlinear observers driven by acceleration measurements. Envelope expansion and robustness enhancement is obtained through the use of a multiplicative forward gain modulation law based upon a simple model reference adaptive control scheme.
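
    The "multiplicative forward gain modulation law based upon a simple model reference adaptive control scheme" can be illustrated, in drastically simplified scalar form, with the classic MIT rule: an adaptive gain scales the command so that a first-order plant tracks a reference model. The sketch below is a textbook toy under those assumptions, not the SLS law.

      # Toy MIT-rule adaptation of a multiplicative forward gain.
      dt, gamma = 0.01, 0.5
      kp, km = 2.0, 1.0          # unknown plant gain vs. reference-model gain
      y, ym, theta = 0.0, 0.0, 0.2

      for step in range(5000):
          r = 1.0                            # step command
          u = theta * r                      # multiplicative forward gain
          y += dt * (-y + kp * u)            # plant:  y' = -y + kp*u
          ym += dt * (-ym + km * r)          # model: ym' = -ym + km*r
          e = y - ym
          theta += dt * (-gamma * e * ym)    # MIT rule drives e -> 0

      print(round(theta, 3))   # approaches km/kp = 0.5, so the plant tracks the model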

  20. Model-Driven Development for scientific computing. An upgrade of the RHEEDGr program

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej

    2009-11-01

    Model-Driven Engineering (MDE) is the software engineering discipline that treats models as the primary artifacts of software development, maintenance, and evolution, through model transformation. Model-Driven Architecture (MDA) is the approach to software development under the Model-Driven Engineering framework. This paper surveys the core MDA technology that was used to upgrade the RHEEDGr program to the C++0x language standard.
    New version program summary
    Program title: RHEEDGR-09
    Catalogue identifier: ADUY_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUY_v3_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 21 263
    No. of bytes in distributed program, including test data, etc.: 1 266 982
    Distribution format: tar.gz
    Programming language: Code Gear C++ Builder
    Computer: Intel Core Duo-based PC
    Operating system: Windows XP, Vista, 7
    RAM: more than 1 MB
    Classification: 4.3, 7.2, 6.2, 8, 14
    Does the new version supersede the previous version?: Yes
    Nature of problem: Reflection High-Energy Electron Diffraction (RHEED) is a very useful technique for studying growth and surface analysis of thin epitaxial structures prepared by Molecular Beam Epitaxy (MBE). The RHEED technique can reveal, almost instantaneously, changes either in the coverage of the sample surface by adsorbates or in the surface structure of a thin film.
    Solution method: The calculations are based on the use of a dynamical diffraction theory in which the electrons are taken to be diffracted by a potential that is periodic in the dimension perpendicular to the surface.
    Reasons for new version: Responding to user feedback, the graphical version of the RHEED program has been upgraded to the C++0x language standard. The functionality and documentation of the program have also been improved.
    Summary of revisions: Model-Driven Architecture (MDA) is the approach defined by the Object Management Group (OMG) for software development under the Model-Driven Engineering framework [1]. The MDA approach shifts the focus of software development from writing code to building models; by adopting a model-centric approach, MDA aims to automate the generation of system implementation artifacts directly from the model. Three models form the core of MDA: (i) the Computation Independent Model (CIM), which focuses on the basic requirements of the system; (ii) the Platform Independent Model (PIM), which is used by software architects and designers and focuses on the operational capabilities of a system outside the context of a specific platform; and (iii) the Platform Specific Model (PSM), which is used by software developers and programmers and includes details relating to the system for a specific platform. Basic requirements for the calculation of the RHEED intensity rocking curves in the one-beam condition have been described in Ref. [2]. Fig. 1 shows the PIM for the present version of the program, and Fig. 2 presents the PSM. The TGraph2D.bpk package has been recompiled as Graph2D0x.bpl and upgraded to the C++0x language standard. Fig. 3 shows the PSM of the Graph2D component, which is now manifested by the Graph2D0x.bpl package; this diagram is a graphic presentation of the static view, showing a collection of declarative model elements and their relationships. Installation instructions for the Graph2D0x package can be found in the new distribution. The program requires the user to provide the appropriate parameters for the crystal structure under investigation; these parameters are loaded from the parameters.ini file at run-time, and instructions for preparing the .ini files can be found in the new distribution. The program enables carrying out one-dimensional dynamical calculations for the fcc lattice with a two-atom basis and for the fcc lattice with a one-atom basis, but the zeroth Fourier component of the scattering potential in the TRHEED1D::crystPotUg() function can be modified according to users' specific application requirements. A graphical user interface (GUI) for the program has been reconstructed. The program has been compiled with English/USA regional and language options.
    Unusual features: The program is distributed in the form of the main projects RHEEDGr_09.cbproj and Graph2D0x.cbproj with associated files, and should be compiled using Code Gear C++ Builder 2009 compilers.
    Running time: The typical running time is machine- and user-parameter dependent.
    References: [1] OMG, Model Driven Architecture Guide Version 1.0.1, 2003, http://www.omg.org/cgi-bin/doc?omg/03-06-01. [2] A. Daniluk, Comput. Phys. Comm. 166 (2005) 123.

  1. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1991-01-01

    The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  2. Solvable Family of Driven-Dissipative Many-Body Systems.

    PubMed

    Foss-Feig, Michael; Young, Jeremy T; Albert, Victor V; Gorshkov, Alexey V; Maghrebi, Mohammad F

    2017-11-10

    Exactly solvable models have played an important role in establishing the sophisticated modern understanding of equilibrium many-body physics. Conversely, the relative scarcity of solutions for nonequilibrium models greatly limits our understanding of systems away from thermal equilibrium. We study a family of nonequilibrium models, some of which can be viewed as dissipative analogues of the transverse-field Ising model, in that an effectively classical Hamiltonian is frustrated by dissipative processes that drive the system toward states that do not commute with the Hamiltonian. Surprisingly, a broad and experimentally relevant subset of these models can be solved efficiently. We leverage these solutions to compute the effects of decoherence on a canonical trapped-ion-based quantum computation architecture, and to prove a no-go theorem on steady-state phase transitions in a many-body model that can be realized naturally with Rydberg atoms or trapped ions.
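
    For readers outside the field, the driven-dissipative competition described here is conventionally modeled by a Lindblad master equation; its standard form is shown below (the specific Hamiltonian and jump operators of the paper's model family are not reproduced here):

      \dot{\rho} = -\frac{i}{\hbar}\,[H,\rho]
                   + \sum_k \left( L_k\,\rho\,L_k^{\dagger}
                   - \tfrac{1}{2}\,\{ L_k^{\dagger} L_k,\ \rho \} \right)

    Here H is the (effectively classical) Hamiltonian and the jump operators L_k encode the dissipative processes; the frustration described above arises because the L_k drive the system toward states that do not commute with H.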

  3. Solvable Family of Driven-Dissipative Many-Body Systems

    NASA Astrophysics Data System (ADS)

    Foss-Feig, Michael; Young, Jeremy T.; Albert, Victor V.; Gorshkov, Alexey V.; Maghrebi, Mohammad F.

    2017-11-01

    Exactly solvable models have played an important role in establishing the sophisticated modern understanding of equilibrium many-body physics. Conversely, the relative scarcity of solutions for nonequilibrium models greatly limits our understanding of systems away from thermal equilibrium. We study a family of nonequilibrium models, some of which can be viewed as dissipative analogues of the transverse-field Ising model, in that an effectively classical Hamiltonian is frustrated by dissipative processes that drive the system toward states that do not commute with the Hamiltonian. Surprisingly, a broad and experimentally relevant subset of these models can be solved efficiently. We leverage these solutions to compute the effects of decoherence on a canonical trapped-ion-based quantum computation architecture, and to prove a no-go theorem on steady-state phase transitions in a many-body model that can be realized naturally with Rydberg atoms or trapped ions.

  4. Sensor fusion IV: Control paradigms and data structures; Proceedings of the Meeting, Boston, MA, Nov. 12-15, 1991

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Various papers on control paradigms and data structures in sensor fusion are presented. The general topics addressed include: decision models and computational methods, sensor modeling and data representation, active sensing strategies, geometric planning and visualization, task-driven sensing, motion analysis, models motivated by biology and psychology, decentralized detection and distributed decision, data fusion architectures, robust estimation of shapes and features, and application and implementation. Some of the individual subjects considered are: the Firefly experiment on neural networks for distributed sensor data fusion, manifold traversing as a model for learning control of autonomous robots, choice of coordinate systems for multiple sensor fusion, continuous motion using task-directed stereo vision, interactive and cooperative sensing and control for advanced teleoperation, knowledge-based imaging for terrain analysis, and physical and digital simulations for IVA robotics.

  5. CLARA: CLAS12 Reconstruction and Analysis Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gyurjyan, Vardan; Matta, Sebastian Mancilla; Oyarzun, Ricardo

    2016-11-01

    In this paper we present the SOA-based CLAS12 event Reconstruction and Analysis (CLARA) framework. The CLARA design focuses on two main traits: real-time data stream processing, and service-oriented architecture (SOA) in a flow-based programming (FBP) paradigm. The data-driven, data-centric architecture of CLARA provides an environment for developing agile, elastic, multilingual data processing applications. The CLARA framework presents solutions capable of processing large volumes of data interactively and substantially faster than batch systems.
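
    The flow-based programming idea at the heart of CLARA can be pictured as a chain of independent services, each consuming and producing a data stream. The sketch below is a generic Python illustration with invented service names and toy data; it is not the CLARA API.

      # Flow-based programming sketch: three independent "services" connected
      # into a pipeline by the event stream flowing between them.
      def decode(stream):                 # service 1: raw record -> event dict
          for raw in stream:
              yield {"event": raw, "hits": raw.count("x")}

      def track(stream):                  # service 2: add reconstructed tracks
          for evt in stream:
              evt["tracks"] = evt["hits"] // 2
              yield evt

      def histogram(stream):              # service 3: terminal analysis stage
          counts = {}
          for evt in stream:
              counts[evt["tracks"]] = counts.get(evt["tracks"], 0) + 1
          return counts

      raw_events = ["xx-x", "xxxx", "x---", "xx-x"]
      print(histogram(track(decode(iter(raw_events)))))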

  6. Wavy Architecture Thin-Film Transistor for Ultrahigh Resolution Flexible Displays.

    PubMed

    Hanna, Amir Nabil; Kutbee, Arwa Talal; Subedi, Ram Chandra; Ooi, Boon; Hussain, Muhammad Mustafa

    2018-01-01

    A novel wavy-shaped thin-film-transistor (TFT) architecture, capable of achieving 70% higher drive current per unit chip area when compared with planar conventional TFT architectures, is reported for flexible display applications. The transistor, due to its atypical architecture, does not alter the turn-on voltage or the OFF current values, leading to higher performance without compromising static power consumption. The concept behind this architecture is expanding the transistor's width vertically through grooved trenches in a structural layer deposited on a flexible substrate. Operation of zinc oxide (ZnO)-based TFTs is shown down to a bending radius of 5 mm with no degradation in the electrical performance or cracks in the gate stack. Finally, flexible low-power LEDs driven by the respective currents of the novel wavy and conventional coplanar architectures are demonstrated, where the novel architecture is able to drive the LED at 2 × the output power, 3 versus 1.5 mW, demonstrating its potential for ultrahigh resolution displays in an area-efficient manner. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Modeling the two-locus architecture of divergent pollinator adaptation: how variation in SAD paralogs affects fitness and evolutionary divergence in sexually deceptive orchids.

    PubMed

    Xu, Shuqing; Schlüter, Philipp M

    2015-01-01

    Divergent selection by pollinators can bring about strong reproductive isolation via changes at few genes of large effect. This has recently been demonstrated in sexually deceptive orchids, where studies (1) quantified the strength of reproductive isolation in the field; (2) identified genes that appear to be causal for reproductive isolation; and (3) demonstrated selection by analysis of natural variation in gene sequence and expression. In a group of closely related Ophrys orchids, specific floral scent components, namely n-alkenes, are the key floral traits that control specific pollinator attraction by chemical mimicry of insect sex pheromones. The genetic basis of species-specific differences in alkene production mainly lies in two biosynthetic genes encoding stearoyl-acyl carrier protein desaturases (SAD) that are associated with floral scent variation and reproductive isolation between closely related species, and evolve under pollinator-mediated selection. However, the implications of this genetic architecture of key floral traits on the evolutionary processes of pollinator adaptation and speciation in this plant group remain unclear. Here, we expand on these recent findings to model scenarios of adaptive evolutionary change at SAD2 and SAD5, their effects on plant fitness (i.e., offspring number), and the dynamics of speciation. Our model suggests that the two-locus architecture of reproductive isolation allows for rapid sympatric speciation by pollinator shift; however, the likelihood of such pollinator-mediated speciation is asymmetric between the two orchid species O. sphegodes and O. exaltata due to different fitness effects of their predominant SAD2 and SAD5 alleles. Our study not only provides insight into pollinator adaptation and speciation mechanisms of sexually deceptive orchids but also demonstrates the power of applying a modeling approach to the study of pollinator-driven ecological speciation.
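
    To make the modeling idea concrete, the following sketch iterates haplotype frequencies at the two loci under selection. All fitness values are invented for illustration; the paper's model is parameterized from Ophrys field data and additionally treats pollinator shifts.

      # Toy two-locus haplotype selection sketch (hypothetical fitness values;
      # "offspring number" plays the role of fitness, as in the study).
      haplotypes = ["SAD2hi-SAD5hi", "SAD2hi-SAD5lo", "SAD2lo-SAD5hi", "SAD2lo-SAD5lo"]
      freqs   = [0.25, 0.25, 0.25, 0.25]
      fitness = [0.70, 1.00, 0.90, 0.60]   # assumed offspring number per haplotype

      for _ in range(50):                  # 50 generations of selection
          mean_w = sum(f * w for f, w in zip(freqs, fitness))
          freqs = [f * w / mean_w for f, w in zip(freqs, fitness)]

      for h, f in zip(haplotypes, freqs):
          print(f"{h}: {f:.3f}")           # the fittest haplotype approaches fixation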

  8. Modeling and Simulation Reliable Spacecraft On-Board Computing

    NASA Technical Reports Server (NTRS)

    Park, Nohpill

    1999-01-01

    The proposed project will investigate modeling and simulation-driven testing and fault tolerance schemes for Spacecraft On-Board Computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities mentioned above, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast and cost-effective on-board computing system, which has been known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay) and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before a fault tolerance scheme is employed. Testing and fault tolerance strategies should be driven by accurate performance models (i.e., throughput, delay, reliability and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module and a module for fault tolerance, all of which interact through a central graphical user interface.
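
    As an example of the kind of reliability model that would drive such trade-offs, the sketch below compares a single on-board computer against triple modular redundancy using textbook formulas; it is a generic illustration, not the project's model.

      # Back-of-envelope reliability comparison: simplex vs. triple modular
      # redundancy (TMR) with majority voting (at least 2 of 3 modules work).
      from math import comb

      def simplex(r):
          return r

      def tmr(r):
          return sum(comb(3, k) * r**k * (1 - r)**(3 - k) for k in (2, 3))

      for r in (0.90, 0.99, 0.999):
          print(f"module R={r}: simplex={simplex(r):.6f}  TMR={tmr(r):.6f}")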

  9. A Satellite Data-Driven, Client-Server Decision Support Application for Agricultural Water Resources Management

    NASA Technical Reports Server (NTRS)

    Johnson, Lee F.; Maneta, Marco P.; Kimball, John S.

    2016-01-01

    Water cycle extremes such as droughts and floods present a challenge for water managers and for policy makers responsible for the administration of water supplies in agricultural regions. In addition to the inherent uncertainties associated with forecasting extreme weather events, water planners need to anticipate water demands and water user behavior in atypical circumstances. This requires the use of decision support systems capable of simulating agricultural water demand with the latest available data. Unfortunately, managers from local and regional agencies often use different datasets of variable quality, which complicates coordinated action. In previous work we have demonstrated novel methodologies to use satellite-based observational technologies, in conjunction with hydro-economic models and state-of-the-art data assimilation methods, to enable robust regional assessment and prediction of drought impacts on agricultural production, water resources, and land allocation. These methods create an opportunity for new, cost-effective analysis tools to support policy and decision-making over large spatial extents. The methods can be driven with information from existing satellite-derived operational products, such as the Satellite Irrigation Management Support system (SIMS) operational over California, the Cropland Data Layer (CDL), and using a modified light-use efficiency algorithm to retrieve crop yield from the synergistic use of MODIS and Landsat imagery. Here we present an integration of this modeling framework in a client-server architecture based on the Hydra platform. Assimilation and processing of resource-intensive remote sensing data, as well as hydrologic and other ancillary information, occur on the server side. This information is processed and summarized as attributes in water demand nodes that are part of a vector description of the water distribution network. With this architecture, our decision support system becomes a lightweight 'app' that connects to the server to retrieve the latest information regarding water demands, land use, yields and hydrologic information required to run different management scenarios. Furthermore, this architecture ensures all agencies and teams involved in water management use the same, up-to-date information in their simulations.
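
    On the client side, the 'app' pattern described above reduces to fetching server-side summaries over HTTP. The sketch below is hypothetical: the endpoint URL and JSON field names are invented, and the real system is built on the Hydra platform rather than this minimal interface.

      # Hypothetical thin-client sketch: retrieve per-node water demand
      # summaries that the server computed from satellite-derived products.
      import json
      from urllib import request

      SERVER = "https://example.org/api/demand_nodes"   # placeholder URL

      def latest_water_demands(region):
          with request.urlopen(f"{SERVER}?region={region}") as resp:
              nodes = json.load(resp)
          return {n["node_id"]: n["demand_m3"] for n in nodes}

      # demands = latest_water_demands("central_valley")  # requires a live server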

  10. A satellite data-driven, client-server decision support application for agricultural water resources management

    NASA Astrophysics Data System (ADS)

    Maneta, M. P.; Johnson, L.; Kimball, J. S.

    2016-12-01

    Water cycle extremes such as droughts and floods present a challenge for water managers and for policy makers responsible for the administration of water supplies in agricultural regions. In addition to the inherent uncertainties associated with forecasting extreme weather events, water planners need to anticipate water demands and water user behavior in atypical circumstances. This requires the use of decision support systems capable of simulating agricultural water demand with the latest available data. Unfortunately, managers from local and regional agencies often use different datasets of variable quality, which complicates coordinated action. In previous work we have demonstrated novel methodologies to use satellite-based observational technologies, in conjunction with hydro-economic models and state-of-the-art data assimilation methods, to enable robust regional assessment and prediction of drought impacts on agricultural production, water resources, and land allocation. These methods create an opportunity for new, cost-effective analysis tools to support policy and decision-making over large spatial extents. The methods can be driven with information from existing satellite-derived operational products, such as the Satellite Irrigation Management Support system (SIMS) operational over California, the Cropland Data Layer (CDL), and using a modified light-use efficiency algorithm to retrieve crop yield from the synergistic use of MODIS and Landsat imagery. Here we present an integration of this modeling framework in a client-server architecture based on the Hydra platform. Assimilation and processing of resource-intensive remote sensing data, as well as hydrologic and other ancillary information, occur on the server side. This information is processed and summarized as attributes in water demand nodes that are part of a vector description of the water distribution network. With this architecture, our decision support system becomes a lightweight 'app' that connects to the server to retrieve the latest information regarding water demands, land use, yields and hydrologic information required to run different management scenarios. Furthermore, this architecture ensures all agencies and teams involved in water management use the same, up-to-date information in their simulations.

  11. Behavior generation strategy of artificial behavioral system by self-learning paradigm for autonomous robot tasks

    NASA Astrophysics Data System (ADS)

    Dağlarli, Evren; Temeltaş, Hakan

    2008-04-01

    In this study, behavior generation and self-learning paradigms are investigated for real-time applications of multi-goal mobile robot tasks. The method is capable of generating new behaviors and combines them in order to achieve multi-goal tasks. The proposed method is composed of three layers: a Behavior Generating Module, a Coordination Level and an Emotion-Motivation Level. The last two levels use hidden Markov models to manage the dynamical structure of behaviors. The kinematic and dynamic models of the mobile robot with non-holonomic constraints are considered in the behavior-based control architecture. The proposed method is tested on a four-wheel-driven and four-wheel-steered mobile robot with constraints in a simulation environment, and results are obtained successfully.
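
    A minimal sketch of the hidden Markov model machinery such coordination levels rely on: a forward (filtering) step that updates the belief over behaviors as sensor observations arrive. The transition and emission probabilities below are invented for illustration, not the paper's trained model.

      import numpy as np

      # HMM forward recursion over two hypothetical behaviors.
      A = np.array([[0.8, 0.2],      # P(next behavior | current behavior)
                    [0.3, 0.7]])     # states: 0 = "explore", 1 = "goal-seek"
      B = np.array([[0.9, 0.1],      # P(observation | behavior)
                    [0.2, 0.8]])     # observations: 0 = "no target", 1 = "target seen"
      belief = np.array([0.5, 0.5])

      for obs in [0, 0, 1, 1]:       # incoming sensor stream
          belief = B[:, obs] * (A.T @ belief)   # predict, then weight by evidence
          belief /= belief.sum()
          print(obs, belief.round(3))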

  12. Molecular graph convolutions: moving beyond fingerprints

    PubMed Central

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-01-01

    Molecular “fingerprints” encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503
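
    A single graph-convolution step of the kind described can be sketched in a few lines: aggregate each atom's neighbor features through a self-loop-augmented, degree-normalized adjacency matrix, apply a learnable weight matrix, then a nonlinearity. The sketch below uses random weights and a toy 3-atom molecule; real models stack many such layers and use richer atom, bond and distance features.

      import numpy as np

      A = np.array([[0, 1, 0],                    # adjacency of a 3-atom chain
                    [1, 0, 1],
                    [0, 1, 0]], dtype=float)
      A_hat = A + np.eye(3)                       # include self-connections
      D_inv = np.diag(1.0 / A_hat.sum(axis=1))    # degree normalization
      H = np.random.rand(3, 4)                    # per-atom feature vectors
      W = np.random.rand(4, 8)                    # learnable layer weights

      H_next = np.maximum(0, D_inv @ A_hat @ H @ W)   # aggregate neighbors, ReLU
      print(H_next.shape)                             # (3, 8): new per-atom features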

  13. The CMIP5 archive architecture: A system for petabyte-scale distributed archival of climate model data

    NASA Astrophysics Data System (ADS)

    Pascoe, Stephen; Cinquini, Luca; Lawrence, Bryan

    2010-05-01

    The Phase 5 Coupled Model Intercomparison Project (CMIP5) will produce a petabyte scale archive of climate data relevant to future international assessments of climate science (e.g., the IPCC's 5th Assessment Report scheduled for publication in 2013). The infrastructure for the CMIP5 archive must meet many challenges to support this ambitious international project. We describe here the distributed software architecture being deployed worldwide to meet these challenges. The CMIP5 architecture extends the Earth System Grid (ESG) distributed architecture of Datanodes, providing data access and visualisation services, and Gateways providing the user interface including registration, search and browse services. Additional features developed for CMIP5 include a publication workflow incorporating quality control and metadata submission, data replication, version control, update notification and production of citable metadata records. Implementation of these features has been driven by the requirements of reliable global access to over 1 PB of data and consistent citability of data and metadata. Central to the implementation is the concept of Atomic Datasets that are identifiable through a Data Reference Syntax (DRS). Atomic Datasets are immutable to allow them to be replicated and tracked whilst maintaining data consistency. However, since occasional errors in data production and processing are inevitable, new versions can be published and users notified of these updates. As deprecated datasets may be the target of existing citations, they can remain visible in the system. Replication of Atomic Datasets is designed to improve regional access and provide fault tolerance. Several datanodes in the system are designated replicating nodes and hold replicas of a portion of the archive expected to be of broad interest to the community. Gateways provide a system-wide interface to users where they can track the version history and location of replicas to select the most appropriate location for download. In addition to meeting the immediate needs of CMIP5, this architecture provides a basis for the Earth System Modeling e-infrastructure being further developed within the EU FP7 IS-ENES project.
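
    The Data Reference Syntax makes dataset identity and versioning explicit. The sketch below parses a DRS-style dot-separated identifier into facets; the facet list is a simplified, illustrative subset of the CMIP5 DRS, not its full specification.

      # Resolve a DRS-style dataset identifier into named facets (simplified).
      FACETS = ["activity", "product", "institute", "model",
                "experiment", "frequency", "realm", "variable", "version"]

      def parse_drs(dataset_id):
          return dict(zip(FACETS, dataset_id.split(".")))

      ds = parse_drs("cmip5.output1.MOHC.HadGEM2-ES.rcp45.mon.atmos.tas.v20110101")
      print(ds["model"], ds["version"])   # versioning keeps deprecated data citable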

  14. Preneoplastic lesion growth driven by the death of adjacent normal stem cells

    PubMed Central

    Chao, Dennis L.; Eck, J. Thomas; Brash, Douglas E.; Maley, Carlo C.; Luebeck, E. Georg

    2008-01-01

    Clonal expansion of premalignant lesions is an important step in the progression to cancer. This process is commonly considered to be a consequence of sustaining a proliferative mutation. Here, we investigate whether the growth trajectory of clones can be better described by a model in which clone growth does not depend on a proliferative advantage. We developed a simple computer model of clonal expansion in an epithelium in which mutant clones can only colonize space left unoccupied by the death of adjacent normal stem cells. In this model, competition for space occurs along the frontier between mutant and normal territories, and both the shapes and the growth rates of lesions are governed by the differences between mutant and normal cells' replication or apoptosis rates. The behavior of this model of clonal expansion along a mutant clone's frontier, when apoptosis of both normal and mutant cells is included, matches the growth of UVB-induced p53-mutant clones in mouse dorsal epidermis better than a standard exponential growth model that does not include tissue architecture. The model predicts precancer cell mutation and death rates that agree with biological observations. These results support the hypothesis that clonal expansion of premalignant lesions can be driven by agents, such as ionizing or nonionizing radiation, that cause cell killing but do not directly stimulate cell replication. PMID:18815380
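
    The essence of the model, death-driven colonization along clone frontiers, fits in a few lines. The sketch below is a 1D toy analogue with invented rates, not the paper's 2D epidermis simulation.

      import random

      # 1D toy version: a mutant clone expands only by claiming sites where an
      # adjacent stem cell has just died; no direct proliferative advantage.
      N, STEPS = 200, 2000
      DEATH_NORMAL, DEATH_MUTANT = 0.05, 0.01   # illustrative per-step death rates
      cells = ["normal"] * N
      cells[N // 2] = "mutant"                  # seed a single mutant stem cell

      for _ in range(STEPS):
          i = random.randrange(N)
          rate = DEATH_NORMAL if cells[i] == "normal" else DEATH_MUTANT
          if random.random() < rate:
              # the vacated site is recolonized by a random neighbor's progeny,
              # so competition happens only along clone frontiers
              j = (i + random.choice([-1, 1])) % N
              cells[i] = cells[j]

      print("mutant clone size:", cells.count("mutant"))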

  15. International Space Station Electric Power System Performance Code-SPACE

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey; McKissock, David; Fincannon, James; Green, Robert; Kerslake, Thomas; Delleur, Ann; Follo, Jeffrey; Trudell, Jeffrey; Hoffman, David J.; Jannette, Anthony

    2005-01-01

    The System Power Analysis for Capability Evaluation (SPACE) software analyzes and predicts the minute-by-minute state of the International Space Station (ISS) electrical power system (EPS) for upcoming missions, as well as EPS power generation capacity as a function of ISS configuration and orbital conditions. In order to complete the Certification of Flight Readiness (CoFR) process, in which each mission is certified for flight, every ISS system must thoroughly assess each proposed mission to verify that the system will support the planned mission operations; SPACE is the sole tool used to conduct these assessments for power system capability. SPACE is an integrated power system model that incorporates a variety of modules tied together with integration routines and graphical output. The modules include orbit mechanics, solar array pointing/shadowing/thermal and electrical, battery performance, and power management and distribution performance. These modules are tightly integrated within a flexible architecture featuring data-file-driven configurations, source- or load-driven operation, and event scripting. SPACE also predicts the amount of power available for a given system configuration, spacecraft orientation, solar-array-pointing conditions, orbit, and the like. In the source-driven mode, the model must assure that energy balance is achieved, meaning that energy removed from the batteries must be restored (or balanced) each and every orbit. This entails an optimization scheme to ensure that energy balance is maintained without violating any other constraints.
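
    The orbit-by-orbit energy-balance constraint in source-driven mode can be illustrated with a toy calculation: shed load until the battery energy removed in eclipse can be restored in sunlight. All numbers below are invented and charge efficiency is ignored; SPACE's actual scheme is a full optimization over the integrated system model.

      # Toy orbit energy balance (illustrative numbers, not ISS values).
      def energy_balance_wh(solar_w, sun_min, eclipse_min, load_w):
          discharge = load_w * eclipse_min / 60.0                 # Wh drawn from batteries
          recharge = max(solar_w - load_w, 0.0) * sun_min / 60.0  # Wh available to restore
          return recharge - discharge                             # >= 0 means balanced

      load_w = 15_000.0
      while energy_balance_wh(18_000.0, 60.0, 32.0, load_w) < 0:  # ~92-minute orbit
          load_w -= 100.0                                         # shed load until balanced
      print(f"largest balanced load: {load_w:.0f} W")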

  16. System Architecture Development for Energy and Water Infrastructure Data Management and Geovisual Analytics

    NASA Astrophysics Data System (ADS)

    Berres, A.; Karthik, R.; Nugent, P.; Sorokine, A.; Myers, A.; Pang, H.

    2017-12-01

    Building an integrated data infrastructure that can meet the needs of sustainable energy-water resource management requires a robust data management and geovisual analytics platform capable of cross-domain scientific discovery and knowledge generation. Such a platform can facilitate the investigation of diverse complex research and policy questions for emerging priorities in Energy-Water Nexus (EWN) science areas. Using advanced data analytics, machine learning techniques, multi-dimensional statistical tools, and interactive geovisualization components, such a multi-layered federated platform, the Energy-Water Nexus Knowledge Discovery Framework (EWN-KDF), is being developed. This platform utilizes several enterprise-grade software design concepts and standards, such as extensible service-oriented architecture, open standard protocols, an event-driven programming model, an enterprise service bus, and adaptive user interfaces, to provide strategic value to the integrative computational and data infrastructure. EWN-KDF is built on the Compute and Data Environment for Science (CADES) at Oak Ridge National Laboratory (ORNL).

  17. Architectural-level power estimation and experimentation

    NASA Astrophysics Data System (ADS)

    Ye, Wu

    With the emergence of a plethora of embedded and portable applications and ever increasing integration levels, power dissipation of integrated circuits has moved to the forefront as a design constraint. Recent years have also seen a significant trend towards designs starting at the architectural (or RT) level. These trends demand accurate yet fast RT-level power estimation methodologies and tools. This thesis addresses issues and experiments associated with architectural-level power estimation. An execution-driven, cycle-accurate RT-level power simulator, SimplePower, was developed using transition-sensitive energy models. It is based on the architecture of a five-stage pipelined RISC datapath in both 0.35 μm and 0.8 μm technologies and can execute the integer subset of the SimpleScalar instruction set. SimplePower measures the energy consumed in the datapath, memory and on-chip buses. During the development of SimplePower, a partitioning power modeling technique was proposed to model the energy consumed in complex functional units. The accuracy of this technique was validated with HSPICE simulation results for a register file and a shifter. A novel, selectively gated pipeline register optimization technique was proposed to reduce the datapath energy consumption. It uses the decoded control signals to selectively gate the data fields of the pipeline registers. Simulation results show that this technique can reduce the datapath energy consumption by 18-36% for a set of benchmarks. A low-level back-end compiler optimization, register relabeling, was applied to reduce the on-chip instruction cache data bus switching activity. Its impact was evaluated by SimplePower. Results show that it can reduce the energy consumed in the instruction data buses by 3.55-16.90%. A quantitative evaluation was conducted of the impact of six state-of-the-art high-level compilation techniques on both datapath and memory energy consumption. The experimental results provide valuable insight for designers developing future power-aware compilation frameworks for embedded systems.
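
    The idea of a transition-sensitive energy model is that per-cycle energy depends on the previous and current input vectors of a unit, not just on activity counts. In SimplePower these costs come from characterized tables; the sketch below substitutes a flat per-bit-flip cost, with invented numbers, to show the accounting.

      # Transition-sensitive energy accounting sketch. Real models tabulate the
      # energy of each (previous, current) input pair from circuit simulation;
      # here a uniform per-bit-flip cost stands in for that table.
      ENERGY_PER_BIT_FLIP_PJ = 0.8          # assumed switching energy per toggled bit

      def cycle_energy(prev_inputs, curr_inputs):
          flips = bin(prev_inputs ^ curr_inputs).count("1")
          return flips * ENERGY_PER_BIT_FLIP_PJ

      total_pj, prev = 0.0, 0b0000
      for curr in [0b1010, 0b1011, 0b0011, 0b1111]:   # input trace for one unit
          total_pj += cycle_energy(prev, curr)
          prev = curr
      print(f"energy over trace: {total_pj:.1f} pJ")  # 2+1+1+2 flips -> 4.8 pJ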

  18. Orthographic influences on division of labor in learning to read Chinese and English: Insights from computational modeling

    PubMed Central

    Yang, Jianfeng; Shu, Hua; McCandliss, Bruce D.; Zevin, Jason D.

    2013-01-01

    Learning to read any language requires learning to map among print, sound and meaning. Writing systems differ in a number of factors that influence both the ease and rate with which reading skill can be acquired, as well as the eventual division of labor between phonological and semantic processes. Further, developmental reading disability manifests differently across writing systems, and may be related to different deficits in constitutive processes. Here we simulate some aspects of reading acquisition in Chinese and English using the same model architecture for both writing systems. The contribution of semantic and phonological processing to literacy acquisition in the two languages is simulated, including specific effects of phonological and semantic deficits. Further, we demonstrate that similar patterns of performance are observed when the same model is trained on both Chinese and English as an "early bilingual." The results are consistent with the view that reading skill is acquired by the application of statistical learning rules to mappings among print, sound and meaning, and that differences in the typical and disordered acquisition of reading skill between writing systems are driven by differences in the statistical patterns of the writing systems themselves, rather than differences in cognitive architecture of the learner. PMID:24587693

  19. Domain specific software architectures: Command and control

    NASA Technical Reports Server (NTRS)

    Braun, Christine; Hatch, William; Ruegsegger, Theodore; Balzer, Bob; Feather, Martin; Goldman, Neil; Wile, Dave

    1992-01-01

    GTE is the Command and Control contractor for the Domain Specific Software Architectures program. The objective of this program is to develop and demonstrate an architecture-driven, component-based capability for the automated generation of command and control (C2) applications. Such a capability will significantly reduce the cost of C2 applications development and will lead to improved system quality and reliability through the use of proven architectures and components. A major focus of GTE's approach is the automated generation of application components in particular subdomains. Our initial work in this area has concentrated in the message handling subdomain; we have defined and prototyped an approach that can automate one of the most software-intensive parts of C2 systems development. This paper provides an overview of the GTE team's DSSA approach and then presents our work on automated support for message processing.

  20. Overview of the Phoenix Entry, Descent and Landing System Architecture

    NASA Technical Reports Server (NTRS)

    Grover, Myron R., III; Cichy, Benjamin D.; Desai, Prasun N.

    2008-01-01

    NASA's Phoenix Mars Lander began its journey to Mars from Cape Canaveral, Florida in August 2007, but its journey to the launch pad began many years earlier in 1997 as NASA's Mars Surveyor Program 2001 Lander. In the intervening years, the entry, descent and landing (EDL) system architecture went through a series of changes, resulting in the system flown to the surface of Mars on May 25th, 2008. Some changes, such as entry velocity and landing site elevation, were the result of differences in mission design. Other changes, including the removal of hypersonic guidance, the reformulation of the parachute deployment algorithm, and the addition of the backshell avoidance maneuver, were driven by constant efforts to augment system robustness. An overview of the Phoenix EDL system architecture is presented along with rationales driving these architectural changes.

  1. Hydrodynamic controls on the long-term construction of large river floodplains and alluvial ridges

    NASA Astrophysics Data System (ADS)

    Nicholas, Andrew; Aalto, Rolf; Sambrook Smith, Gregory; Schwendel, Arved

    2017-04-01

    Floodplain construction involves the interplay between channel belt sedimentation and avulsion, overbank deposition of fines, and sediment reworking by channel migration. Each of these processes is controlled, in part, by within-channel and/or overbank hydraulics. However, while spatially-distributed hydrodynamic models are used routinely to simulate floodplain inundation and overbank sedimentation during individual floods, most existing models of long-term floodplain construction and alluvial architecture do not account for flood hydraulics explicitly. Instead, floodplain sedimentation is typically modelled as an exponential function of distance from the river, and avulsion thresholds are defined using topographic indices that quantify alluvial ridge morphology (e.g., lateral:downstream slope ratios or metrics of channel belt super-elevation). Herein, we apply a hydraulically driven model of floodplain evolution in order to quantify the controls on alluvial ridge construction and avulsion likelihood in large lowland rivers. We combine a simple model of meander migration and cutoff with a 2D grid-based model of flood hydrodynamics and overbank sedimentation. The latter involves a finite volume solution of the shallow water equations and an advection-diffusion model for suspended sediment transport. The model is used to carry out a series of numerical experiments to investigate floodplain construction for a range of flood regimes and sediment supply scenarios, and results are compared to field data from the Rio Beni system, northern Bolivia. Model results, supported by field data, illustrate that floodplain sedimentation is characterised by a high degree of intermittency that is driven by autogenic mechanisms (i.e., even in the absence of temporal variations in flood magnitude and sediment supply). Intermittency in overbank deposits occurs over a range of temporal and spatial scales, and is associated with the interaction between channel migration dynamics and crevasse splay formation. Moreover, alluvial ridge construction, by splay deposition, is controlled by the balance between in-channel and overbank sedimentation rates, and by ridge reworking linked to channel migration. The resulting relationship between sedimentation rates, ridge morphology and avulsion likelihood is more complex than that incorporated in existing models of long-term floodplain construction that neglect flood hydraulics. These results have implications for the interpretation of floodplain deposits as records of past flood regimes, and for the controls on the alluvial architecture of large river floodplains.
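
    For contrast, the conventional rule that the hydraulically driven model replaces treats overbank deposition as an exponential function of distance from the channel, as in the sketch below (parameter values invented for illustration).

      import math

      # Conventional distance-decay rule for overbank deposition.
      D0, LAMBDA = 5.0, 250.0   # near-bank rate (mm/yr) and e-folding length (m), assumed

      def overbank_deposition(distance_m):
          return D0 * math.exp(-distance_m / LAMBDA)

      for d in (0, 100, 500, 1000):
          print(f"{d:>5} m from channel: {overbank_deposition(d):.2f} mm/yr")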

  2. Concept of Operations for Collaboration and Discovery from Big Data Across Enterprise Data Warehouses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olama, Mohammed M; Nutaro, James J; Sukumar, Sreenivas R

    2013-01-01

    The success of data-driven business in government, science, and private industry is driving the need for seamless integration of intra- and inter-enterprise data sources to extract knowledge nuggets in the form of correlations, trends, patterns and behaviors previously not discovered due to physical and logical separation of datasets. Today, as the volume, velocity, variety and complexity of enterprise data keep increasing, the next generation of analysts is facing several challenges in the knowledge extraction process. Towards addressing these challenges, data-driven organizations that rely on the success of their analysts have to make investment decisions for sustainable data/information systems and knowledge discovery. Options that organizations are considering are newer storage/analysis architectures, better analysis machines, redesigned analysis algorithms, collaborative knowledge management tools, and query builders, amongst many others. In this paper, we present a concept of operations for enabling knowledge discovery that data-driven organizations can leverage towards making their investment decisions. We base our recommendations on the experience gained from integrating multi-agency enterprise data warehouses at the Oak Ridge National Laboratory to design the foundation of future knowledge-nurturing data-system architectures.

  3. Archetype Model-Driven Development Framework for EHR Web System

    PubMed Central

    Kimura, Eizen; Ishihara, Ken

    2013-01-01

    Objectives This article describes the Web application framework for Electronic Health Records (EHRs) we have developed to reduce construction costs for EHR systems. Methods The openEHR project has developed a clinical model driven architecture for future-proof interoperable EHR systems. This project provides the specifications to standardize clinical domain model implementations, upon which the ISO/CEN 13606 standards are based. The reference implementation has been formally described in Eiffel, and C# and Java implementations have been developed as references. Although scripting languages have become more popular in recent years because of their higher efficiency and faster development, they had not been used in openEHR implementations. Since 2007, we have used the Ruby language and Ruby on Rails (RoR) as an agile development platform to implement EHR systems in conformance with the openEHR specifications. Results We implemented almost all of the specifications, the Archetype Definition Language parser, and an RoR scaffold generator from archetypes. Although some problems have emerged, most of them have been resolved. Conclusions We have provided an agile EHR Web framework that can build up Web systems from archetype models using RoR. The feasibility of the archetype model to provide semantic interoperability of EHRs has been demonstrated, and we have verified that it is suitable for the construction of EHR systems. PMID:24523991

  4. A mechanistic model on the role of “radially-running” collagen fibers on dissection properties of human ascending thoracic aorta

    PubMed Central

    Pal, Siladitya; Tsamis, Alkiviadis; Pasta, Salvatore; D'Amore, Antonio; Gleason, Thomas G.; Vorp, David A.; Maiti, Spandan

    2014-01-01

    Aortic dissection (AoD) is a common condition that often leads to a life-threatening cardiovascular emergency. From a biomechanics viewpoint, AoD involves failure of load-bearing microstructural components of the aortic wall, mainly elastin and collagen fibers. Delamination strength of the aortic wall depends on the load-bearing capacity and local micro-architecture of these fibers, which may vary with age, disease and aortic location. Therefore, quantifying the role of fiber micro-architecture in the delamination strength of the aortic wall may lead to improved understanding of AoD. We present an experimentally driven modeling paradigm towards this goal. Specifically, we utilize collagen fiber micro-architecture, obtained in a parallel study from multi-photon microscopy, in a predictive mechanistic framework to characterize the delamination strength. We then validate our model against peel test experiments on human aortic strips, and utilize the model to predict the delamination strength of separate aortic strips and compare with experimental findings. We observe that the number density and failure energy of the radially-running collagen fibers control the peel strength. Furthermore, our model suggests that the lower delamination strength previously found for the circumferential direction in human aorta is related to a lower number density of radially-running collagen fibers in that direction. Our model sets the stage for an expanded future study that could predict AoD propagation in patient-specific aortic geometries and better understand factors that may influence propensity for occurrence. PMID:24484644

  5. A satellite-driven, client-server hydro-economic model prototype for agricultural water management

    NASA Astrophysics Data System (ADS)

    Maneta, Marco; Kimball, John; He, Mingzhu; Payton Gardner, W.

    2017-04-01

    Anticipating agricultural water demand, land reallocation, and impact on farm revenues associated with different policy or climate constraints is a challenge for water managers and for policy makers. While current integrated decision support systems based on programming methods provide estimates of farmer reaction to external constraints, they have important shortcomings such as the high cost of data collection surveys necessary to calibrate the model, biases associated with inadequate farm sampling, infrequent model updates and recalibration, model overfitting, or their deterministic nature, among other problems. In addition, the administration of water supplies and the generation of policies that promote sustainable agricultural regions depend on more than one bureau or office. Unfortunately, managers from local and regional agencies often use different datasets of variable quality, which complicates coordinated action. To overcome these limitations, we present a client-server, integrated hydro-economic modeling and observation framework driven by satellite remote sensing and other ancillary information from regional monitoring networks. The core of the framework is a stochastic data assimilation system that sequentially ingests remote sensing observations and corrects the parameters of the hydro-economic model at unprecedented spatial and temporal resolutions. An economic model of agricultural production, based on mathematical programming, requires information on crop type and extent, crop yield, crop transpiration and irrigation technology. A regional hydro-climatologic model provides biophysical constraints to an economic model of agricultural production with a level of detail that permits the study of the spatial impact of large- and small-scale water use decisions. Crop type and extent is obtained from the Cropland Data Layer (CDL), which is a multi-sensor operational classification of crops maintained by the United States Department of Agriculture. Because this product is only available for the conterminous United States, the framework is currently only applicable in this region. To obtain information on crop phenology, productivity and transpiration at adequate spatial and temporal frequencies we blend high spatial resolution Landsat information with high temporal fidelity MODIS imagery. The result is a 30 m, 8-day fused dataset of crop greenness that is subsequently transformed into productivity and transpiration by adapting existing forest productivity and transpiration algorithms for agricultural applications. To ensure all involved agencies work with identical information and that end-users are sheltered from the computational burden of storing and processing remote sensing data, this modeling framework is integrated in a client-server architecture based on the Hydra platform (www.hydraplatform.org). Assimilation and processing of resource-intensive remote sensing information, as well as hydrologic and other ancillary data, occur on the server side. With this architecture, our decision support system becomes a lightweight 'app' that connects to the server to retrieve the latest information regarding water demands, land use, yields and hydrologic information required to run different management scenarios. This architecture ensures that all agencies and teams involved in water management use the same, up-to-date information in their simulations.
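
    The core of the framework, sequentially correcting model parameters from remote sensing observations, can be sketched as a generic ensemble Kalman update on a scalar parameter. Everything below (the toy model, values and noise levels) is invented for illustration; it is not the project's assimilation code.

      import numpy as np

      # Generic ensemble Kalman update of one crop-model parameter.
      rng = np.random.default_rng(0)

      theta = rng.normal(1.0, 0.3, size=100)   # parameter ensemble (prior)
      predicted = 2.0 * theta                  # model-predicted observable (toy model)
      obs, obs_std = 2.6, 0.05                 # satellite retrieval and its uncertainty

      cov_tp = np.cov(theta, predicted)[0, 1]  # parameter-prediction covariance
      gain = cov_tp / (predicted.var(ddof=1) + obs_std**2)   # Kalman gain
      theta += gain * (obs + rng.normal(0, obs_std, 100) - predicted)  # perturbed-obs update

      print(f"updated parameter mean: {theta.mean():.3f}")   # pulled toward obs/2 = 1.3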

  6. Using crustal thickness and subsidence history on the Iberia-Newfoundland margins to constrain lithosphere deformation modes during continental breakup

    NASA Astrophysics Data System (ADS)

    Jeanniot, Ludovic; Kusznir, Nick; Manatschal, Gianreto; Mohn, Geoffroy

    2014-05-01

    Observations at magma-poor rifted margins such as Iberia-Newfoundland show a complex lithosphere deformation history during continental breakup and seafloor spreading initiation leading to complex OCT architecture with hyper-extended continental crust and lithosphere, exhumed mantle and scattered embryonic oceanic crust and continental slivers. Initiation of seafloor spreading requires both the rupture of the continental crust and lithospheric mantle, and the onset of decompressional melting. Their relative timing controls when mantle exhumation may occur; the presence or absence of exhumed mantle provides useful information on the timing of these events and constraints on lithosphere deformation modes. A single lithosphere deformation mode leading to continental breakup and sea-floor spreading cannot explain observations. We have determined the sequence of lithosphere deformation events for two profiles across the present-day conjugate Iberia-Newfoundland margins, using forward modelling of continental breakup and seafloor spreading initiation calibrated against observations of crustal basement thickness and subsidence. Flow fields, representing a sequence of lithosphere deformation modes, are generated by a 2D finite element viscous flow model (FeMargin), and used to advect lithosphere and asthenosphere temperature and material. FeMargin is kinematically driven by divergent deformation in the upper 15-20 km of the lithosphere inducing passive upwelling beneath that layer; extensional faulting and magmatic intrusions deform the topmost upper lithosphere, consistent with observations of deformation processes occurring at slow spreading ocean ridges (Cannat, 1996). Buoyancy enhanced upwelling, as predicted by Braun et al. (2000) is also kinematically included in the lithosphere deformation model. Melt generation by decompressional melting is predicted using the parameterization and methodology of Katz et al. (2003). The distribution of lithosphere deformation, the contribution of buoyancy driven upwelling and their spatial and temporal evolution including lateral migration are determined by using a series of numerical experiments, tested and calibrated against observations of crustal thicknesses and water-loaded subsidence. Pure-shear widths exert a strong control on the timing of crustal rupture and melt initiation; to satisfy OCT architecture, subsidence and mantle exhumation, we need to focus the deformation from a broad to a narrow region. The lateral migration of the deformation flow axis has an important control on the rupture of continental crust and lithosphere, melt initiation, their relative timing, the resulting OCT architecture and conjugate margin asymmetry. The numerical models are used to predict margin isostatic response and subsidence history.

  7. Architectural Lessons: Look Back In Order To Move Forward

    NASA Astrophysics Data System (ADS)

    Huang, T.; Djorgovski, S. G.; Caltagirone, S.; Crichton, D. J.; Hughes, J. S.; Law, E.; Pilone, D.; Pilone, T.; Mahabal, A.

    2015-12-01

    True elegance of scalable and adaptable architecture is not about incorporating the latest and greatest technologies. Its elegance is measured by its ability to scale and adapt as its operating environment evolves over time. Architecture is the link that bridges people, process, policies, interfaces, and technologies. Architectural development begins by observing the relationships that really matter to the problem domain. It proceeds with the creation of a single, shared, evolving pattern language, which everyone contributes to and everyone can use [C. Alexander, 1979]. Architects are true artists. Like all masterpieces, the value and strength of an architecture are measured not by the volume of publications, but by its ability to evolve. An architect must look back in order to move forward. This talk discusses some of the prior works, including an onboard data analysis system, a knowledgebase system, and a cloud-based Big Data platform, as enablers to help shape the new generation of Earth Science projects at NASA and EarthCube, where a community-driven architecture is the key to enabling data-intensive science. [C. Alexander, The Timeless Way of Building, Oxford University Press, 1979.]

  8. Bridging a divide: architecture for a joint hospital-primary care data warehouse.

    PubMed

    An, Jeff; Keshavjee, Karim; Mirza, Kashif; Vassanji, Karim; Greiver, Michelle

    2015-01-01

    Healthcare costs are driven by a surprisingly small number of patients. Predicting who is likely to require care in the near future could help reduce costs by pre-empting use of expensive health care resources such as emergency departments and hospitals. We describe the design of an architecture for a joint hospital-primary care data warehouse (JDW) that can monitor the effectiveness of in-hospital interventions in reducing readmissions and predict which patients are most likely to be admitted to hospital in the near future. The design identifies the key governance elements, the architectural principles, the business case, the privacy architecture, future work flows, the IT infrastructure, the data analytics and the high-level implementation plan for realization of the JDW. This architecture fills a gap in bridging data from two separate hospital and primary care organizations, rather than a single managed care entity with multiple locations. The JDW architecture design was well received by the stakeholders engaged and by senior leadership at the hospital and the primary care organization. Future plans include creating a demonstration system and conducting a pilot study.

  9. Systems biology driven software design for the research enterprise.

    PubMed

    Boyle, John; Cavnor, Christopher; Killcoyne, Sarah; Shmulevich, Ilya

    2008-06-25

    In systems biology, and many other areas of research, there is a need for the interoperability of tools and data sources that were not originally designed to be integrated. Due to the interdisciplinary nature of systems biology, and its association with high-throughput experimental platforms, there is an additional need to continually integrate new technologies. As scientists work in isolated groups, integration with other groups is rarely a consideration when building the required software tools. We illustrate an approach, through the discussion of a purpose-built software architecture, which allows disparate groups to reuse tools and access data sources in a common manner. The architecture allows for: the rapid development of distributed applications; interoperability, so it can be used by a wide variety of developers and computational biologists; development using standard tools, so that it is easy to maintain and does not require a large development effort; extensibility, so that new technologies and data types can be incorporated; and non-intrusive development, insofar as researchers need not adhere to a pre-existing object model. By using a relatively simple integration strategy, based upon a common identity system and dynamically discovered interoperable services, a lightweight software architecture can become the focal point through which scientists can both access and analyse the plethora of experimentally derived data.

  10. Cyberinfrastructure to Support Collaborative and Reproducible Computational Hydrologic Modeling

    NASA Astrophysics Data System (ADS)

    Goodall, J. L.; Castronova, A. M.; Bandaragoda, C.; Morsy, M. M.; Sadler, J. M.; Essawy, B.; Tarboton, D. G.; Malik, T.; Nijssen, B.; Clark, M. P.; Liu, Y.; Wang, S. W.

    2017-12-01

    Creating cyberinfrastructure to support reproducibility of computational hydrologic models is an important research challenge. Addressing this challenge requires open and reusable code and data with machine and human readable metadata, organized in ways that allow others to replicate results and verify published findings. Specific digital objects that must be tracked for reproducible computational hydrologic modeling include (1) raw initial datasets, (2) data processing scripts used to clean and organize the data, (3) processed model inputs, (4) model results, and (5) the model code with an itemization of all software dependencies and computational requirements. HydroShare is a cyberinfrastructure under active development designed to help users store, share, and publish digital research products in order to improve reproducibility in computational hydrology, with an architecture supporting hydrologic-specific resource metadata. Researchers can upload data required for modeling, add hydrology-specific metadata to these resources, and use the data directly within HydroShare.org for collaborative modeling using tools like CyberGIS, Sciunit-CLI, and JupyterHub that have been integrated with HydroShare to run models using notebooks, Docker containers, and cloud resources. Current research aims to implement the Structure For Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model within HydroShare to support hypothesis-driven hydrologic modeling while also taking advantage of the HydroShare cyberinfrastructure. The goal of this integration is to create the cyberinfrastructure that supports hypothesis-driven model experimentation, education, and training efforts by lowering barriers to entry, reducing the time spent on informatics technology and software development, and supporting collaborative research within and across research groups.

  11. HYDRA: A Middleware-Oriented Integrated Architecture for e-Procurement in Supply Chains

    NASA Astrophysics Data System (ADS)

    Alor-Hernandez, Giner; Aguilar-Lasserre, Alberto; Juarez-Martinez, Ulises; Posada-Gomez, Ruben; Cortes-Robles, Guillermo; Garcia-Martinez, Mario Alberto; Gomez-Berbis, Juan Miguel; Rodriguez-Gonzalez, Alejandro

    The Service-Oriented Architecture (SOA) development paradigm has emerged to improve the critical issues of creating, modifying and extending solutions for business process integration, incorporating process automation and automated exchange of information between organizations. Web services technology follows the SOA's principles for developing and deploying applications. Moreover, Web services are considered the platform for SOA, for both intra- and inter-enterprise communication. However, an SOA does not incorporate information about occurring events into business processes, even though such events and their delivery are central features of supply chain management. These concerns are addressed by an Event-Driven Architecture (EDA). Taking this into account, we propose a middleware-oriented integrated architecture that offers a brokering service for the procurement of products in a Supply Chain Management (SCM) scenario. As salient contributions, our system provides a hybrid architecture combining features of both SOA and EDA, and a set of mechanisms for business process pattern management, monitoring based on UML sequence diagrams, Web services-based management, event publish/subscription and reliable messaging service.
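
    The EDA half of the hybrid architecture reduces to an event publish/subscribe mechanism, sketched minimally below with invented topics and services; the actual system layers this over Web services with reliable messaging.

      from collections import defaultdict

      # Minimal publish/subscribe broker: services register handlers for topics,
      # and published events fan out to every subscriber of that topic.
      class Broker:
          def __init__(self):
              self.subscribers = defaultdict(list)

          def subscribe(self, topic, handler):
              self.subscribers[topic].append(handler)

          def publish(self, topic, event):
              for handler in self.subscribers[topic]:
                  handler(event)

      broker = Broker()
      broker.subscribe("order.placed", lambda e: print("procurement service:", e))
      broker.subscribe("order.placed", lambda e: print("monitoring service :", e))
      broker.publish("order.placed", {"sku": "A-42", "qty": 100})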

  12. Leaf-architectured 3D Hierarchical Artificial Photosynthetic System of Perovskite Titanates Towards CO2 Photoreduction Into Hydrocarbon Fuels

    PubMed Central

    Zhou, Han; Guo, Jianjun; Li, Peng; Fan, Tongxiang; Zhang, Di; Ye, Jinhua

    2013-01-01

    The development of an “artificial photosynthetic system” (APS) having both structural elements and reaction features analogous to those of natural photosynthesis, so as to achieve solar-driven water splitting and CO2 reduction, is highly challenging. Here, we demonstrate a design strategy for a promising 3D APS architecture acting as an efficient mass flow/light harvesting network, relying on the morphological transfer of a concept prototype (the leaf's 3D architecture) into perovskite titanates for CO2 photoreduction into hydrocarbon fuels (CO and CH4). The process uses artificial sunlight as the energy source, water as an electron donor and CO2 as the carbon source, mimicking what real leaves do. To our knowledge this is the first example utilizing biological systems as “architecture-directing agents” for APS towards CO2 photoreduction, which hints at a more general principle for APS architectures with a great variety of optimized biological geometries. This research would have great significance for the potential realization of a global carbon-neutral cycle. PMID:23588925

  13. Development and evaluation of SOA-based AAL services in real-life environments: a case study and lessons learned.

    PubMed

    Stav, Erlend; Walderhaug, Ståle; Mikalsen, Marius; Hanke, Sten; Benc, Ivan

    2013-11-01

    The proper use of ICT services can support seniors in living independently longer. While such services are starting to emerge, current proprietary solutions are often expensive, covering only isolated parts of seniors' needs, and lack support for sharing information between services and between users. For developers, the challenge is that it is complex and time consuming to develop high quality, interoperable services, and new techniques are needed to simplify the development and reduce the development costs. This paper provides a complete view of the experiences gained in the MPOWER project with respect to using model-driven development (MDD) techniques for Service Oriented Architecture (SOA) system development in the Ambient Assisted Living (AAL) domain. To address this challenge, the approach of the European research project MPOWER (2006-2009) was to investigate and record user needs, define a set of reusable software services based on these needs, and then implement pilot systems using these services. Further, a model-driven toolchain covering key development phases was developed to support software developers through this process. Evaluations were conducted both on the technical artefacts (methodology and tools) and on end-user experience from using the pilot systems in trial sites. The outcome of the work on user needs is a knowledge base recorded as a Unified Modeling Language (UML) model. This comprehensive model describes actors, use cases, and features derived from these. The model further includes the design of a set of software services, including full trace information back to the features and use cases motivating their design. Based on the model, the services were implemented for use in Service Oriented Architecture (SOA) systems, and are publicly available as open source software. The services were successfully used in the realization of two pilot applications. There is therefore a direct and traceable link from the user needs of the elderly, through the service design knowledge base, to the service and pilot implementations. The evaluation of the SOA approach on the developers in the project revealed that SOA is useful with respect to job performance and quality. Furthermore, they found SOA easy to use and supportive of AAL application development. An important finding is that the developers clearly report that they intend to use SOA in the future, but not for all types of projects. With respect to using model-driven development in Web services design and implementation, the developers reported that it was useful. However, it is important that the code generated from the models is correct if the full potential of MDD is to be achieved. The pilots and their evaluation in the trial sites showed that the services of the platform are sufficient to create suitable systems for end users in the domain. A SOA platform with a set of reusable domain services is a suitable foundation for more rapid development and tailoring of assisted living systems covering recurring needs among elderly users. It is feasible to realize a toolchain for model-driven development of SOA applications in the AAL domain, and such a toolchain can be accepted and found useful by software developers. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  14. Modulation of gene expression using electrospun scaffolds with templated architecture.

    PubMed

    Karchin, A; Wang, Y-N; Sanders, J E

    2012-06-01

    The fabrication of biomimetic scaffolds is a critical component to fulfill the promise of functional tissue-engineered materials. We describe herein a simple technique, based on printed circuit board manufacturing, to produce novel templates for electrospinning scaffolds for tissue-engineering applications. This technique facilitates fabrication of electrospun scaffolds with templated architecture, which we defined as a scaffold's bulk mechanical properties being driven by its fiber architecture. Electrospun scaffolds with templated architectures were characterized with regard to fiber alignment and mechanical properties. Fast Fourier transform analysis revealed a high degree of fiber alignment along the conducting traces of the templates. Mechanical testing showed that scaffolds demonstrated tunable mechanical properties as a function of templated architecture. Fibroblast-seeded scaffolds were subjected to a peak strain of 3 or 10% at 0.5 Hz for 1 h. Exposing seeded scaffolds to the low strain magnitude (3%) significantly increased collagen I gene expression compared to the high strain magnitude (10%) in a scaffold architecture-dependent manner. These experiments indicate that scaffolds with templated architectures can be produced, and modulation of gene expression is possible with templated architectures. This technology holds promise for the long-term goal of creating tissue-engineered replacements with the biomechanical and biochemical make-up of native tissues. Copyright © 2012 Wiley Periodicals, Inc.

  15. Choosing Training Delivery Media.

    ERIC Educational Resources Information Center

    Hybert, Peter R.

    2000-01-01

    Focuses on decision making about delivery media and introduces CADDI's Performance-based, Accelerated, Customer-Stakeholder-driven Training & Development(SM) (PACT) Processes for training and development (T&D). Describes the media decisions that correspond with the three levels of PACT design: Curriculum Architecture Design, Modular Curriculum…

  16. Real-time value-driven diagnosis

    NASA Technical Reports Server (NTRS)

    Dambrosio, Bruce

    1995-01-01

    Diagnosis is often thought of as an isolated task in theoretical reasoning (reasoning with the goal of updating our beliefs about the world). We present a decision-theoretic interpretation of diagnosis as a task in practical reasoning (reasoning with the goal of acting in the world), and sketch components of our approach to this task. These components include an abstract problem description, a decision-theoretic model of the basic task, a set of inference methods suitable for evaluating the decision representation in real-time, and a control architecture to provide the needed continuing coordination between the agent and its environment. A principal contribution of this work is the representation and inference methods we have developed, which extend previously available probabilistic inference methods and narrow, somewhat, the gap between probabilistic and logical models of diagnosis.
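
    The decision-theoretic reading of diagnosis can be made concrete with a toy example: given a belief distribution over faults and a utility for each action under each fault, the agent acts to maximize expected utility rather than merely refining its beliefs. The faults, actions, and payoffs below are invented for illustration, not taken from the paper.

        # Sketch of diagnosis as practical reasoning: pick the action with
        # the highest expected utility under the current fault distribution.
        # Faults, actions, and utility values are made up for illustration.
        beliefs = {"pump_fault": 0.6, "sensor_fault": 0.3, "no_fault": 0.1}

        # utility[action][fault]: payoff of taking `action` when `fault` holds.
        utility = {
            "replace_pump":   {"pump_fault": 10, "sensor_fault": -5, "no_fault": -5},
            "replace_sensor": {"pump_fault": -5, "sensor_fault": 10, "no_fault": -5},
            "do_nothing":     {"pump_fault": -10, "sensor_fault": -10, "no_fault": 5},
        }

        def expected_utility(action: str) -> float:
            return sum(p * utility[action][f] for f, p in beliefs.items())

        print({a: round(expected_utility(a), 2) for a in utility})
        print("chosen action:", max(utility, key=expected_utility))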

  17. Design of the HELICS High-Performance Transmission-Distribution-Communication-Market Co-Simulation Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan S; Krishnamurthy, Dheepak; Top, Philip

    This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.

  18. Design of the HELICS High-Performance Transmission-Distribution-Communication-Market Co-Simulation Framework: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan S; Krishnamurthy, Dheepak; Top, Philip

    This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.
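
    The co-iteration feature described in both records above amounts to a fixed-point loop between federates at each time step. The sketch below shows the generic idea with two invented stand-in models exchanging voltage and load until they agree; it is a toy illustration of the concept, not the HELICS API.

        # Toy illustration of co-iteration between two federates at one time
        # step: each publishes a value, subscribes to the other, and the pair
        # iterates to a fixed point before time advances. The models are
        # invented stand-ins; this is not the HELICS API.
        def transmission(distribution_load: float) -> float:
            """Voltage sags slightly as the distribution load rises."""
            return 1.05 - 0.02 * distribution_load

        def distribution(voltage: float) -> float:
            """Load drawn by the feeder depends weakly on supply voltage."""
            return 2.0 / voltage

        def co_iterate(tol: float = 1e-9, max_iter: int = 50):
            load = 1.0  # initial guess for this time step
            for i in range(max_iter):
                volts = transmission(load)
                new_load = distribution(volts)
                if abs(new_load - load) < tol:
                    return volts, new_load, i + 1
                load = new_load
            raise RuntimeError("federates failed to converge")

        volts, load, iters = co_iterate()
        print(f"converged in {iters} iterations: V={volts:.6f}, P={load:.6f}")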

  19. A Unified Data-Driven Approach for Programming In Situ Analysis and Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aiken, Alex

    The placement and movement of data is becoming the key limiting factor on both performance and energy efficiency of high performance computations. As systems generate more data, it is becoming increasingly difficult to actually move that data elsewhere for post-processing, as the rate of improvements in supporting I/O infrastructure is not keeping pace. Together, these trends are creating a shift in how we think about exascale computations, from a viewpoint that focuses on FLOPS to one that focuses on data and data-centric operations as fundamental to the reasoning about, and optimization of, scientific workflows on extreme-scale architectures. The overarching goal of our effort was the study of a unified data-driven approach for programming applications and in situ analysis and visualization. Our work was to understand the interplay between data-centric programming model requirements at extreme-scale and the overall impact of those requirements on the design, capabilities, flexibility, and implementation details for both applications and the supporting in situ infrastructure. In this context, we made many improvements to the Legion programming system (one of the leading data-centric models today) and demonstrated in situ analyses on real application codes using these improvements.

  20. A multitasking finite state architecture for computer control of an electric powertrain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burba, J.C.

    1984-01-01

    Finite state techniques provide a common design language between the control engineer and the computer engineer for event driven computer control systems. They simplify communication and provide a highly maintainable control system understandable by both. This paper describes the development of a control system for an electric vehicle powertrain utilizing finite state concepts. The basics of finite state automata are provided as a framework to discuss a unique multitasking software architecture developed for this application. The architecture employs conventional time-sliced techniques with task scheduling controlled by a finite state machine representation of the control strategy of the powertrain. The complexities of excitation variable sampling in this environment are also considered.
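
    The core idea, a finite state machine that both documents the control strategy and drives task scheduling, can be sketched compactly. The states, events, and tasks below are invented placeholders, not the powertrain controller described in the record.

        # Sketch of a finite-state, time-sliced control architecture: the
        # state machine encodes the control strategy and decides which tasks
        # run in each slice. States, events, and tasks are illustrative.
        TRANSITIONS = {
            ("idle", "key_on"): "precharge",
            ("precharge", "bus_ready"): "drive",
            ("drive", "key_off"): "shutdown",
        }

        TASKS = {  # tasks scheduled in each state, once per time slice
            "idle":      ["monitor_battery"],
            "precharge": ["monitor_battery", "ramp_bus_voltage"],
            "drive":     ["monitor_battery", "sample_throttle", "control_motor"],
            "shutdown":  ["discharge_bus"],
        }

        def run(events):
            state = "idle"
            for tick, event in enumerate(events):
                state = TRANSITIONS.get((state, event), state)
                print(f"slice {tick}: state={state:9s} tasks={TASKS[state]}")

        run(["key_on", None, "bus_ready", None, "key_off"])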

  1. Reversible Self-Assembly of 3D Architectures Actuated by Responsive Polymers.

    PubMed

    Zhang, Cheng; Su, Jheng-Wun; Deng, Heng; Xie, Yunchao; Yan, Zheng; Lin, Jian

    2017-11-29

    The assembly of three-dimensional (3D) architectures with defined configurations has important applications in broad areas. Among the various approaches to constructing 3D structures, stress-driven assembly provides the capability of creating 3D architectures in a broad range of functional materials with unique merits. However, 3D architectures built via previous methods are simple, irreversible, or not free-standing. Furthermore, the substrates employed for the assembly remain flat, and are thus not involved as parts of the final 3D architectures. Herein, we report a reversible self-assembly of various free-standing 3D architectures actuated by the self-folding of smart polymer substrates with programmed geometries. The strategically designed polymer substrates can respond to external stimuli, such as organic solvents, to initiate the 3D assembly process and subsequently become parts of the final 3D architectures. The self-assembly process is highly controllable via origami and kirigami designs patterned by direct laser writing. Self-assembled geometries include 3D architectures such as "flower", "rainbow", "sunglasses", "box", "pyramid", "grating", and "armchair". The reported self-assembly also shows wide applicability to various materials including epoxy, polyimide, laser-induced graphene, and metal films. The device examples include 3D architectures integrated with a micro light-emitting diode and a flex sensor, indicating potential applications in soft robotics, bioelectronics, microelectromechanical systems, and others.

  2. A Modular, Data Driven System: Architecture for GSFC Ground Systems: GSFC's Mission Services Evolution Center (GMSEC)

    NASA Technical Reports Server (NTRS)

    Cary, Everett; Smith, Danford

    2004-01-01

    The GSFC Mission Services Evolution Center (GMSEC) was established in 2001 to coordinate ground and flight data systems development and services at NASA's Goddard Space Flight Center (GSFC). GMSEC system architecture represents a new way to build the next generation systems to be used for a variety of missions for years to come. The old approach was to find or build the best products available and integrate them into a reusable system to meet everyone's needs. The new approach assumes that needs, products, and technology will change.
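
    The architectural shift described here, from point-to-point product integration to components decoupled by data-driven middleware, is essentially a publish/subscribe message bus. The sketch below shows the pattern in miniature; the subject names and components are invented, and this is not the GMSEC API.

        # Sketch of the message-bus idea behind a modular, data-driven ground
        # system: components publish to named subjects and subscribe without
        # knowing each other, so products can be swapped as needs change.
        from collections import defaultdict

        class MessageBus:
            def __init__(self):
                self._subscribers = defaultdict(list)

            def subscribe(self, subject: str, handler) -> None:
                self._subscribers[subject].append(handler)

            def publish(self, subject: str, message: dict) -> None:
                for handler in self._subscribers[subject]:
                    handler(message)

        bus = MessageBus()
        bus.subscribe("telemetry.power", lambda m: print("archiver:", m))
        bus.subscribe("telemetry.power", lambda m: print("display :", m))
        bus.publish("telemetry.power", {"bus_voltage": 28.1, "t": 120})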

  3. An asynchronous data-driven readout prototype for CEPC vertex detector

    NASA Astrophysics Data System (ADS)

    Yang, Ping; Sun, Xiangming; Huang, Guangming; Xiao, Le; Gao, Chaosong; Huang, Xing; Zhou, Wei; Ren, Weiping; Li, Yashu; Liu, Jianchao; You, Bihui; Zhang, Li

    2017-12-01

    The Circular Electron Positron Collider (CEPC) is proposed as a Higgs boson and/or Z boson factory for high-precision measurements on the Higgs boson. The precision of the secondary vertex impact parameter plays an important role in such measurements, which typically rely on flavor-tagging. Silicon CMOS Pixel Sensors (CPS) are thus the most promising technology candidate for a CEPC vertex detector, as they can feature high position resolution, low power consumption, and fast readout simultaneously. For the R&D of the CEPC vertex detector, we have developed a prototype, MIC4, in the Towerjazz 180 nm CMOS Image Sensor (CIS) process. We have proposed and implemented a new architecture of asynchronous zero-suppression data-driven readout inside the matrix, combined with a binary front-end inside the pixel. The matrix contains 128 rows and 64 columns with a small pixel pitch of 25 μm. The readout architecture combines the traditional OR-gate chain inside a super pixel with a priority arbiter tree between the super pixels, reading out only the relevant pixels. The MIC4 architecture is introduced in more detail in this paper. It will be taped out in May and will be characterized when the chip comes back.
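
    The essence of zero-suppressed, data-driven readout is that only hit pixels produce addresses, serialised in a fixed priority order. The sketch below mimics that behaviour in software on a random sparse frame; the lexicographic priority stands in for the OR-gate chain and arbiter tree, and the occupancy is an invented illustration.

        # Sketch of zero-suppressed, data-driven readout: only hit pixels are
        # read, with a fixed priority order standing in for the arbiter tree
        # that serialises super-pixel requests. Matrix size matches the
        # record; the hit pattern is random and illustrative.
        import numpy as np

        def data_driven_readout(hits: np.ndarray):
            """Yield (row, col) addresses of hit pixels in priority order."""
            rows, cols = np.nonzero(hits)          # zero suppression
            order = np.lexsort((cols, rows))       # row-major priority
            for r, c in zip(rows[order], cols[order]):
                yield int(r), int(c)

        rng = np.random.default_rng(1)
        frame = rng.random((128, 64)) < 0.001      # sparse hit pattern
        addresses = list(data_driven_readout(frame))
        print(f"{len(addresses)} of {frame.size} pixels read:", addresses[:5])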

  4. Data-driven sampling method for building 3D anatomical models from serial histology

    NASA Astrophysics Data System (ADS)

    Salunke, Snehal Ulhas; Ablove, Tova; Danforth, Theresa; Tomaszewski, John; Doyle, Scott

    2017-03-01

    In this work, we investigate the effect of slice sampling on 3D models of tissue architecture using serial histopathology. We present a method for using a single fully-sectioned tissue block as pilot data, whereby we build a fully-realized 3D model and then determine the optimal set of slices needed to reconstruct the salient features of the model objects under biological investigation. In our work, we are interested in the 3D reconstruction of microvessel architecture in the trigone region between the vagina and the bladder. This region serves as a potential avenue for drug delivery to treat bladder infection. We collect and co-register 23 serial sections of CD31-stained tissue images (6 μm thick sections), from which four microvessels are selected for analysis. To build each model, we perform semi-automatic segmentation of the microvessels. Subsampled meshes are then created by removing slices from the stack, interpolating the missing data, and reconstructing the mesh. We calculate the Hausdorff distance between the full and subsampled meshes to determine the optimal sampling rate for the modeled structures. In our application, we found that a sampling rate of 50% (corresponding to just 12 slices) was sufficient to recreate the structure of the microvessels without significant deviation from the fully rendered mesh. This pipeline effectively minimizes the number of histopathology slides required for 3D model reconstruction, and can be utilized to either (1) reduce the overall costs of a project, or (2) enable additional analysis on the intermediate slides.
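
    The subsampling evaluation reduces to a simple loop: drop slices, interpolate the missing geometry, and measure the Hausdorff distance to the fully sampled model. The sketch below applies this to a synthetic 3D centreline rather than a segmented vessel mesh; the geometry and slice counts are illustrative only, with the keep-every-2 case corresponding to the 50% sampling rate discussed above.

        # Sketch of the slice-subsampling test: drop slices from a stack,
        # interpolate the missing contours, and compare to the full model
        # with the Hausdorff distance. A 1D vessel centreline stands in for
        # the segmented mesh; the geometry is synthetic.
        import numpy as np

        def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
            return max(d.min(axis=1).max(), d.min(axis=0).max())

        z = np.arange(24, dtype=float)                  # 24 serial sections
        full = np.column_stack([np.sin(z / 4), np.cos(z / 6), z])

        for keep_every in (2, 4, 8):                    # 50%, 25%, 12.5%
            zs = z[::keep_every]
            sub = np.column_stack([np.interp(z, zs, full[::keep_every, 0]),
                                   np.interp(z, zs, full[::keep_every, 1]), z])
            print(f"keep 1/{keep_every}: Hausdorff = {hausdorff(full, sub):.4f}")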

  5. Clinical data interoperability based on archetype transformation.

    PubMed

    Costa, Catalina Martínez; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2011-10-01

    The semantic interoperability between health information systems is a major challenge to improve the quality of clinical practice and patient safety. In recent years many projects have faced this problem and provided solutions based on specific standards and technologies in order to satisfy the needs of a particular scenario. Most of such solutions cannot be easily adapted to new scenarios, thus more global solutions are needed. In this work, we have focused on the semantic interoperability of electronic healthcare records standards based on the dual model architecture and we have developed a solution that has been applied to ISO 13606 and openEHR. The technological infrastructure combines reference models, archetypes and ontologies, with the support of Model-driven Engineering techniques. For this purpose, the interoperability infrastructure developed in previous work by our group has been reused and extended to cover the requirements of data transformation. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. Object-oriented model-driven control

    NASA Technical Reports Server (NTRS)

    Drysdale, A.; Mcroberts, M.; Sager, J.; Wheeler, R.

    1994-01-01

    A monitoring and control subsystem architecture has been developed that capitalizes on the use of model-driven monitoring and predictive control, knowledge-based data representation, and artificial reasoning in an operator support mode. We have developed an object-oriented model of a Controlled Ecological Life Support System (CELSS). The model, based on the NASA Kennedy Space Center CELSS breadboard data, tracks carbon, hydrogen, oxygen, carbon dioxide, and water. It estimates and tracks resource-related parameters such as mass, energy, and manpower, and measurements such as the growing area required for balance. We are developing an interface with the breadboard systems that is compatible with artificial reasoning. Initial work is being done on the use of expert systems and user interface development. This paper presents an approach to defining universally applicable CELSS monitoring and control issues, and to implementing appropriate monitoring and control capability for a particular instance: the KSC CELSS Breadboard Facility.

  7. Risk Driven Outcome-Based Command and Control (C2) Assessment

    DTIC Science & Technology

    2000-01-01

    shaping the risk ranking scores into more interpretable and statistically sound risk measures. Regression analysis was applied to determine what...

  8. Solving Partial Differential Equations in a data-driven multiprocessor environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaudiot, J.L.; Lin, C.M.; Hosseiniyar, M.

    1988-12-31

    Partial differential equations can be found in a host of engineering and scientific problems. The emergence of new parallel architectures has spurred research in the definition of parallel PDE solvers. Concurrently, highly programmable systems such as data-flow architectures have been proposed for the exploitation of large-scale parallelism. The implementation of some partial differential equation solvers (such as the Jacobi method) on a tagged-token data-flow graph is demonstrated here. Asynchronous methods (chaotic relaxation) are studied, and new scheduling approaches (the Token No-Labeling scheme) are introduced in order to support the implementation of the asynchronous methods in a data-driven environment. New high-level data-flow language program constructs are introduced in order to handle chaotic operations. Finally, the performance of the program graphs is demonstrated by a deterministic simulation of a message-passing data-flow multiprocessor. An analysis of the overhead in the data-flow graphs is undertaken to demonstrate the limits of parallel operations in data-flow PDE program graphs.
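
    The Jacobi method referred to above is a natural fit for data-driven execution because each sweep's updates depend only on the previous iterate, so all interior points can fire concurrently; chaotic relaxation goes further and lets points update from whatever neighbour values happen to be available. A minimal synchronous sketch for the 2D Laplace equation, with toy grid size and boundary values:

        # Sketch of the Jacobi method for the 2D Laplace equation: every
        # interior point updates from its neighbours' previous values, so all
        # updates in a sweep are independent and can run concurrently.
        import numpy as np

        def jacobi(u: np.ndarray, tol: float = 1e-5, max_iter: int = 10000):
            for it in range(max_iter):
                new = u.copy()
                new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                          u[1:-1, :-2] + u[1:-1, 2:])
                if np.max(np.abs(new - u)) < tol:
                    return new, it + 1
                u = new
            return u, max_iter

        grid = np.zeros((32, 32))
        grid[0, :] = 100.0               # hot boundary; others held at 0
        solution, iters = jacobi(grid)
        print(f"converged in {iters} sweeps; centre T = {solution[16, 16]:.2f}")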

  9. Application of process monitoring to anomaly detection in nuclear material processing systems via system-centric event interpretation of data from multiple sensors of varying reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Humberto E.; Simpson, Michael F.; Lin, Wen-Chiao

    In this paper, we apply an advanced safeguards approach and associated methods for process monitoring to a hypothetical nuclear material processing system. The assessment regarding the state of the processing facility is conducted at a system-centric level formulated in a hybrid framework. This utilizes an architecture for integrating both time- and event-driven data and analysis for decision making. While the time-driven layers of the proposed architecture encompass more traditional process monitoring methods based on time-series data and analysis, the event-driven layers encompass operation monitoring methods based on discrete event data and analysis. By integrating process- and operation-related information and methodologies within a unified framework, the task of anomaly detection is greatly improved. This is because decision making can benefit not only from known time-series relationships among measured signals but also from known event-sequence relationships among generated events. This available knowledge at both the time-series and discrete-event layers can then be effectively used to synthesize observation solutions that optimally balance sensor and data processing requirements. The application of the proposed approach is then implemented on an illustrative monitored system based on pyroprocessing, and results are discussed.

  10. Modeling the Personal Health Ecosystem.

    PubMed

    Blobel, Bernd; Brochhausen, Mathias; Ruotsalainen, Pekka

    2018-01-01

    Complex ecosystems like the pHealth one combine different domains represented by a huge variety of different actors (human beings, organizations, devices, applications, components) belonging to different policy domains, coming from different disciplines, deploying different methodologies, terminologies, and ontologies, offering different levels of knowledge, skills, and experiences, acting in different scenarios and accommodating different business cases to meet the intended business objectives. For correctly modeling such systems, a system-oriented, architecture-centric, ontology-based, policy-driven approach is inevitable, thereby following established Good Modeling Best Practices. However, most of the existing standards, specifications and tools for describing, representing, implementing and managing health (information) systems reflect the advancement of information and communication technology (ICT) represented by different evolutionary levels of data modeling. The paper presents a methodology for integrating, adopting and advancing models, standards, specifications as well as implemented systems and components on the way towards the aforementioned ultimate approach, so meeting the challenge we face when transforming health systems towards ubiquitous, personalized, predictive, preventive, participative, and cognitive health and social care.

  11. A learnable parallel processing architecture towards unity of memory and computing

    NASA Astrophysics Data System (ADS)

    Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.

    2015-08-01

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area.

  12. A learnable parallel processing architecture towards unity of memory and computing.

    PubMed

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-08-14

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area.

  13. An integrative view of microbiome-host interactions in inflammatory bowel diseases

    PubMed Central

    Wlodarska, Marta; Kostic, Aleksandar D.; Xavier, Ramnik J.

    2015-01-01

    The intestinal microbiota, which is composed of bacteria, viruses, and micro-eukaryotes, acts as an accessory organ system with distinct functions along the intestinal tract that are critical for health. This review focuses on how the microbiota drives intestinal disease through alterations in microbial community architecture, disruption of the mucosal barrier, modulation of innate and adaptive immunity, and dysfunction of the enteric nervous system. Inflammatory bowel disease is used as a model system to understand these microbial-driven pathologies, but the knowledge gained in this space is extended to less well studied intestinal diseases that may also have an important microbial component, including environmental enteropathy and chronic colitis-associated colorectal cancer. PMID:25974300

  14. Astrophysics and Big Data: Challenges, Methods, and Tools

    NASA Astrophysics Data System (ADS)

    Garofalo, Mauro; Botta, Alessio; Ventre, Giorgio

    2017-06-01

    Nowadays there is no field of research that is not flooded with data. Among the sciences, astrophysics has always been driven by the analysis of massive amounts of data. The development of new and more sophisticated observation facilities, both ground-based and spaceborne, has made data more and more complex (Variety) and has driven an exponential growth in both data Volume (i.e., on the order of petabytes) and Velocity of production and transmission. Therefore, new and advanced processing solutions will be needed to process this huge amount of data. We investigate some of these solutions, based on machine learning models as well as tools and architectures for Big Data analysis that can be exploited in the astrophysical context.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal /Fluid Team

    The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.

  16. Evolution of the PWWP-domain encoding genes in the plant and animal lineages

    PubMed Central

    2012-01-01

    Background: Conserved domains are recognized as the building blocks of eukaryotic proteins. Domains showing a tendency to occur in diverse combinations (‘promiscuous’ domains) are involved in versatile architectures in proteins with different functions. Current models, based on global-level analyses of domain combinations in multiple genomes, have suggested that the propensity of some domains to associate with other domains in high-level architectures increases with organismal complexity. Alternative models using domain-based phylogenetic trees propose that domains have become promiscuous independently in different lineages through convergent evolution and are, thus, random with no functional or structural preferences. Here we test whether complex protein architectures have occurred by accretion from simpler systems and whether the appearance of multidomain combinations parallels organismal complexity. As a model, we analyze the modular evolution of the PWWP domain and ask whether its appearance in combinations with other domains into multidomain architectures is linked with the occurrence of more complex life-forms. Whether high-level combinations of domains are conserved and transmitted as stable units (cassettes) through evolution is examined in the genomes of plant or metazoan species selected for their established position in the evolution of the respective lineages.

    Results: Using the domain-tree approach, we analyze the evolutionary origins and distribution patterns of the promiscuous PWWP domain to understand the principles of its modular evolution and its existence in combination with other domains in higher-level protein architectures. We found that as a single module the PWWP domain occurs only in proteins with a limited, mainly species-specific distribution. Earlier, it was suggested that domain promiscuity is a fast-changing (volatile) feature shaped by natural selection and that only a few domains retain their promiscuity status throughout evolution. In contrast, our data show that most of the multidomain PWWP combinations in extant multicellular organisms (humans or land plants) are present in their unicellular ancestral relatives, suggesting they have been transmitted through evolution as conserved linear arrangements (‘cassettes’). Among the most interesting biologically relevant results is the finding that the genes of the two plant Trithorax family subgroups (ATX1/2 and ATX3/4/5) have different phylogenetic origins. The two subgroups occur together in the earliest land plants Physcomitrella patens and Selaginella moellendorffii.

    Conclusion: Gain/loss of a single PWWP domain is observed throughout evolution, reflecting dynamic lineage- or species-specific events. In contrast, higher-level protein architectures involving the PWWP domain have survived as stable arrangements driven by evolutionary descent. The association of PWWP domains with the DNA methyltransferases in O. tauri and in the metazoan lineage seems to have occurred independently, consistent with convergent evolution. Our results do not support models wherein more complex protein architectures involving the PWWP domain occur with the appearance of more evolutionarily advanced life forms. PMID:22734652

  17. Modeling and executing electronic health records driven phenotyping algorithms using the NQF Quality Data Model and JBoss® Drools Engine.

    PubMed

    Li, Dingcheng; Endle, Cory M; Murthy, Sahana; Stancl, Craig; Suesse, Dale; Sottara, Davide; Huff, Stanley M; Chute, Christopher G; Pathak, Jyotishman

    2012-01-01

    With increasing adoption of electronic health records (EHRs), the need for formal representations for EHR-driven phenotyping algorithms has been recognized for some time. The recently proposed Quality Data Model (QDM) from the National Quality Forum (NQF) provides an information model and a grammar that is intended to represent data collected during routine clinical care in EHRs as well as the basic logic required to represent the algorithmic criteria for phenotype definitions. The QDM is further aligned with Meaningful Use standards to ensure that the clinical data and algorithmic criteria are represented in a consistent, unambiguous and reproducible manner. However, phenotype definitions represented in QDM, while structured, cannot be executed readily on existing EHRs. Rather, human interpretation and subsequent implementation are required steps in this process. To address this need, the current study investigates the open-source JBoss® Drools rules engine for automatic translation of QDM criteria into rules for execution over EHR data. In particular, using the Apache Foundation's Unstructured Information Management Architecture (UIMA) platform, we developed a translator tool for converting QDM-defined phenotyping algorithm criteria into executable Drools rules scripts, and demonstrated their execution on real patient data from Mayo Clinic to identify cases for Coronary Artery Disease and Diabetes. To the best of our knowledge, this is the first study illustrating a framework and an approach for executing phenotyping criteria modeled in QDM using the Drools business rules management system.

  18. Modeling and Executing Electronic Health Records Driven Phenotyping Algorithms using the NQF Quality Data Model and JBoss® Drools Engine

    PubMed Central

    Li, Dingcheng; Endle, Cory M; Murthy, Sahana; Stancl, Craig; Suesse, Dale; Sottara, Davide; Huff, Stanley M.; Chute, Christopher G.; Pathak, Jyotishman

    2012-01-01

    With increasing adoption of electronic health records (EHRs), the need for formal representations for EHR-driven phenotyping algorithms has been recognized for some time. The recently proposed Quality Data Model (QDM) from the National Quality Forum (NQF) provides an information model and a grammar that is intended to represent data collected during routine clinical care in EHRs as well as the basic logic required to represent the algorithmic criteria for phenotype definitions. The QDM is further aligned with Meaningful Use standards to ensure that the clinical data and algorithmic criteria are represented in a consistent, unambiguous and reproducible manner. However, phenotype definitions represented in QDM, while structured, cannot be executed readily on existing EHRs. Rather, human interpretation and subsequent implementation are required steps in this process. To address this need, the current study investigates the open-source JBoss® Drools rules engine for automatic translation of QDM criteria into rules for execution over EHR data. In particular, using the Apache Foundation’s Unstructured Information Management Architecture (UIMA) platform, we developed a translator tool for converting QDM-defined phenotyping algorithm criteria into executable Drools rules scripts, and demonstrated their execution on real patient data from Mayo Clinic to identify cases for Coronary Artery Disease and Diabetes. To the best of our knowledge, this is the first study illustrating a framework and an approach for executing phenotyping criteria modeled in QDM using the Drools business rules management system. PMID:23304325
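
    The translation target in both records above is a production-rule engine that evaluates declarative criteria over patient records. As a rough, language-shifted analogue (Python predicates rather than generated Drools rules), the sketch below flags cases matching a hypothetical two-part phenotype; the codes, thresholds, and records are invented.

        # Sketch of rule-based phenotyping in the spirit of the QDM-to-Drools
        # translation: declarative criteria are evaluated over patient
        # records to flag cases. Criteria, codes, and records are invented;
        # a real system would execute generated Drools rules over EHR data.
        def has_dx(patient: dict, code: str) -> bool:
            return code in patient["diagnoses"]

        # Hypothetical phenotype: diabetes diagnosis plus an HbA1c over 6.5.
        criteria = [
            lambda p: has_dx(p, "E11"),             # type 2 diabetes code
            lambda p: p["labs"].get("hba1c", 0) > 6.5,
        ]

        patients = [
            {"id": 1, "diagnoses": {"E11"}, "labs": {"hba1c": 7.2}},
            {"id": 2, "diagnoses": {"I25"}, "labs": {"hba1c": 5.4}},
        ]

        cases = [p["id"] for p in patients if all(rule(p) for rule in criteria)]
        print("phenotype cases:", cases)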

  19. Efficient Ada multitasking on a RISC register window architecture

    NASA Technical Reports Server (NTRS)

    Kearns, J. P.; Quammen, D.

    1987-01-01

    This work addresses the problem of reducing context switch overhead on a processor which supports a large register file - a register file much like that which is part of the Berkeley RISC processors and several other emerging architectures (which are not necessarily reduced instruction set machines in the purest sense). Such a reduction in overhead is particularly desirable in a real-time embedded application, in which task-to-task context switch overhead may result in failure to meet crucial deadlines. A storage management technique by which a context switch may be implemented as cheaply as a procedure call is presented. The essence of this technique is the avoidance of the save/restore of registers on the context switch. This is achieved through analysis of the static source text of an Ada tasking program. Information gained during that analysis directs the optimized storage management strategy for that program at run time. A formal verification of the technique in terms of an operational control model and an evaluation of the technique's performance via simulations driven by synthetic Ada program traces are presented.

  20. The software architecture to control the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Oya, I.; Füßling, M.; Antonino, P. O.; Conforti, V.; Hagge, L.; Melkumyan, D.; Morgenstern, A.; Tosti, G.; Schwanke, U.; Schwarz, J.; Wegner, P.; Colomé, J.; Lyard, E.

    2016-07-01

    The Cherenkov Telescope Array (CTA) project is an initiative to build two large arrays of Cherenkov gamma-ray telescopes. CTA will be deployed as two installations, one in the northern and the other in the southern hemisphere, containing dozens of telescopes of different sizes. CTA is a big step forward in the field of ground-based gamma-ray astronomy, not only because of the expected scientific return, but also due to the order-of-magnitude larger scale of the instrument to be controlled. The performance requirements associated with such a large and distributed astronomical installation require a thoughtful analysis to determine the best software solutions. The array control and data acquisition (ACTL) work-package within the CTA initiative will deliver the software to control and acquire the data from the CTA instrumentation. In this contribution we present the current status of the formal ACTL system decomposition into software building blocks and the relationships among them. The system is modelled via the Systems Modelling Language (SysML) formalism. To cope with the complexity of the system, this architecture model is subdivided into different perspectives. The relationships with the stakeholders and external systems are used to create the first perspective, the context of the ACTL software system. Use cases are employed to describe the interaction of those external elements with the ACTL system and are traced to a hierarchy of functionalities (abstract system functions) describing the internal structure of the ACTL system. These functions are then traced to fully specified logical elements (software components), whose deployment as technical elements is also described. This modelling approach allows us to decompose the ACTL software into elements to be created and the flow of information within the system, providing us with a clear way to identify sub-system interdependencies. This architectural approach allows us to build the ACTL system model, trace requirements to deliverables (source code, documentation, etc.), and implement a flexible use-case-driven software development approach thanks to the traceability from use cases to the logical software elements. The ALMA Common Software (ACS) container/component framework, used for the control of the Atacama Large Millimeter/submillimeter Array (ALMA), is the basis for the ACTL software and as such is considered an integral part of the software architecture.

  1. Differential solvation of intrinsically disordered linkers drives the formation of spatially organized droplets in ternary systems of linear multivalent proteins

    NASA Astrophysics Data System (ADS)

    Harmon, Tyler S.; Holehouse, Alex S.; Pappu, Rohit V.

    2018-04-01

    Intracellular biomolecular condensates are membraneless organelles that encompass large numbers of multivalent protein and nucleic acid molecules. The bodies assemble via a combination of liquid–liquid phase separation and gelation. A majority of condensates include multiple components and show multilayered organization, as opposed to being well-mixed unitary liquids. Here, we put forward a simple thermodynamic framework to describe the emergence of spatially organized droplets in multicomponent systems comprising linear multivalent polymers, also known as associative polymers. These polymers, which mimic proteins and/or RNA, have the architecture of domains or motifs known as stickers that are interspersed by flexible spacers known as linkers. Using a minimalist numerical model for a four-component system, we have identified features of linear multivalent molecules that are necessary and sufficient for generating spatially organized droplets. We show that differences in sequence-specific effective solvation volumes of disordered linkers between interaction domains enable the formation of spatially organized droplets. Molecules with linkers that are preferentially solvated are driven to the interface with the bulk solvent, whereas molecules that have linkers with negligible effective solvation volumes form cores in the core–shell architectures that emerge in the minimalist four-component systems. Our modeling has relevance for understanding the physical determinants of spatially organized membraneless organelles.

  2. Simulating Hydrologic Flow and Reactive Transport with PFLOTRAN and PETSc on Emerging Fine-Grained Parallel Computer Architectures

    NASA Astrophysics Data System (ADS)

    Mills, R. T.; Rupp, K.; Smith, B. F.; Brown, J.; Knepley, M.; Zhang, H.; Adams, M.; Hammond, G. E.

    2017-12-01

    As the high-performance computing community pushes towards the exascale horizon, power and heat considerations have driven the increasing importance and prevalence of fine-grained parallelism in new computer architectures. High-performance computing centers have become increasingly reliant on GPGPU accelerators and "manycore" processors such as the Intel Xeon Phi line, and 512-bit SIMD registers have even been introduced in the latest generation of Intel's mainstream Xeon server processors. The high degree of fine-grained parallelism and more complicated memory hierarchy considerations of such "manycore" processors present several challenges to existing scientific software. Here, we consider how the massively parallel, open-source hydrologic flow and reactive transport code PFLOTRAN - and the underlying Portable, Extensible Toolkit for Scientific Computation (PETSc) library on which it is built - can best take advantage of such architectures. We will discuss some key features of these novel architectures and our code optimizations and algorithmic developments targeted at them, and present experiences drawn from working with a wide range of PFLOTRAN benchmark problems on these architectures.

  3. Advanced Design and Implementation of a Control Architecture for Long Range Autonomous Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Martin-Alvarez, A.; Hayati, S.; Volpe, R.; Petras, R.

    1999-01-01

    An advanced design and implementation of a Control Architecture for Long Range Autonomous Planetary Rovers is presented using hierarchical top-down task decomposition, and the common structure of each design, based on feedback control theory, is presented. Graphical programming is presented as a common, intuitive design language when a large design team is composed of managers, architecture designers, engineers, programmers, and maintenance personnel. The design of the control architecture builds on the classic control concepts of cyclic data processing and event-driven reaction to achieve all the reasoning and behaviors needed. For this purpose, a commercial graphical tool that includes the mentioned control capabilities is presented. Message queues are used for intercommunication among control functions, allowing Artificial Intelligence (AI) reasoning techniques based on queue manipulation. Experimental results show a highly autonomous control system running in real time on top of the JPL micro-rover Rocky 7, simultaneously controlling several robotic devices. This paper validates the synergy between Artificial Intelligence and classic control concepts in an advanced Control Architecture for Long Range Autonomous Planetary Rovers.

  4. Modelling of hydrothermal fluid flow and structural architecture in an extensional basin, Ngakuru Graben, Taupo Rift, New Zealand

    NASA Astrophysics Data System (ADS)

    Kissling, W. M.; Villamor, P.; Ellis, S. M.; Rae, A.

    2018-05-01

    Present-day geothermal activity on the margins of the Ngakuru graben and evidence of fossil hydrothermal activity in the central graben suggest that a graben-wide system of permeable intersecting faults acts as the principal conduit for fluid flow to the surface. We have developed numerical models of fluid and heat flow in a regional-scale 2-D cross-section of the Ngakuru Graben. The models incorporate simplified representations of two 'end-member' fault architectures (one symmetric at depth, the other highly asymmetric) which are consistent with the surface locations and dips of the Ngakuru graben faults. The models are used to explore controls on buoyancy-driven convective fluid flow which could explain the differences between the past and present hydrothermal systems associated with these faults. The models show that the surface flows from the faults are strongly controlled by the fault permeability, the fault system architecture and the location of the heat source with respect to the faults in the graben. In particular, fault intersections at depth allow exchange of fluid between faults, and the location of the heat source on the footwall of normal faults can facilitate upflow along those faults. These controls give rise to two distinct fluid flow regimes in the fault network. The first, a regular flow regime, is characterised by a nearly unchanging pattern of fluid flow vectors within the fault network as the fault permeability evolves. In the second, complex flow regime, the surface flows depend strongly on fault permeability, and can fluctuate in an erratic manner. The direction of flow within faults can reverse in both regimes as fault permeability changes. Both flow regimes provide insights into the differences between the present-day and fossil geothermal systems in the Ngakuru graben. Hydrothermal upflow along the Paeroa fault seems to have occurred, possibly continuously, for tens of thousands of years, while upflow in other faults in the graben has switched on and off during the same period. An asymmetric graben architecture with the Paeroa being the major boundary fault will facilitate the predominant upflow along this fault. Upflow on the axial faults is more difficult to explain with this modelling. It occurs most easily with an asymmetric graben architecture and heat sources close to the graben axis (which could be associated with remnant heat from recent eruptions from Okataina Volcanic Centre). Temporal changes in upflow can also be associated with acceleration and deceleration of fault activity if this is considered a proxy for fault permeability. Other explanations for temporal variations in hydrothermal activity not explored here are different permeability on different faults, and different permeability along fault strike.

  5. Systems biology driven software design for the research enterprise

    PubMed Central

    Boyle, John; Cavnor, Christopher; Killcoyne, Sarah; Shmulevich, Ilya

    2008-01-01

    Background: In systems biology, and many other areas of research, there is a need for the interoperability of tools and data sources that were not originally designed to be integrated. Due to the interdisciplinary nature of systems biology, and its association with high throughput experimental platforms, there is an additional need to continually integrate new technologies. As scientists work in isolated groups, integration with other groups is rarely a consideration when building the required software tools.

    Results: We illustrate an approach, through the discussion of a purpose-built software architecture, which allows disparate groups to reuse tools and access data sources in a common manner. The architecture allows for: the rapid development of distributed applications; interoperability, so it can be used by a wide variety of developers and computational biologists; development using standard tools, so that it is easy to maintain and does not require a large development effort; extensibility, so that new technologies and data types can be incorporated; and non-intrusive development, insofar as researchers need not adhere to a pre-existing object model.

    Conclusion: By using a relatively simple integration strategy, based upon a common identity system and dynamically discovered interoperable services, a light-weight software architecture can become the focal point through which scientists can both get access to and analyse the plethora of experimentally derived data. PMID:18578887

  6. Muscle volume is related to trabecular and cortical bone architecture in typically developing children.

    PubMed

    Bajaj, Deepti; Allerton, Brianne M; Kirby, Joshua T; Miller, Freeman; Rowe, David A; Pohlig, Ryan T; Modlesky, Christopher M

    2015-12-01

    Muscle is strongly related to cortical bone architecture in children; however, the relationship between muscle volume and trabecular bone architecture is poorly studied. The aim of this study was to determine if muscle volume is related to trabecular bone architecture in children and if the relationship is different than the relationship between muscle volume and cortical bone architecture. Forty typically developing children (20 boys and 20 girls; 6 to 12y) were included in the study. Measures of trabecular bone architecture [i.e., apparent trabecular bone volume to total volume (appBV/TV), trabecular number (appTb.N), trabecular thickness (appTb.Th) and trabecular separation (appTb.Sp)] in the distal femur, cortical bone architecture [cortical volume, total volume, section modulus (Z) and polar moment of inertia (J)] in the midfemur, muscle volume in the midthigh and femur length were assessed using magnetic resonance imaging. Total physical activity and moderate-to-vigorous physical activity were assessed using an accelerometer-based activity monitor worn around the waist for four days. Calcium intake was assessed using diet records. Relationships among the measures were tested using multiple linear regression analysis. Muscle volume was moderately-to-strongly related to measures of trabecular bone architecture [appBV/TV (r=0.81), appTb.N (r=0.53), appTb.Th (r=0.67), appTb.Sp (r=-0.71); all p<0.001] but more strongly related to measures of cortical bone architecture [cortical volume (r=0.96), total volume (r=0.94), Z (r=0.94) and J (r=0.92; all p<0.001)]. Similar relationships were observed between femur length and measures of trabecular (p<0.01) and cortical (p<0.001) bone architecture. Sex, physical activity and calcium intake were not related to any measure of bone architecture (p>0.05). Because muscle volume and femur length were strongly related (r=0.91, p<0.001), muscle volume was scaled for femur length (muscle volume/femur length(2.77)). When muscle volume/femur length(2.77) was included in a regression model with femur length, sex, physical activity and calcium intake, muscle volume/femur length(2.77) was a significant predictor of appBV/TV, appTb.Th and appTb.Sp (partial r=0.44 to 0.49, p<0.05) and all measures of cortical bone architecture (partial r=0.47 to 0.54; p<0.01). The findings suggest that muscle volume in the midthigh is related to trabecular bone architecture in the distal femur of typically developing children. The relationship is weaker than the relationship between muscle volume in the midthigh and cortical bone architecture in the midfemur, but the discrepancy is driven, in large part, by the greater dependence of cortical bone architecture measures on femur length. Copyright © 2015. Published by Elsevier Inc.

  7. Muscle volume is related to trabecular and cortical bone architecture in typically developing children

    PubMed Central

    Bajaj, Deepti; Allerton, Brianne M.; Kirby, Joshua T.; Miller, Freeman; Rowe, David A.; Pohlig, Ryan T.; Modlesky, Christopher M.

    2016-01-01

    Introduction: Muscle is strongly related to cortical bone architecture in children; however, the relationship between muscle volume and trabecular bone architecture is poorly studied. The aim of this study was to determine if muscle volume is related to trabecular bone architecture in children and if the relationship is different than the relationship between muscle volume and cortical bone architecture.

    Materials and methods: Forty typically developing children (20 boys and 20 girls; 6 to 12 y) were included in the study. Measures of trabecular bone architecture [apparent trabecular bone volume to total volume (appBV/TV), trabecular number (appTb.N), trabecular thickness (appTb.Th), and trabecular separation (appTb.Sp)] in the distal femur, cortical bone architecture [cortical volume, medullary volume, total volume, polar moment of inertia (J) and section modulus (Z)] in the midfemur, muscle volume in the midthigh and femur length were assessed using magnetic resonance imaging. Total and moderate-to-vigorous physical activity were assessed using an accelerometer-based activity monitor worn around the waist for four days. Calcium intake was assessed using diet records. Relationships among the measures were tested using multiple linear regression analysis.

    Results: Muscle volume was moderately-to-strongly related to measures of trabecular bone architecture [appBV/TV (r = 0.81), appTb.N (r = 0.53), appTb.Th (r = 0.67), appTb.Sp (r = −0.71); all p < 0.001] but more strongly related to measures of cortical bone architecture [cortical volume (r = 0.96), total volume (r = 0.94), Z (r = 0.94) and J (r = 0.92); all p < 0.001]. Similar relationships were observed between femur length and measures of trabecular (p < 0.01) and cortical (p < 0.001) bone architecture. Sex, physical activity and calcium intake were not related to any measure of bone architecture (p > 0.05). Because muscle volume and femur length were strongly related (r = 0.91, p < 0.001), muscle volume was scaled for femur length (muscle volume/femur length^2.77). When muscle volume/femur length^2.77 was included in a regression model with femur length, sex, physical activity and calcium intake, muscle volume/femur length^2.77 was a significant predictor of appBV/TV, appTb.Th and appTb.Sp (partial r = 0.44 to 0.49, p < 0.05) and all measures of cortical bone architecture (partial r = 0.47 to 0.54; p < 0.01).

    Conclusions: The findings suggest that muscle volume in the midthigh is related to trabecular bone architecture in the distal femur of children. The relationship is weaker than the relationship between muscle volume in the midthigh and cortical bone architecture in the midfemur, but the discrepancy is driven, in large part, by the greater dependence of cortical bone architecture measures on femur length. PMID:26187197
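
    The scaling step used in both versions of this study is plain arithmetic: muscle volume is divided by femur length raised to the 2.77 power before entering the regression, removing the shared dependence on bone length. A sketch with invented values:

        # The allometric normalisation above in plain arithmetic; the input
        # values here are invented for illustration, not study data.
        muscle_volume_cm3 = 1150.0   # hypothetical midthigh muscle volume
        femur_length_cm = 38.0       # hypothetical femur length

        scaled = muscle_volume_cm3 / femur_length_cm ** 2.77
        print(f"muscle volume / femur length^2.77 = {scaled:.4f}")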

  8. An ontology-based telemedicine tasks management system architecture.

    PubMed

    Nageba, Ebrahim; Fayn, Jocelyne; Rubel, Paul

    2008-01-01

    The recent developments in ambient intelligence and ubiquitous computing offer new opportunities for the design of advanced telemedicine systems providing high-quality services, anywhere, anytime. In this paper we present an approach for building an ontology-based, task-driven telemedicine system. The architecture is composed of a task management server, a communication server and a knowledge base, enabling decision making that takes into account different telemedical concepts such as actors, resources, services and the Electronic Health Record. The final objective is to provide intelligent management of the different types of available human, material and communication resources.

  9. XMI2USE: A Tool for Transforming XMI to USE Specifications

    NASA Astrophysics Data System (ADS)

    Sun, Wuliang; Song, Eunjee; Grabow, Paul C.; Simmonds, Devon M.

    The UML-based Specification Environment (USE) tool supports syntactic analysis, type checking, consistency checking, and dynamic validation of invariants and pre-/post conditions specified in the Object Constraint Language (OCL). Due to its animation and analysis power, it is useful when checking critical non-functional properties such as security policies. However, the USE tool requires one to specify (i.e., "write") a model using its own textual language and does not allow one to import any model specification files created by other UML modeling tools. Hence, to make the best use of existing UML tools, we often create a model with OCL constraints using a modeling tool such as the IBM Rational Software Architect (RSA) and then use the USE tool for model validation. This approach, however, requires a manual transformation between the specifications of two different tool formats, which is error-prone and diminishes the benefit of automated model-level validations. In this paper, we describe our own implementation of a specification transformation engine that is based on the Model Driven Architecture (MDA) framework and currently supports automatic tool-level transformations from RSA to USE.

  10. Two problems in multiphase biological flows: Blood flow and particulate transport in microvascular network, and pseudopod-driven motility of amoeboid cells

    NASA Astrophysics Data System (ADS)

    Bagchi, Prosenjit

    2016-11-01

    In this talk, two problems in multiphase biological flows will be discussed. The first is the direct numerical simulation of whole blood and drug particulates in microvascular networks. Blood in microcirculation behaves as a dense suspension of heterogeneous cells. The erythrocytes are extremely deformable, while inactivated platelets and leukocytes are nearly rigid. A significant progress has been made in recent years in modeling blood as a dense cellular suspension. However, many of these studies considered the blood flow in simple geometry, e.g., straight tubes of uniform cross-section. In contrast, the architecture of a microvascular network is very complex with bifurcating, merging and winding vessels, posing a further challenge to numerical modeling. We have developed an immersed-boundary-based method that can consider blood cell flow in physiologically realistic and complex microvascular network. In addition to addressing many physiological issues related to network hemodynamics, this tool can be used to optimize the transport properties of drug particulates for effective organ-specific delivery. Our second problem is pseudopod-driven motility as often observed in metastatic cancer cells and other amoeboid cells. We have developed a multiscale hydrodynamic model to simulate such motility. We study the effect of cell stiffness on motility as the former has been considered as a biomarker for metastatic potential. Funded by the National Science Foundation.

  11. A fuzzy Petri-net-based mode identification algorithm for fault diagnosis of complex systems

    NASA Astrophysics Data System (ADS)

    Propes, Nicholas C.; Vachtsevanos, George

    2003-08-01

    Complex dynamical systems such as aircraft, manufacturing systems, chillers, motor vehicles, submarines, etc. exhibit continuous and event-driven dynamics. These systems undergo several discrete operating modes from startup to shutdown. For example, a certain shipboard system may be operating at half load or full load or may be at start-up or shutdown. Of particular interest are extreme or "shock" operating conditions, which tend to severely impact fault diagnosis or the progression of a fault leading to a failure. Fault conditions are strongly dependent on the operating mode. Therefore, it is essential that in any diagnostic/prognostic architecture, the operating mode be identified as accurately as possible so that such functions as feature extraction, diagnostics, prognostics, etc. can be correlated with the predominant operating conditions. This paper introduces a mode identification methodology that incorporates both time- and event-driven information about the process. A fuzzy Petri net is used to represent the possible successive mode transitions and to detect events from processed sensor signals signifying a mode change. The operating mode is initialized and verified by analysis of the time-driven dynamics through a fuzzy logic classifier. An evidence combiner module is used to combine the results from both the fuzzy Petri net and the fuzzy logic classifier to determine the mode. Unlike most event-driven mode identifiers, this architecture will provide automatic mode initialization through the fuzzy logic classifier and robustness through the combining of evidence of the two algorithms. The mode identification methodology is applied to an AC Plant typically found as a component of a shipboard system.
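
    The evidence-combination step can be pictured as fusing two fuzzy membership vectors, one from the event-driven transition model and one from the time-driven classifier, and renormalising. The sketch below uses a simple product-and-normalise rule with invented membership values; the paper's actual fuzzy Petri net and combiner are richer than this.

        # Sketch of combining event-driven and time-driven evidence for mode
        # identification: a transition model (standing in for the fuzzy Petri
        # net) and a signal classifier each score candidate modes, and the
        # combiner fuses them. Membership values here are invented.
        MODES = ["startup", "half_load", "full_load", "shutdown"]

        def combine(event_scores: dict, classifier_scores: dict) -> dict:
            """Fuse two fuzzy membership vectors and renormalise."""
            fused = {m: event_scores.get(m, 0.0) * classifier_scores.get(m, 0.0)
                     for m in MODES}
            total = sum(fused.values()) or 1.0
            return {m: v / total for m, v in fused.items()}

        event_scores = {"half_load": 0.7, "full_load": 0.3}       # Petri net
        classifier_scores = {"half_load": 0.6, "full_load": 0.5,  # time-driven
                             "startup": 0.1}
        scores = combine(event_scores, classifier_scores)
        print(max(scores, key=scores.get), scores)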

  12. The AI Bus architecture for distributed knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Schultz, Roger D.; Stobie, Iain

    1991-01-01

    The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running in heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture are described, along with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes, or demons, provide an event-driven means of giving active objects shared access to resources and to each other without violating their security.
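
    The blackboard-plus-demons pattern at the heart of this description can be conveyed with a toy sketch (in Python rather than the AI Bus's C++): knowledge sources post entries to a shared blackboard, and registered demons fire event-driven callbacks when a matching entry appears. All names are illustrative.

    ```python
    class Blackboard:
        """Shared store; demons are (predicate, callback) pairs that fire
        event-driven whenever a posted entry matches the predicate."""
        def __init__(self):
            self.entries = {}
            self.demons = []

        def watch(self, predicate, callback):
            self.demons.append((predicate, callback))

        def post(self, key, value):
            self.entries[key] = value
            for predicate, callback in self.demons:
                if predicate(key, value):
                    callback(key, value)

    # A knowledge source reacts to new sensor data and posts a conclusion.
    bb = Blackboard()
    bb.watch(lambda k, v: k == "sensor/temp" and v > 100.0,
             lambda k, v: bb.post("alarm/overheat", v))
    bb.post("sensor/temp", 112.0)
    print(bb.entries)   # {'sensor/temp': 112.0, 'alarm/overheat': 112.0}
    ```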

  13. Process and data fragmentation-oriented enterprise network integration with collaboration modelling and collaboration agents

    NASA Astrophysics Data System (ADS)

    Li, Qing; Wang, Ze-yuan; Cao, Zhi-chao; Du, Rui-yang; Luo, Hao

    2015-08-01

    With the process of globalisation and the development of management models and information technology, enterprise cooperation and collaboration has developed from intra-enterprise integration, outsourcing and inter-enterprise integration, and supply chain management, to virtual enterprises and enterprise networks. Some 'midfielder' enterprises have begun to serve different supply chains, combining related supply chains into a complex enterprise network. The main challenges for enterprise network integration and collaboration are business process and data fragmentation beyond organisational boundaries. This paper reviews the requirements of enterprise network integration and collaboration, as well as the development of new information technologies. Based on service-oriented architecture (SOA), collaboration modelling and collaboration agents are introduced to solve problems of collaborative management for service convergence under conditions of process and data fragmentation. A model-driven methodology is developed to design and deploy the integrating framework. An industrial experiment is designed and implemented to illustrate the use of the technologies developed in this paper.

  14. Interactive Volume Exploration of Petascale Microscopy Data Streams Using a Visualization-Driven Virtual Memory Approach.

    PubMed

    Hadwiger, M; Beyer, J; Jeong, Won-Ki; Pfister, H

    2012-12-01

    This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
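
    The visualization-driven control flow can be stated schematically: ray casting samples a virtual block address space, misses are recorded rather than blocking the renderer, and 3D blocks are constructed on demand (from 2D microscope tiles) only for data actually touched. The sketch below is a loose rendering of that idea; the block builder is a hypothetical placeholder.

    ```python
    class VirtualVolume:
        """Cache of 3D bricks keyed by (level, x, y, z). Misses are queued
        during ray casting and resolved out-of-core afterwards."""
        def __init__(self, build_block):
            self.build_block = build_block    # hypothetical: block_id -> 3D array
            self.cache = {}
            self.miss_queue = set()

        def sample(self, block_id):
            block = self.cache.get(block_id)
            if block is None:
                self.miss_queue.add(block_id)   # cache miss propagated backwards
            return block                        # renderer falls back to coarser level

        def resolve_misses(self):
            """On-demand, out-of-core construction of only the touched blocks."""
            for block_id in sorted(self.miss_queue):
                self.cache[block_id] = self.build_block(block_id)  # from 2D tiles
            self.miss_queue.clear()
    ```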

  15. CSDMS2.0: Computational Infrastructure for Community Surface Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Syvitski, J. P.; Hutton, E.; Peckham, S. D.; Overeem, I.; Kettner, A.

    2012-12-01

    The Community Surface Dynamics Modeling System (CSDMS) is an NSF-supported, international and community-driven program that seeks to transform the science and practice of earth-surface dynamics modeling. CSDMS integrates a diverse community of more than 850 geoscientists representing 360 international institutions (academic, government, industry) from 60 countries and is supported by a CSDMS Interagency Committee (22 Federal agencies) and a CSDMS Industrial Consortium (18 companies). CSDMS presently distributes more than 200 Open Source models and modeling tools, access to high performance computing clusters in support of developing and running models, and a suite of products for education and knowledge transfer. CSDMS software architecture employs frameworks and services that convert stand-alone models into flexible "plug-and-play" components to be assembled into larger applications. CSDMS2.0 will support model applications within a web browser, on a wider variety of computational platforms, and on other high performance computing clusters to ensure robustness and sustainability of the framework. Conversion of stand-alone models into "plug-and-play" components will employ automated wrapping tools. Methods for quantifying model uncertainty are being adapted as part of the modeling framework. Benchmarking data are being incorporated into the CSDMS modeling framework to support model inter-comparison. Finally, a robust mechanism for ingesting and utilizing semantic mediation databases is being developed within the modeling framework. Six new community initiatives are being pursued: 1) an earth-ecosystem modeling initiative to capture ecosystem dynamics and ensuing interactions with landscapes; 2) a geodynamics initiative to investigate the interplay among climate, geomorphology, and tectonic processes; 3) an Anthropocene modeling initiative, to incorporate mechanistic models of human influences; 4) a coastal vulnerability modeling initiative, with emphasis on deltas and their multiple threats and stressors; 5) a continental margin modeling initiative, to capture extreme oceanic and atmospheric events generating turbidity currents in the Gulf of Mexico; and 6) a CZO Focus Research Group, to develop compatibility between CSDMS architecture and protocols and Critical Zone Observatory-developed models and data.
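
    The "plug-and-play" idea can be stated as a small interface contract, loosely in the spirit of CSDMS's component wrapping (this stub is an assumption-laden sketch, not the actual CSDMS API): any model exposing these calls can be composed by a framework driver.

    ```python
    class Component:
        """Minimal plug-and-play contract a framework can drive."""
        def initialize(self, config_file): raise NotImplementedError
        def update(self): raise NotImplementedError     # advance one time step
        def finalize(self): raise NotImplementedError
        def get_value(self, name): raise NotImplementedError
        def set_value(self, name, value): raise NotImplementedError

    def run_coupled(source, sink, variable, steps):
        """Drive two wrapped models in lockstep, passing one variable
        from source to sink each step (the framework's job, not the models')."""
        for _ in range(steps):
            source.update()
            sink.set_value(variable, source.get_value(variable))
            sink.update()
        source.finalize()
        sink.finalize()
    ```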

  16. An analysis of Belgian Cannabis Social Clubs' supply practices: A shapeshifting model?

    PubMed

    Pardal, Mafalda

    2018-04-13

    Cannabis Social Clubs (CSCs) are associations of cannabis users that collectively organize the cultivation and distribution of cannabis. As this middle-ground supply model has been active in Belgium for over a decade, this paper aims to examine CSCs' supply practices, noting any shifts from previously reported features of the model. We draw on interviews with directors of seven currently active Belgian CSCs (n = 21) and their cannabis growers (n = 23). These data were complemented by additional fieldwork, as well as a review of CSCs' key internal documents. Most Belgian CSCs are formally registered non-profit associations. One of the Belgian CSCs has developed a structure of sub-divisions and regional chapters. The Belgian CSCs supply cannabis to members only, and in some cases only medical users are admitted. CSCs rely on in-house growers, ensuring supply in a cooperative and closed-circuit way, despite changes to the distribution methods. The associations are relatively small-scale and non-commercially driven. The introduction of formal quality control practices remains challenging. As the CSC model is often included in discussions about cannabis policy, but remains in most cases driven by self-regulatory efforts, it is important to take stock of how CSCs' supply function has been implemented in practice, as doing so will improve our understanding of the model and of the wider range of cannabis 'supply architectures'. This paper highlights the continuity and changes in CSC practices, noting the emergence of several different variants of the CSC model, which are classified in a first CSC typology.

  17. Multi-tiered S-SOA, Parameter-Driven New Islamic Syariah Products of Holistic Islamic Banking System (HiCORE): Virtual Banking Environment

    NASA Astrophysics Data System (ADS)

    Halimah, B. Z.; Azlina, A.; Sembok, T. M.; Sufian, I.; Sharul Azman, M. N.; Azuraliza, A. B.; Zulaiha, A. O.; Nazlia, O.; Salwani, A.; Sanep, A.; Hailani, M. T.; Zaher, M. Z.; Azizah, J.; Nor Faezah, M. Y.; Choo, W. O.; Abdullah, Chew; Sopian, B.

    The Holistic Islamic Banking System (HiCORE) is a banking system suitable for virtual banking environments, created through a university-industry collaboration initiative between Universiti Kebangsaan Malaysia (UKM) and Fuziq Software Sdn Bhd. HiCORE was modeled on a multi-tiered Simple Services-Oriented Architecture (S-SOA), using a parameter-based semantic approach. HiCORE's existence is timely, as the financial world is looking for a new approach to creating banking and financial products that are interest-free or based on Islamic Syariah principles and jurisprudence. Interest-free banking has caught the interest of bankers and financiers all over the world. HiCORE's parameter-based module houses the Customer Information File (CIF), Deposit and Financing components; this module represents the third tier of the multi-tiered Simple SOA approach. This paper highlights the multi-tiered, parameter-driven approach to the creation of new Islamic products based on the 'dalil' (Quran), 'syarat' (rules) and 'rukun' (procedures) required by Syariah principles and jurisprudence, reflected in the semantic ontology embedded in the parameter module of the system.

  18. Feature integration theory revisited: dissociating feature detection and attentional guidance in visual search.

    PubMed

    Chan, Louis K H; Hayward, William G

    2009-02-01

    In feature integration theory (FIT; A. Treisman & S. Sato, 1990), feature detection is driven by independent dimensional modules, and other searches are driven by a master map of locations that integrates dimensional information into salience signals. Although recent theoretical models have largely abandoned this distinction, some observed results are difficult to explain in its absence. The present study measured dimension-specific performance during detection and localization, tasks that require operation of dimensional modules and the master map, respectively. Results showed a dissociation between tasks in terms of both dimension-switching costs and cross-dimension attentional capture, reflecting a dimension-specific nature for detection tasks and a dimension-general nature for localization tasks. In a feature-discrimination task, results precluded an explanation based on response mode. These results are interpreted to support FIT's postulation that different mechanisms are involved in parallel and focal attention searches. This indicates that the FIT architecture should be adopted to explain the current results and that a variety of visual attention findings can be addressed within this framework.

  19. Space Generic Open Avionics Architecture (SGOAA): Overview

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1992-01-01

    A space generic open avionics architecture created for NASA is described. It will serve as the basis for entities in spacecraft core avionics, capable of being tailored by NASA for future space program avionics ranging from small vehicles such as Moon ascent/descent vehicles to large ones such as Mars transfer vehicles or orbiting stations. The standard consists of: (1) a system architecture; (2) a generic processing hardware architecture; (3) a six class architecture interface model; (4) a system services functional subsystem architectural model; and (5) an operations control functional subsystem architectural model.

  20. External Dependencies-Driven Architecture Discovery and Analysis of Implemented Systems

    NASA Technical Reports Server (NTRS)

    Ganesan, Dharmalingam; Lindvall, Mikael; Ron, Monica

    2014-01-01

    A method for architecture discovery and analysis of implemented systems (AIS) is disclosed. The premise of the method is that architecture decisions are inspired and influenced by the external entities that the software system makes use of. Examples of such external entities are COTS components, frameworks, and ultimately even the programming language itself and its libraries. Traces of these architecture decisions can thus be found in the implemented software, manifested in the way software systems use such external entities. While this fact is often ignored in contemporary reverse engineering methods, the AIS method actively leverages and makes use of the dependencies to external entities as a starting point for the architecture discovery. The AIS method is demonstrated using NASA's Space Network Access System (SNAS). The results show that, with abundant evidence, the method offers reusable and repeatable guidelines for discovering the architecture and locating potential risks (e.g. low testability, decreased performance) that are hidden deep in the implementation. The analysis is conducted by using external dependencies to identify, classify and review a minimal set of key source code files. Given the benefits of analyzing external dependencies as a way to discover architectures, it is argued that external dependencies deserve to be treated as first-class citizens during reverse engineering. Also discussed is the current structure of a knowledge base of external entities and analysis questions, with strategies for obtaining answers.
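
    The method's starting point can be illustrated in miniature, transposed to Python sources: harvest each file's external imports, then rank files by how many external entities they touch to obtain a small set of key files to review first. The internal-prefix filter is an analyst-supplied assumption, and module classification lists would come from the knowledge base the abstract mentions.

    ```python
    import ast, pathlib
    from collections import Counter

    def external_dependencies(root, internal_prefixes=("myproject",)):
        """Map each source file under root to the external modules it imports."""
        usage, per_file = Counter(), {}
        for path in pathlib.Path(root).rglob("*.py"):
            try:
                tree = ast.parse(path.read_text(errors="ignore"))
            except SyntaxError:
                continue
            mods = set()
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    mods |= {a.name.split(".")[0] for a in node.names}
                elif isinstance(node, ast.ImportFrom) and node.module:
                    mods.add(node.module.split(".")[0])
            ext = {m for m in mods if not m.startswith(internal_prefixes)}
            per_file[str(path)] = ext
            usage.update(ext)
        return usage, per_file

    # Files touching the most external entities are reviewed first.
    usage, per_file = external_dependencies("src")
    key_files = sorted(per_file, key=lambda p: len(per_file[p]), reverse=True)
    ```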

  1. Geology and Design: Formal and Rational Connections

    NASA Astrophysics Data System (ADS)

    Eriksson, S. C.; Brewer, J.

    2016-12-01

    Geological forms and the manmade environment have always been inextricably linked. From the time that Upper Paleolithic man created drawings in the Lascaux Caves in the southwest of France, geology has provided critical and dramatic soil for human creativity. This inspiration has manifested itself in many different ways, and the history of architecture is rife with examples of geologically derived buildings. During the early 20th century, German Expressionist art and architecture was heavily influenced by the natural and often translucent quality of minerals. Architects like Bruno Taut drew and built crystalline forms that would go on to inspire the more restrained Bauhaus movement. Even within the context of contemporary architecture, geology has been a fertile source of inspiration. Architectural practices across the globe leverage the rationality and grounding found in geology to inform a process that is otherwise dominated by computer-driven parametric design. The connection between advanced design technology and beautifully realized natural geological forms ensures that geology will remain a relevant source of architectural inspiration well into the 21st century. The sometimes hidden relationship of geology to the various sub-disciplines of Design, such as Architecture, Interiors, Landscape Architecture, and Historic Preservation, is explored in relation to curriculum and the practice of design. Topics such as materials, form, history, the cultural and physical landscape, natural hazards, and global design enrich and inform curriculum across the college. Commonly, these help define place-based education.

  2. ESPC Common Model Architecture

    DTIC Science & Technology

    2014-09-30

    1 DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. ESPC Common Model Architecture Earth System Modeling...Operational Prediction Capability (NUOPC) was established between NOAA and Navy to develop common software architecture for easy and efficient...development under a common model architecture and other software-related standards in this project. OBJECTIVES NUOPC proposes to accelerate

  3. A Proposed Pattern of Enterprise Architecture

    DTIC Science & Technology

    2013-02-01

    consistent architecture descriptions. UPDM comprises extensions to both OMG’s Unified Modelling Language (UML) and Systems Modelling Language ( SysML ...those who use UML and SysML . These represent significant advancements that enable architecture trade-off analyses, architecture model execution...Language ( SysML ), and thus provides for architectural descriptions that contain a rich set of (formally) connected DoDAF/MoDAF viewpoints expressed

  4. The System of Systems Architecture Feasibility Assessment Model

    DTIC Science & Technology

    2016-06-01

    OF SYSTEMS ARCHITECTURE FEASIBILITY ASSESSMENT MODEL by Stephen E. Gillespie June 2016 Dissertation Supervisor Eugene Paulo THIS PAGE...Dissertation 4. TITLE AND SUBTITLE THE SYSTEM OF SYSTEMS ARCHITECTURE FEASIBILITY ASSESSMENT MODEL 5. FUNDING NUMBERS 6. AUTHOR(S) Stephen E...SoS architecture feasibility assessment model (SoS-AFAM). Together, these extend current model- based systems engineering (MBSE) and SoS engineering

  5. Lost in translation: bridging gaps between design and evidence-based design.

    PubMed

    Watkins, Nicholas; Keller, Amy

    2008-01-01

    The healthcare design community is adopting evidence-based design (EBD) at a startling rate. However, the role of research within an architectural practice is unclear. Reasons for the lack of clarity include multiple connotations of EBD, the tension between a research-driven market and market-driven research, and the competing expectations and standards of design practitioners and researchers. Research as part of EBD should be integral to the design process so that research directly contributes to building projects. Characteristics of a comprehensive programming methodology to close the gap between design and EBD are suggested.

  6. Ontology-Based Retrieval of Spatially Related Objects for Location Based Services

    NASA Astrophysics Data System (ADS)

    Haav, Hele-Mai; Kaljuvee, Aivi; Luts, Martin; Vajakas, Toivo

    Advanced Location Based Service (LBS) applications have to integrate information stored in GIS, information about users' preferences (profile), as well as contextual information and information about the application itself. Ontology engineering provides methods to semantically integrate several data sources. We propose an ontology-driven LBS development framework: the paper describes the architecture of ontologies and their usage for retrieval of spatially related objects relevant to the user. Our main contribution is to enable personalised, ontology-driven LBS by providing a novel approach for defining personalised semantic spatial relationships by means of ontologies. The approach is illustrated by an industrial case study.

  7. Assessment of IT solutions used in the Hungarian income tax microsimulation system

    NASA Astrophysics Data System (ADS)

    Molnar, I.; Hardhienata, S.

    2017-01-01

    This paper focuses on the use of information technology (IT) in diverse microsimulation studies and presents state-of-the-art solutions in the traditional application field of personal income tax simulation. The aim of the paper is to promote solutions which can improve the efficiency and quality of microsimulation model implementation, assess their applicability, and help to shift attention from microsimulation model implementation and data analysis towards experiment design and model use. First, the authors briefly discuss the relevant characteristics of the microsimulation application field and the managerial decision-making problem. After examination of the salient problems, advanced IT solutions, such as meta-databases and service-oriented architecture, are presented. The authors show how selected technologies can be applied to support data-driven, behavior-driven and even agent-based personal income tax microsimulation model development. Finally, examples are presented and references made to the Hungarian Income Tax Simulator (HITS) models and their results. The paper concludes with a summary of the IT assessment and remarks on its application to an Indonesian income tax microsimulation model.

  8. A three dimensional micropatterned tumor model for breast cancer cell migration studies.

    PubMed

    Peela, Nitish; Sam, Feba S; Christenson, Wayne; Truong, Danh; Watson, Adam W; Mouneimne, Ghassan; Ros, Robert; Nikkhah, Mehdi

    2016-03-01

    Breast cancer cell invasion is a highly orchestrated process driven by a myriad of complex microenvironmental stimuli, making it difficult to isolate and assess the effects of biochemical or biophysical cues (i.e. tumor architecture, matrix stiffness) on disease progression. In this regard, physiologically relevant tumor models are becoming instrumental to perform studies of cancer cell invasion within well-controlled conditions. Herein, we explored the use of photocrosslinkable hydrogels and a novel, two-step photolithography technique to microengineer a 3D breast tumor model. The microfabrication process enabled precise localization of cell-encapsulated circular constructs adjacent to a low stiffness matrix. To validate the model, breast cancer cell lines (MDA-MB-231, MCF7) and non-tumorigenic mammary epithelial cells (MCF10A) were embedded separately within the tumor model, all of which maintained high viability throughout the experiments. MDA-MB-231 cells exhibited extensive migratory behavior and invaded the surrounding matrix, whereas MCF7 or MCF10A cells formed clusters that stayed confined within the circular tumor regions. Additionally, real-time cell tracking indicated that the speed and persistence of MDA-MB-231 cells were substantially higher within the surrounding matrix compared to the circular constructs. Z-stack imaging of F-actin/α-tubulin cytoskeletal organization revealed unique 3D protrusions in MDA-MB-231 cells and an abundance of 3D clusters formed by MCF7 and MCF10A cells. Our results indicate that gelatin methacrylate (GelMA) hydrogel, integrated with the two-step photolithography technique, has great promise in the development of 3D tumor models with well-defined architecture and tunable stiffness.

  9. Scale-dependent genetic structure of the Idaho giant salamander (Dicamptodon aterrimus) in stream networks.

    PubMed

    Mullen, Lindy B; Arthur Woods, H; Schwartz, Michael K; Sepulveda, Adam J; Lowe, Winsor H

    2010-03-01

    The network architecture of streams and rivers constrains evolutionary, demographic and ecological processes of freshwater organisms. This consistent architecture also makes stream networks useful for testing general models of population genetic structure and the scaling of gene flow. We examined genetic structure and gene flow in the facultatively paedomorphic Idaho giant salamander, Dicamptodon aterrimus, in stream networks of Idaho and Montana, USA. We used microsatellite data to test population structure models by (i) examining hierarchical partitioning of genetic variation in stream networks; and (ii) testing for genetic isolation by distance along stream corridors vs. overland pathways. Replicated sampling of streams within catchments within three river basins revealed that hierarchical scale had strong effects on genetic structure and gene flow. AMOVA identified significant structure at all hierarchical scales (among streams, among catchments, among basins), but divergence among catchments had the greatest structural influence. Isolation by distance was detected within catchments, and in-stream distance was a strong predictor of genetic divergence. Patterns of genetic divergence suggest that differentiation among streams within catchments was driven by limited migration, consistent with a stream hierarchy model of population structure. However, there was no evidence of migration among catchments within basins, or among basins, indicating that gene flow only counters the effects of genetic drift at smaller scales (within rather than among catchments). These results show the strong influence of stream networks on population structure and genetic divergence of a salamander, with contrasting effects at different hierarchical scales.

  10. 3DVEM Software Modules for Efficient Management of Point Clouds and Photorealistic 3d Models

    NASA Astrophysics Data System (ADS)

    Fabado, S.; Seguí, A. E.; Cabrelles, M.; Navarro, S.; García-De-San-Miguel, D.; Lerma, J. L.

    2013-07-01

    Cultural heritage managers in general, and information users in particular, are not usually accustomed to dealing with high-technology hardware and software. On the contrary, information providers of metric surveys most of the time apply the latest developments in real-life conservation and restoration projects. This paper addresses the software issue of handling and managing either 3D point clouds or (photorealistic) 3D models, bridging the gap between information users and information providers as regards the management of information that users and providers share as a tool for decision-making, analysis, visualization and management. There are not many viewers specifically designed to handle, manage and easily create animations of architectural and/or archaeological 3D objects, monuments and sites, among others. The 3DVEM - 3D Viewer, Editor & Meter software will be introduced to the scientific community, as well as 3DVEM - Live and 3DVEM - Register. The advantages of managing projects with both sets of data, 3D point clouds and photorealistic 3D models, will be introduced. Different visualizations of true documentation projects in the fields of architecture, archaeology and industry will be presented. Emphasis will be placed on the features of the new user-friendly software for managing virtual projects. Furthermore, the ease of creating controlled interactive animations (both walk-through and fly-through) by the user, either on-the-fly or as a traditional movie file, will be demonstrated through 3DVEM - Live.

  11. Position Control of Tendon-Driven Fingers

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E.; Platt, Robert, Jr.; Hargrave, B.; Pementer, Frank

    2011-01-01

    Conventionally, tendon-driven manipulators implement some force control scheme based on tension feedback. This feedback allows the system to ensure that the tendons are maintained taut, with proper levels of tensioning, at all times. Occasionally, whether due to the lack of tension feedback or the inability to implement sufficiently high stiffness, a position control scheme is needed. This work compares three position controllers for tendon-driven manipulators. A new controller is introduced that achieves the best overall performance with regard to speed, accuracy, and transient behavior. To compensate for the lack of tension feedback, the controller nominally maintains the internal tension on the tendons by implementing a two-tier architecture with a range-space constraint. These control laws are validated experimentally on the Robonaut-2 humanoid hand.
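
    The role of the internal-tension term can be illustrated with a standard tendon-space decomposition: satisfy the desired joint torques with a particular solution, then add a nullspace component scaled so every tendon stays above a minimum tension. This is a generic sketch, not the Robonaut-2 control law, and it assumes the nullspace of the tendon map contains a strictly positive direction, as in properly routed tendon networks.

    ```python
    import numpy as np

    def tendon_tensions(R, tau, t_min=1.0):
        """Tensions t with R @ t = tau and t >= t_min elementwise.
        R: (m, n) maps n tendon tensions to m joint torques, n > m."""
        m, n = R.shape
        t_part = np.linalg.pinv(R) @ tau          # minimum-norm torque solution
        _, _, vt = np.linalg.svd(R)
        N = vt[m:].T                              # orthonormal basis of null(R)
        w = N @ (N.T @ np.ones(n))                # internal-tension direction
        if np.all(w > 1e-9):
            alpha = max(np.max((t_min - t_part) / w), 0.0)
            t_part = t_part + alpha * w           # raise slack tendons to t_min
        return t_part

    # Antagonist pair with moment arms +/-1: torque 0.5 with both tendons >= 2
    t = tendon_tensions(np.array([[1.0, -1.0]]), np.array([0.5]), t_min=2.0)
    # -> approximately [2.5, 2.0]; net torque 0.5, both tendons taut
    ```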

  12. Review of game theory applications for situation awareness

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Shen, Dan; Pham, Khanh D.; Chen, Genshe

    2015-05-01

    Game theoretical methods have been used for spectral awareness, space situational awareness (SSA), cyber situational awareness (CSA), and Intelligence, Surveillance, and Reconnaissance situation awareness (ISA). In each of these cases, awareness is supported by sensor estimation for assessment, and the situation is determined from the actions of multiple players. Game theory assumes rational actors in a defined scenario; however, variations in social, cultural and behavioral factors add to the dynamic nature of the context. In a dynamic data-driven application system (DDDAS), modeling must include not only the measurements but also how models are used by different actors with different priorities. In this paper, we highlight the applications of game theory by reviewing the literature to determine the current state of the art and future needs. Future developments would include building towards knowledge awareness with information technology (e.g., data aggregation, access, indexing); multiscale analysis (e.g., space, time, and frequency); and software methods (e.g., architectures, cloud computing, protocols).

  13. Electrostatic actuation and electromechanical switching behavior of one-dimensional nanostructures.

    PubMed

    Subramanian, Arunkumar; Alt, Andreas R; Dong, Lixin; Kratochvil, Bradley E; Bolognesi, Colombo R; Nelson, Bradley J

    2009-10-27

    We report on the electromechanical actuation and switching performance of nanoconstructs involving doubly clamped, individual multiwalled carbon nanotubes. Batch-fabricated, three-state switches with low ON-state voltages (6.7 V average) are demonstrated. A nanoassembly architecture that permits individual probing of one device at a time without crosstalk from other nanotubes, which are originally assembled in parallel, is presented. Experimental investigations into device performance metrics such as hysteresis, repeatability and failure modes are presented. Furthermore, current-driven shell etching is demonstrated as a tool to tune the nanomechanical clamping configuration, stiffness, and actuation voltage of fabricated devices. Computational models, which take into account the nonlinearities induced by stress-stiffening of 1-D nanowires at large deformations, are presented. Apart from providing accurate estimates of device performance, these models provide new insights into the extension of stable travel range in electrostatically actuated nanowire-based constructs as compared to their microscale counterparts.

  14. A system framework of inter-enterprise machining quality control based on fractal theory

    NASA Astrophysics Data System (ADS)

    Zhao, Liping; Qin, Yongtao; Yao, Yiyong; Yan, Peng

    2014-03-01

    In order to meet the quality control requirements of dynamic and complicated product machining processes among enterprises, a system framework of inter-enterprise machining quality control based on fractal theory was proposed. In this system framework, the fractal-specific characteristic of the inter-enterprise machining quality control function was analysed, and the model of inter-enterprise machining quality control was constructed from the nature of fractal structures. Furthermore, the goal-driven strategy of inter-enterprise quality control and the dynamic organisation strategy of inter-enterprise quality improvement were constructed through characteristic analysis of this model. In addition, the architecture of inter-enterprise machining quality control based on fractal theory was established by means of Web services. Finally, a case study of its application was presented. The results showed that the proposed method was feasible, and could provide guidance for quality control and support for product reliability in inter-enterprise machining processes.

  15. ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing

    PubMed Central

    Rusakov, Dmitri A.; Savtchenko, Leonid P.

    2017-01-01

    Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists. This is mainly because such networks usually engage substantial computational resources, the handling of which requires specific programing skills. Here we put forward a newly developed simulation environment ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License (MIT). PMID:28362877

  16. Residual Error Based Anomaly Detection Using Auto-Encoder in SMD Machine Sound.

    PubMed

    Oh, Dong Yul; Yun, Il Dong

    2018-04-24

    Detecting an anomaly or an abnormal situation from given noise is highly useful in an environment where constantly verifying and monitoring a machine is required. As deep learning algorithms are further developed, current studies have focused on this problem. However, there are too many variables to define anomalies, and the human annotation for a large collection of abnormal data labeled at the class-level is very labor-intensive. In this paper, we propose to detect abnormal operation sounds or outliers in a very complex machine along with reducing the data-driven annotation cost. The architecture of the proposed model is based on an auto-encoder, and it uses the residual error, which stands for its reconstruction quality, to identify the anomaly. We assess our model using Surface-Mounted Device (SMD) machine sound, which is very complex, as experimental data, and state-of-the-art performance is successfully achieved for anomaly detection.
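
    A compact sketch of the residual-error scheme: train an auto-encoder on normal-only feature vectors (e.g., spectrogram frames of machine sound), take the reconstruction-error distribution on normal data, and flag inputs whose residual exceeds a threshold. The network shape and the 3-sigma rule are illustrative simplifications of the paper's model.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def fit_residual_detector(X_normal, hidden=(64, 8, 64)):
        """Auto-encoder trained to reconstruct normal-only data."""
        ae = MLPRegressor(hidden_layer_sizes=hidden, max_iter=2000)
        ae.fit(X_normal, X_normal)                  # target equals the input
        residual = np.linalg.norm(ae.predict(X_normal) - X_normal, axis=1)
        threshold = residual.mean() + 3.0 * residual.std()   # 3-sigma rule
        return ae, threshold

    def is_anomaly(ae, threshold, x):
        """Poor reconstruction (large residual) marks an unseen pattern."""
        return float(np.linalg.norm(ae.predict(x[None]) - x)) > threshold
    ```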

  17. The LUVOIR Large Mission Concept

    NASA Astrophysics Data System (ADS)

    O'Meara, John; LUVOIR Science and Technology Definition Team

    2018-01-01

    LUVOIR is one of four large mission concepts for which the NASA Astrophysics Division has commissioned studies by Science and Technology Definition Teams (STDTs) drawn from the astronomical community. We are currently developing two architectures: Architecture A with a 15.1 meter segmented primary mirror, and Architecture B with a 9.2 meter segmented primary mirror. Our focus in this presentation is the Architecture A LUVOIR. LUVOIR will operate at the Sun-Earth L2 point. It will be designed to support a broad range of astrophysics and exoplanet studies. The initial instruments developed for LUVOIR Architecture A include 1) a high-performance optical/NIR coronagraph with imaging and spectroscopic capability, 2) a UV imager and spectrograph with high spectral resolution and multi-object capability, 3) a high-definition wide-field optical/NIR camera, and 4) a high resolution UV/optical spectropolarimeter. LUVOIR will be designed for extreme stability to support unprecedented spatial resolution and coronagraphy. It is intended to be a long-lifetime facility that is both serviceable and upgradable, and primarily driven by guest observer science programs. In this presentation, we will describe the observatory, its instruments, and survey the transformative science LUVOIR can accomplish.

  18. Access-enabling architectures : new hybrid multi-modal spatial prototypes towards resource and social sustainability : USDOT Region V Regional University Transportation Center final report.

    DOT National Transportation Integrated Search

    2016-12-19

    The efforts of this project aim to capture and engage these potentials through a design-research method that incorporates a top down, data-driven approach with bottom-up stakeholder perspectives to develop prototypical scenario-based design solutions...

  19. Global Information Infrastructure: The Birth, Vision, and Architecture.

    ERIC Educational Resources Information Center

    Targowski, Andrew S.

    A new world has arrived in which computer and communications technologies will transform the national and global economies into information-driven economies. This is triggering the Information Revolution, which will have political and societal impacts every bit as profound as those of the Industrial Revolution. The 21st century is viewed as one…

  20. Feeding People's Curiosity: Leveraging the Cloud for Automatic Dissemination of Mars Images

    NASA Technical Reports Server (NTRS)

    Knight, David; Powell, Mark

    2013-01-01

    Smartphones and tablets have made wireless computing ubiquitous, and users expect instant, on-demand access to information. The Mars Science Laboratory (MSL) operations software suite, MSL InterfaCE (MSLICE), employs a different back-end image processing architecture compared to that of the Mars Exploration Rovers (MER) in order to better satisfy modern consumer-driven usage patterns and to offer greater server-side flexibility. Cloud services are a centerpiece of the server-side architecture that allows new image data to be delivered automatically to both scientists using MSLICE and the general public through the MSL website (http://mars.jpl.nasa.gov/msl/).

  1. Performance evaluation of OpenFOAM on many-core architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brzobohatý, Tomáš; Říha, Lubomír; Karásek, Tomáš, E-mail: tomas.karasek@vsb.cz

    In this article, the application of Open Source Field Operation and Manipulation (OpenFOAM) C++ libraries to solving engineering problems on many-core architectures is presented. The objective of this article is to present the scalability of OpenFOAM on parallel platforms solving real engineering problems of fluid dynamics. Scalability tests of OpenFOAM are performed using various hardware and different implementations of the standard PCG and PBiCG Krylov iterative methods. Speed-ups of various implementations of the linear solvers using GPU and MIC accelerators are presented in this paper. Numerical experiments on 3D lid-driven cavity flow for several cases with various numbers of cells are presented.
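
    For reference, the PCG algorithm the benchmarks exercise is short enough to state in full. Below is a dense NumPy sketch with a Jacobi (diagonal) preconditioner; OpenFOAM's production solver works on sparse matrices and, in the cited experiments, on GPU/MIC back ends.

    ```python
    import numpy as np

    def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=500):
        """Preconditioned conjugate gradients for symmetric positive-definite A;
        M_inv_diag holds the inverse diagonal (Jacobi) preconditioner."""
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv_diag * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv_diag * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Example usage for a SPD matrix A: x = pcg(A, b, 1.0 / np.diag(A))
    ```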

  2. Integration of Distributed Services and Hybrid Models Based on Process Choreography to Predict and Detect Type 2 Diabetes

    PubMed Central

    Bayo-Monton, Jose-Luis; Argente-Pla, María; Fernandez-Llatas, Carlos; Merino-Torres, Juan Francisco

    2017-01-01

    Life expectancy is increasing, and so are the years that patients have to live with chronic diseases and co-morbidities. Type 2 diabetes is one of the most prevalent chronic diseases, specifically linked to being overweight and ages over sixty. Recent studies have demonstrated the effectiveness of new strategies to delay and even prevent the onset of type 2 diabetes by a combination of an active and healthy lifestyle in cohorts of mid- to high-risk subjects. Prospective research has been driven on large groups of the population to build risk scores that aim to obtain a rule for the classification of patients according to the odds of developing the disease. Currently, there are more than two hundred models and risk scores for doing this, but only a few have been properly evaluated in external groups and integrated into a clinical application for decision support. In this paper, we present a novel system architecture based on service choreography and hybrid modeling, which enables a distributed integration of clinical databases, statistical and mathematical engines and web interfaces to be deployed in a clinical setting. The system was assessed during an eight-week continuous period with eight endocrinologists of a hospital who evaluated up to 8080 patients with seven different type 2 diabetes risk models implemented in two mathematical engines. Throughput was assessed as a matter of technical key performance indicators, confirming the reliability and efficiency of the proposed architecture to integrate hybrid artificial intelligence tools into daily clinical routine to identify high-risk subjects. PMID:29286314

  3. Integration of Distributed Services and Hybrid Models Based on Process Choreography to Predict and Detect Type 2 Diabetes.

    PubMed

    Martinez-Millana, Antonio; Bayo-Monton, Jose-Luis; Argente-Pla, María; Fernandez-Llatas, Carlos; Merino-Torres, Juan Francisco; Traver-Salcedo, Vicente

    2017-12-29

    Life expectancy is increasing, and so are the years that patients have to live with chronic diseases and co-morbidities. Type 2 diabetes is one of the most prevalent chronic diseases, specifically linked to being overweight and ages over sixty. Recent studies have demonstrated the effectiveness of new strategies to delay and even prevent the onset of type 2 diabetes by a combination of an active and healthy lifestyle in cohorts of mid- to high-risk subjects. Prospective research has been driven on large groups of the population to build risk scores that aim to obtain a rule for the classification of patients according to the odds of developing the disease. Currently, there are more than two hundred models and risk scores for doing this, but only a few have been properly evaluated in external groups and integrated into a clinical application for decision support. In this paper, we present a novel system architecture based on service choreography and hybrid modeling, which enables a distributed integration of clinical databases, statistical and mathematical engines and web interfaces to be deployed in a clinical setting. The system was assessed during an eight-week continuous period with eight endocrinologists of a hospital who evaluated up to 8080 patients with seven different type 2 diabetes risk models implemented in two mathematical engines. Throughput was assessed as a matter of technical key performance indicators, confirming the reliability and efficiency of the proposed architecture to integrate hybrid artificial intelligence tools into daily clinical routine to identify high-risk subjects.
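
    A minimal sketch of the fan-out step such a choreography performs: every risk engine exposes the same evaluate contract, and a coordinator dispatches a patient record to all registered engines concurrently, collecting each output for the clinician's interface. The engine names, fields, and toy score are hypothetical; they are not the validated risk models deployed in the study.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    class RiskEngine:
        """Common contract each statistical/mathematical engine exposes."""
        name = "base"
        def evaluate(self, patient: dict) -> float:
            raise NotImplementedError

    class ToyBmiAgeScore(RiskEngine):
        name = "toy_bmi_age"        # illustrative only, not a clinical model
        def evaluate(self, patient):
            score = 2.0 if patient["age"] >= 45 else 0.0
            score += 3.0 if patient["bmi"] >= 30 else 0.0
            return score

    def choreograph(patient, engines):
        """Fan one patient record out to all registered engines concurrently."""
        with ThreadPoolExecutor() as pool:
            futures = {e.name: pool.submit(e.evaluate, patient) for e in engines}
        return {name: future.result() for name, future in futures.items()}

    results = choreograph({"age": 61, "bmi": 31.5}, [ToyBmiAgeScore()])
    # -> {'toy_bmi_age': 5.0}
    ```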

  4. A Neural Dynamic Architecture for Reaching and Grasping Integrates Perception and Movement Generation and Enables On-Line Updating.

    PubMed

    Knips, Guido; Zibner, Stephan K U; Reimann, Hendrik; Schöner, Gregor

    2017-01-01

    Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. Any time during movement preparation and execution, human reaching movements are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population-level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization that effectively switches the neural controllers from one phase of the action to the next. Trajectory formation itself is driven by a dynamical systems version of the potential field approach. We highlight the emergent capacity for online updating by showing that a shift or rotation of the object during the reaching phase leads to the online adaptation of the movement plan and successful completion of the grasp.
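
    The population-level dynamics underlying such an architecture can be conveyed by a one-dimensional Amari-style neural field, tau * du/dt = -u + h + S + w * sigma(u): localized input S drives a self-stabilized activation peak of the kind used for attentional selection. The parameters below are illustrative, not those of the robot model.

    ```python
    import numpy as np

    def simulate_field(S, steps=300, dt=1.0, tau=10.0, h=-5.0,
                       c_exc=10.0, sig=4.0, c_inh=15.0):
        """Euler integration of a 1-D neural field:
        tau * du/dt = -u + h + S + w @ sigmoid(u), lateral interaction w."""
        n = len(S)
        x = np.arange(n)
        d = np.abs(x[:, None] - x[None, :])
        d = np.minimum(d, n - d)                        # circular distance
        gauss = np.exp(-d**2 / (2 * sig**2)) / (sig * np.sqrt(2 * np.pi))
        w = c_exc * gauss - c_inh / n                   # local excitation, global inhibition
        u = np.full(n, float(h))
        for _ in range(steps):
            f = 1.0 / (1.0 + np.exp(-u))                # sigmoidal firing rate
            u += (dt / tau) * (-u + h + S + w @ f)
        return u

    S = np.zeros(100); S[48:53] = 8.0                   # localized target input
    u = simulate_field(S)                               # activation peak near x = 50
    ```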

  5. A Neural Dynamic Architecture for Reaching and Grasping Integrates Perception and Movement Generation and Enables On-Line Updating

    PubMed Central

    Knips, Guido; Zibner, Stephan K. U.; Reimann, Hendrik; Schöner, Gregor

    2017-01-01

    Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. Any time during movement preparation and execution, human reaching movements are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population-level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization that effectively switches the neural controllers from one phase of the action to the next. Trajectory formation itself is driven by a dynamical systems version of the potential field approach. We highlight the emergent capacity for online updating by showing that a shift or rotation of the object during the reaching phase leads to the online adaptation of the movement plan and successful completion of the grasp. PMID:28303100

  6. A Reference Architecture for Space Information Management

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Crichton, Daniel J.; Hughes, J. Steven; Ramirez, Paul M.; Berrios, Daniel C.

    2006-01-01

    We describe a reference architecture for space information management systems that elegantly overcomes the rigid design of common information systems in many domains. The reference architecture consists of a set of flexible, reusable, independent models and software components that function in unison, but remain separately managed entities. The main guiding principle of the reference architecture is to separate the various models of information (e.g., data, metadata, etc.) from implemented system code, allowing each to evolve independently. System modularity, systems interoperability, and dynamic evolution of information system components are the primary benefits of the design of the architecture. The architecture requires the use of information models that are substantially more advanced than those used by the vast majority of information systems. These models are more expressive and can be more easily modularized, distributed and maintained than simpler models, e.g., configuration files and data dictionaries. Our current work focuses on formalizing the architecture within a CCSDS Green Book and evaluating the architecture within the context of the C3I initiative.

  7. Framework for the Parametric System Modeling of Space Exploration Architectures

    NASA Technical Reports Server (NTRS)

    Komar, David R.; Hoffman, Jim; Olds, Aaron D.; Seal, Mike D., II

    2008-01-01

    This paper presents a methodology for performing architecture definition and assessment prior to, or during, program formulation that utilizes a centralized, integrated architecture modeling framework operated by a small, core team of general space architects. This framework, known as the Exploration Architecture Model for IN-space and Earth-to-orbit (EXAMINE), enables: 1) a significantly larger fraction of an architecture trade space to be assessed in a given study timeframe; and 2) the complex element-to-element and element-to-system relationships to be quantitatively explored earlier in the design process. Discussion of the methodology's advantages and disadvantages with respect to the distributed study team approach typically used within NASA to perform architecture studies is presented, along with an overview of EXAMINE's functional components and tools. An example Mars transportation system architecture model is used to demonstrate EXAMINE's capabilities in this paper. However, the framework is generally applicable for exploration architecture modeling with destinations to any celestial body in the solar system.

  8. Insights into secondary growth in perennial plants: its unequal spatial and temporal dynamics in the apple (Malus domestica) is driven by architectural position and fruit load

    PubMed Central

    Lauri, P. É.; Kelner, J. J.; Trottier, C.; Costes, E.

    2010-01-01

    Background and Aims: Secondary growth is a main physiological sink. However, the hierarchy between the processes which compete with secondary growth is still a matter of debate, especially on fruit trees where fruit weight dramatically increases with time. It was hypothesized that tree architecture, here mediated by branch age, is likely to have a major effect on the dynamics of secondary growth within a growing season. Methods: Three variables were monitored on 6-year-old ‘Golden Delicious’ apple trees from flowering time to harvest: primary shoot growth, fruit volume, and cross-section area of branch portions of consecutive ages. Analyses were done through an ANOVA-type analysis in a linear mixed model framework. Key Results: Secondary growth exhibited three consecutive phases characterized by unequal relative area increment over the season. The age of the branch had the strongest effect, with the highest and lowest relative area increment for the current-year shoots and the trunk, respectively. The growth phase had a lower effect, with a shift of secondary growth through the season from leafy shoots towards older branch portions. Eventually, fruit load had an effect on secondary growth mainly after primary growth had ceased. Conclusions: The results support the idea that relationships between production of photosynthates and allocation depend on both primary growth and branch architectural position. Fruit load mainly interacted with secondary growth later in the season, especially on old branch portions. PMID:20228088

  9. Insights into secondary growth in perennial plants: its unequal spatial and temporal dynamics in the apple (Malus domestica) is driven by architectural position and fruit load.

    PubMed

    Lauri, P E; Kelner, J J; Trottier, C; Costes, E

    2010-04-01

    Secondary growth is a main physiological sink. However, the hierarchy between the processes which compete with secondary growth is still a matter of debate, especially on fruit trees where fruit weight dramatically increases with time. It was hypothesized that tree architecture, here mediated by branch age, is likely to have a major effect on the dynamics of secondary growth within a growing season. Three variables were monitored on 6-year-old 'Golden Delicious' apple trees from flowering time to harvest: primary shoot growth, fruit volume, and cross-section area of branch portions of consecutive ages. Analyses were done through an ANOVA-type analysis in a linear mixed model framework. Secondary growth exhibited three consecutive phases characterized by unequal relative area increment over the season. The age of the branch had the strongest effect, with the highest and lowest relative area increment for the current-year shoots and the trunk, respectively. The growth phase had a lower effect, with a shift of secondary growth through the season from leafy shoots towards older branch portions. Eventually, fruit load had an effect on secondary growth mainly after primary growth had ceased. The results support the idea that relationships between production of photosynthates and allocation depend on both primary growth and branch architectural position. Fruit load mainly interacted with secondary growth later in the season, especially on old branch portions.

  10. Orientation-selective aVLSI spiking neurons.

    PubMed

    Liu, S C; Kramer, J; Indiveri, G; Delbrück, T; Burg, T; Douglas, R

    2001-01-01

    We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.
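
    The event-driven flavor of such a system can be sketched in software: address-events arrive as (time, neuron) pairs, each neuron integrates and leaks between events, and a soft winner-take-all is approximated by global inhibition on each output spike. The parameters and the inhibition rule are illustrative, not the chip's circuit values.

    ```python
    import numpy as np

    def lif_wta(events, n, w_exc=0.4, w_inh=0.5, v_th=1.0, leak=0.98):
        """events: (t, neuron) address-events; returns output spike events."""
        v = np.zeros(n)
        out = []
        t_prev = 0.0
        for t, i in sorted(events):
            v *= leak ** (t - t_prev)        # exponential leak between events
            t_prev = t
            v[i] += w_exc                    # integrate the incoming event
            j = int(np.argmax(v))
            if v[j] >= v_th:
                out.append((t, j))           # winner emits an output event
                v -= w_inh * v[j]            # soft global inhibition
                v[j] = 0.0                   # reset the winner
                v = np.maximum(v, 0.0)
        return out

    spikes = lif_wta([(0, 1), (1, 1), (2, 1), (2.5, 0)], n=4)  # -> [(2, 1)]
    ```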

  11. Software architecture of INO340 telescope control system

    NASA Astrophysics Data System (ADS)

    Ravanmehr, Reza; Khosroshahi, Habib

    2016-08-01

    The software architecture plays an important role in the distributed control systems of astronomical projects, because many subsystems and components must work together in a consistent and reliable way. We have utilized a customized architecture design approach based on the "4+1 view model" to design the INOCS software architecture. In this paper, after reviewing the top-level INOCS architecture, we present the software architecture model of INOCS inspired by the "4+1 model"; for this purpose we provide logical, process, development, physical, and scenario views of our architecture using different UML diagrams and other illustrative visual charts. Each view presents the INOCS software architecture from a different perspective. We finish the paper with the science data operation of INO340 and concluding remarks.

  12. Self-organizing actin patterns shape membrane architecture but not cell mechanics

    NASA Astrophysics Data System (ADS)

    Fritzsche, M.; Li, D.; Colin-York, H.; Chang, V. T.; Moeendarbary, E.; Felce, J. H.; Sezgin, E.; Charras, G.; Betzig, E.; Eggeling, C.

    2017-02-01

    Cell-free studies have demonstrated how collective action of actin-associated proteins can organize actin filaments into dynamic patterns, such as vortices, asters and stars. Using complementary microscopic techniques, we here show evidence of such self-organization of the actin cortex in living HeLa cells. During cell adhesion, an active multistage process naturally leads to pattern transitions from actin vortices over stars into asters. This process is primarily driven by Arp2/3 complex nucleation, but not by myosin motors, which is in contrast to what has been theoretically predicted and observed in vitro. Concomitant measurements of mechanics and plasma membrane fluidity demonstrate that changes in actin patterning alter membrane architecture but occur functionally independent of macroscopic cortex elasticity. Consequently, tuning the activity of the Arp2/3 complex to alter filament assembly may thus be a mechanism allowing cells to adjust their membrane architecture without affecting their macroscopic mechanical properties.

  13. Self-organizing actin patterns shape membrane architecture but not cell mechanics

    PubMed Central

    Fritzsche, M.; Li, D.; Colin-York, H.; Chang, V. T.; Moeendarbary, E.; Felce, J. H.; Sezgin, E.; Charras, G.; Betzig, E.; Eggeling, C.

    2017-01-01

    Cell-free studies have demonstrated how collective action of actin-associated proteins can organize actin filaments into dynamic patterns, such as vortices, asters and stars. Using complementary microscopic techniques, we here show evidence of such self-organization of the actin cortex in living HeLa cells. During cell adhesion, an active multistage process naturally leads to pattern transitions from actin vortices over stars into asters. This process is primarily driven by Arp2/3 complex nucleation, but not by myosin motors, which is in contrast to what has been theoretically predicted and observed in vitro. Concomitant measurements of mechanics and plasma membrane fluidity demonstrate that changes in actin patterning alter membrane architecture but occur functionally independent of macroscopic cortex elasticity. Consequently, tuning the activity of the Arp2/3 complex to alter filament assembly may thus be a mechanism allowing cells to adjust their membrane architecture without affecting their macroscopic mechanical properties. PMID:28194011

  14. Phonon Spectrum Engineering in Rolled-up Micro- and Nano-Architectures

    DOE PAGES

    Fomin, Vladimir M.; Balandin, Alexander A.

    2015-10-10

    We report on a possibility of efficient engineering of the acoustic phonon energy spectrum in multishell tubular structures produced by a novel high-tech method of self-organization of micro- and nano-architectures. The strain-driven roll-up procedure paved the way for novel classes of metamaterials such as single semiconductor radial micro- and nano-crystals and multi-layer spiral micro- and nano-superlattices. The acoustic phonon dispersion is determined by solving the equations of elastodynamics for InAs and GaAs material systems. It is shown that the number of shells is an important control parameter of the phonon dispersion together with the structure dimensions and acoustic impedance mismatch between the superlattice layers. The obtained results suggest that rolled-up nano-architectures are promising for thermoelectric applications owing to a possibility of significant reduction of the thermal conductivity without degradation of the electronic transport.

  15. Shift changes, updates, and the on-call architecture in space shuttle mission control.

    PubMed

    Patterson, E S; Woods, D D

    2001-01-01

    In domains such as nuclear power, industrial process control, and space shuttle mission control, there is increased interest in reducing personnel during nominal operations. An essential element in maintaining safe operations in high risk environments with this 'on-call' organizational architecture is to understand how to bring called-in practitioners up to speed quickly during escalating situations. Targeted field observations were conducted to investigate what it means to update a supervisory controller on the status of a continuous, anomaly-driven process in a complex, distributed environment. Sixteen shift changes, or handovers, at the NASA Johnson Space Center were observed during the STS-76 Space Shuttle mission. The findings from this observational study highlight the importance of prior knowledge in the updates and demonstrate how missing updates can leave flight controllers vulnerable to being unprepared. Implications for mitigating risk in the transition to 'on-call' architectures are discussed.

  16. Shift changes, updates, and the on-call architecture in space shuttle mission control

    NASA Technical Reports Server (NTRS)

    Patterson, E. S.; Woods, D. D.

    2001-01-01

    In domains such as nuclear power, industrial process control, and space shuttle mission control, there is increased interest in reducing personnel during nominal operations. An essential element in maintaining safe operations in high risk environments with this 'on-call' organizational architecture is to understand how to bring called-in practitioners up to speed quickly during escalating situations. Targeted field observations were conducted to investigate what it means to update a supervisory controller on the status of a continuous, anomaly-driven process in a complex, distributed environment. Sixteen shift changes, or handovers, at the NASA Johnson Space Center were observed during the STS-76 Space Shuttle mission. The findings from this observational study highlight the importance of prior knowledge in the updates and demonstrate how missing updates can leave flight controllers vulnerable to being unprepared. Implications for mitigating risk in the transition to 'on-call' architectures are discussed.

  17. A RESTful Service Oriented Architecture for Science Data Processing

    NASA Astrophysics Data System (ADS)

    Duggan, B.; Tilmes, C.; Durbin, P.; Masuoka, E.

    2012-12-01

    The Atmospheric Composition Processing System is an implementation of a RESTful Service Oriented Architecture which handles incoming data from the Ozone Monitoring Instrument and the Ozone Mapping and Profiler Suite aboard the Aura and NPP spacecraft, respectively. The system has been built entirely from open-source components, such as Postgres, Perl, and SQLite, and has leveraged the vast resources of the Comprehensive Perl Archive Network (CPAN). The modular design of the system also allows many of the components to be easily released, integrated into the CPAN ecosystem, and reused independently. At minimal expense, the CPAN infrastructure and community provide peer review, feedback, and continuous testing in a wide variety of environments and architectures. A well-defined set of conventions also facilitates dependency management, packaging, and distribution of code. Test-driven development also provides a way to ensure stability despite a continuously changing base of dependencies.
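
    The RESTful style leaned on above amounts to resources addressed by URLs and retrieved with plain HTTP verbs. A minimal sketch using Python's standard library (not the Perl stack the system is actually built on; the product names, paths, and fields below are invented for illustration):

      # Illustrative only: a resource-oriented endpoint in the spirit of a
      # RESTful science-data service. GET /products/<id> returns JSON metadata.
      import json
      from http.server import BaseHTTPRequestHandler, HTTPServer

      PRODUCTS = {"OMI-L2-O3": {"instrument": "OMI", "level": "L2", "status": "ready"}}

      class Handler(BaseHTTPRequestHandler):
          def do_GET(self):
              # e.g. GET /products/OMI-L2-O3
              parts = self.path.strip("/").split("/")
              if len(parts) == 2 and parts[0] == "products" and parts[1] in PRODUCTS:
                  body = json.dumps(PRODUCTS[parts[1]]).encode()
                  self.send_response(200)
                  self.send_header("Content-Type", "application/json")
                  self.end_headers()
                  self.wfile.write(body)
              else:
                  self.send_response(404)
                  self.end_headers()

      if __name__ == "__main__":
          HTTPServer(("localhost", 8080), Handler).serve_forever()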

  18. Phonon Spectrum Engineering in Rolled-up Micro- and Nano-Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fomin, Vladimir M.; Balandin, Alexander A.

    We report on a possibility of efficient engineering of the acoustic phonon energy spectrum in multishell tubular structures produced by a novel high-tech method of self-organization of micro- and nano-architectures. The strain-driven roll-up procedure paved the way for novel classes of metamaterials such as single semiconductor radial micro- and nano-crystals and multi-layer spiral micro- and nano-superlattices. The acoustic phonon dispersion is determined by solving the equations of elastodynamics for InAs and GaAs material systems. It is shown that the number of shells is an important control parameter of the phonon dispersion together with the structure dimensions and acoustic impedance mismatch between the superlattice layers. The obtained results suggest that rolled-up nano-architectures are promising for thermoelectric applications owing to a possibility of significant reduction of the thermal conductivity without degradation of the electronic transport.

  19. Ion trap architectures and new directions

    NASA Astrophysics Data System (ADS)

    Siverns, James D.; Quraishi, Qudsia

    2017-12-01

    Trapped ion technology has seen advances in performance, robustness and versatility over the last decade. With increasing numbers of trapped ion groups worldwide, a myriad of trap architectures are currently in use. Applications of trapped ions include: quantum simulation, computing and networking, time standards and fundamental studies in quantum dynamics. Design of such traps is driven by these various research aims, but some universally desirable properties have led to the development of ion trap foundries. Additionally, the excellent control achievable with trapped ions and the ability to do photonic readout have allowed progress on quantum networking using entanglement between remotely situated ion-based nodes. Here, we present a selection of trap architectures currently in use by the community and present their most salient characteristics, identifying features particularly suited for quantum networking. We also discuss our own in-house research efforts aimed at long-distance trapped ion networking.

  20. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    PubMed

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.
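
    The component-pruning step can be illustrated in miniature. The paper estimates each expert's worth from the MCMC posterior over the architecture's parameters; the toy sketch below substitutes a far cruder plug-in statistic, the average gating responsibility, purely to show the mechanics (the data, gating weights, and the 5% threshold are all invented):

      # Toy illustration of pruning under-used experts in a mixture-of-experts
      # gate. The paper's worth estimates come from an MCMC posterior; the
      # average responsibility used here is only a rough stand-in.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 2))            # toy inputs
      W = rng.normal(size=(2, 4))              # gating weights for 4 experts

      def gate(X, W):
          scores = X @ W
          scores -= scores.max(axis=1, keepdims=True)   # numerical stability
          p = np.exp(scores)
          return p / p.sum(axis=1, keepdims=True)       # softmax responsibilities

      usage = gate(X, W).mean(axis=0)          # average responsibility per expert
      keep = usage > 0.05                      # prune experts used < 5% of the time
      print("usage:", np.round(usage, 3), "-> keep experts", np.flatnonzero(keep))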

  1. Detection and recognition of mechanical, digging and vehicle signals in the optical fiber pre-warning system

    NASA Astrophysics Data System (ADS)

    Tian, Qing; Yang, Dan; Zhang, Yuan; Qu, Hongquan

    2018-04-01

    This paper presents a detection and recognition method to locate and identify harmful intrusions in the optical fiber pre-warning system (OFPS). Inspired by visual attention architecture (VAA), the process flow is divided into two parts, i.e., a data-driven process and a task-driven process. First, the data-driven process takes all the measurements collected by the system as input signals, which are handled by the detection method to locate harmful intrusions in both the spatial and time domains. Then, these detected intrusion signals are taken over by the task-driven process. Specifically, we extract the pitch period (PP) and duty cycle (DC) of the intrusion signals to identify mechanical and manual digging (MD) intrusions, respectively. For passing vehicle (PV) intrusions, their strong low-frequency component serves as a good feature. In general, since harmful intrusion signals account for only a small part of the whole measurement set, the data-driven process considerably reduces the amount of input data for the subsequent task-driven process. Furthermore, the task-driven process handles harmful intrusions in order of severity, which provides a priority mechanism for the system as well as targeted processing for different harmful intrusions. Finally, real experiments are performed to validate the effectiveness of this method.
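
    As a sketch of the two features named above (the sampling rate, threshold, and estimator details are invented; the paper's exact algorithms may differ), the pitch period can be estimated from the signal's autocorrelation and the duty cycle from simple thresholding:

      # Hedged sketch of pitch-period (PP) and duty-cycle (DC) feature
      # extraction for an intrusion-like periodic signal.
      import numpy as np

      def pitch_period(x, fs):
          x = x - x.mean()
          ac = np.correlate(x, x, mode="full")[len(x) - 1:]
          # skip the main lobe around lag 0: search after the first negative dip
          neg = np.flatnonzero(ac < 0)
          if len(neg) == 0:
              return None
          lag = neg[0] + np.argmax(ac[neg[0]:])
          return lag / fs

      def duty_cycle(x, threshold):
          return np.mean(np.abs(x) > threshold)

      fs = 1000.0
      t = np.arange(0, 1, 1 / fs)
      x = np.sign(np.sin(2 * np.pi * 5 * t))   # 5 Hz square-like "digging" signal
      print("pitch period ~", pitch_period(x, fs), "s")   # expect ~0.2 s
      print("duty cycle   ~", duty_cycle(x, 0.5))         # expect ~0.5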

  2. Deficiency and Also Transgenic Overexpression of Timp-3 Both Lead to Compromised Bone Mass and Architecture In Vivo

    PubMed Central

    Hopkinson, Mark; Poulet, Blandine; Pollard, Andrea S.; Shefelbine, Sandra J.; Chang, Yu-Mei; Francis-West, Philippa; Bou-Gharios, George; Pitsillides, Andrew A.

    2016-01-01

    Tissue inhibitor of metalloproteinases-3 (TIMP-3) regulates the extracellular matrix via its inhibition of matrix metalloproteinases and membrane-bound sheddases. Timp-3 is expressed at multiple sites of extensive tissue remodelling. This extends to bone, where its role, however, remains largely unresolved. In this study, we have used Micro-CT to assess bone mass and architecture, and histological and histochemical evaluation to characterise the skeletal phenotype of Timp-3 KO mice, and have complemented this by also examining similar indices in mice harbouring a Timp-3 transgene driven by a Col-2a promoter to specifically target overexpression to chondrocytes. Our data show that Timp-3 deficiency compromises tibial bone mass and structure in both cortical and trabecular compartments, with corresponding increases in osteoclasts. Transgenic overexpression also generates defects in tibial structure, predominantly in the cortical bone along the entire shaft, without significant increases in osteoclasts. These alterations in cortical mass significantly compromise predicted tibial load-bearing resistance to torsion in both genotypes. Neither Timp-3 KO nor transgenic mouse growth plates are significantly affected. The impact of Timp-3 deficiency and of transgenic overexpression extends to produce modification in craniofacial bones of both endochondral and intramembranous origins. These data indicate that the levels of Timp-3 are crucial in the attainment of functionally-appropriate bone mass and architecture and that this arises from chondrogenic and osteogenic lineages. PMID:27519049

  3. Role of System Architecture in Developing New Drafting Tools

    NASA Astrophysics Data System (ADS)

    Sorguç, Arzu Gönenç

    In this study, the impact of information technologies on the architectural design process is discussed. In this discussion, first the differences/nuances between the concepts of software engineering and system architecture are clarified. Then, the design process in engineering and the design process in architecture are compared, considering 3-D models as the center of the design process over which the other disciplines involve themselves in the design. It is pointed out that in many high-end engineering applications, 3-D solid models, and consequently the digital mock-up concept, have become common practice. Architecture, however, as one of the important customers of the CAD systems employing these tools, has not started to use these 3-D models. It is shown that the reason for this time lag between architecture and engineering lies in the tradition of design attitude. Therefore, a new design scheme, a meta-model, is proposed to develop an integrated design model centered on the 3-D model. A system architecture is also proposed to achieve the transformation of the architectural design process by replacing 2-D thinking with 3-D thinking. In the proposed system architecture, the CAD systems are included and adapted for 3-D architectural design in order to provide interfaces for the integration of all possible disciplines into the design process. It is also shown that such a change will allow elaboration of the intelligent or smart building concept in the future.

  4. A mathematical model of cancer stem cell driven tumor initiation: implications of niche size and loss of homeostatic regulatory mechanisms.

    PubMed

    Gentry, Sara N; Jackson, Trachette L

    2013-01-01

    Hierarchically organized tissue structures, with stem cell driven cell differentiation, are critical to the homeostatic maintenance of most tissues, and this underlying cellular architecture is potentially a critical player in the development of many cancers. Here, we develop a mathematical model of mutation acquisition to investigate how deregulation of the mechanisms preserving stem cell homeostasis contributes to tumor initiation. A novel feature of the model is the inclusion of both extrinsic and intrinsic chemical signaling and interaction with the niche to control stem cell self-renewal. We use the model to simulate the effects of a variety of types and sequences of mutations, and then compare and contrast all mutation pathways in order to determine which ones generate cancer cells fastest. The model predicts that the sequence in which mutations occur significantly affects the pace of tumorigenesis. In addition, tumor composition varies for different mutation pathways, so that some sequences generate tumors that are dominated by cancerous cells with all possible mutations, while others are primarily comprised of cells that more closely resemble normal cells with only one or two mutations. We are also able to show that, under certain circumstances, healthy stem cells diminish due to displacement by mutated cells that have a competitive advantage in the niche. Finally, in the event that all homeostatic regulation is lost, exponential growth of the cancer population occurs in addition to the depletion of normal cells. This model helps to advance our understanding of how mutation acquisition affects mechanisms that influence cell-fate decisions and leads to the initiation of cancers.
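
    These are not the authors' equations, but the flavor of such models can be shown with a toy two-compartment system in which niche feedback throttles stem-cell self-renewal; weakening the feedback (a stand-in for a homeostasis-breaking mutation) inflates the stem pool. All rates and parameters below are invented:

      # Toy stem/differentiated compartment model (illustrative only).
      # Niche feedback lowers the self-renewal probability p as the stem
      # pool S grows; a "mutation" that weakens feedback shifts the
      # equilibrium stem population upward by orders of magnitude.
      def simulate(feedback, steps=3000, dt=0.01):
          S, D = 10.0, 100.0                        # stem and differentiated cells
          for _ in range(steps):
              p = 0.4 + 0.4 / (1.0 + feedback * S)  # self-renewal probability
              dS = (2 * p - 1) * 0.5 * S            # net stem growth, division rate 0.5
              dD = 2 * (1 - p) * 0.5 * S - 0.05 * D # differentiation minus death
              S += dt * dS
              D += dt * dD
          return round(S, 1), round(D, 1)

      print("healthy feedback :", simulate(feedback=0.1))    # settles near S ~ 30
      print("weakened feedback:", simulate(feedback=0.001))  # escapes toward S ~ 3000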

  5. A Network Architecture for Data-Driven Systems

    DTIC Science & Technology

    1985-07-01

    [OCR fragments of the report's table of contents and body: "Elaboration", "Real-Time Operating System", "Secondary Memory Utilization", "Data Flow Graphical ..."; the body notes that an operating system needs to be designed exclusively for real-time use, gives a flight simulator example, and describes a ... Assessment (SDWA) module that is tightly coupled to the real-time operating system and must determine the sensitivity to ...]

  6. Architecture-led Requirements and Safety Analysis of an Aircraft Survivability Situational Awareness System

    DTIC Science & Technology

    2015-05-01

    [OCR fragments: prioritization of the utility tree leaves, driven by mission goals, helps the user ensure that critical requirements are well specified. Reference fragments: "... Methods: State of the Art and Future Directions", ACM Computing Surveys, 1996; Laitenberger, Oliver, "A Survey of Software Inspection Technologies", Handbook on Software Engineering and Knowledge Engineering, 2002.]

  7. Design of a Knowledge Driven HIS

    PubMed Central

    Pryor, T. Allan; Clayton, Paul D.; Haug, Peter J.; Wigertz, Ove

    1987-01-01

    The design of the software architecture for a knowledge driven HIS is presented. In our design, the frame has been used as the basic unit of knowledge representation. The structure of the frame is designed to be sufficiently universal to contain the knowledge required to implement not only expert systems, but almost all traditional HIS functions, including ADT, order entry, and results review. The design incorporates a two-level format for the knowledge. The first level, stored as ASCII records, is used to maintain the knowledge base, while the second level, converted by special knowledge compilers to standard computer languages, is used for efficient implementation of the knowledge applications.
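
    A toy rendering of that two-level scheme (the frame syntax, field names, and clinical rule below are all invented): a human-maintainable ASCII frame is "compiled" into an executable check.

      # Illustrative two-level knowledge format: maintainable ASCII frame in,
      # compiled executable rule out. Syntax and fields are hypothetical.
      FRAME_SOURCE = """
      name: high_glucose_alert
      if: glucose > 180
      then: flag Hyperglycemia
      """

      def compile_frame(src):
          fields = dict(line.split(":", 1) for line in src.strip().splitlines())
          var, op, value = fields["if"].split()
          message = fields["then"].strip()

          def rule(record):
              # the compiled, efficient form of the maintainable ASCII frame
              if op == ">" and record.get(var, 0) > float(value):
                  return message
              return None

          return fields["name"].strip(), rule

      name, rule = compile_frame(FRAME_SOURCE)
      print(name, "->", rule({"glucose": 240}))   # -> flag Hyperglycemia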

  8. Advanced optical manufacturing digital integrated system

    NASA Astrophysics Data System (ADS)

    Tao, Yizheng; Li, Xinglan; Li, Wei; Tang, Dingyong

    2012-10-01

    The development of advanced optical manufacturing technology must keep pace with modern science and technology. To address the problems of low efficiency, low yield of finished products, and poor repeatability and consistency for large-size, high-precision optical components, this paper applies a business-driven approach and the Rational Unified Process to study the advanced optical manufacturing process flow and the requirements of an Advanced Optical Manufacturing Digital Integrated System, and puts forward its architecture and key technologies. The system is designed around an optical-component core and is driven by the manufacturing process. The results show that the system works effectively, realizing dynamic planning of the manufacturing process, and that information integration improves the production yield of the factory.

  9. Enterprise application architecture development based on DoDAF and TOGAF

    NASA Astrophysics Data System (ADS)

    Tao, Zhi-Gang; Luo, Yun-Feng; Chen, Chang-Xin; Wang, Ming-Zhe; Ni, Feng

    2017-05-01

    For the purpose of supporting the design and analysis of enterprise application architecture, we report here a tailored enterprise application architecture description framework and its corresponding design method. The presented framework can effectively support service-oriented architecting and cloud computing by creating a metadata model based on the architecture content framework (ACF), the DoDAF metamodel (DM2), and the Cloud Computing Modelling Notation (CCMN). The framework also makes an effort to extend and improve the mapping between The Open Group Architecture Framework (TOGAF) application architectural inputs/outputs and deliverables and Department of Defense Architecture Framework (DoDAF)-described models. The roadmap of 52 DoDAF-described models is constructed by creating the metamodels of these described models and analysing the constraint relationships among the metamodels. By combining the tailored framework and the roadmap, this article proposes a service-oriented enterprise application architecture development process. Finally, a case study is presented to illustrate the results of implementing the tailored framework in the Southern Base Management Support and Information Platform construction project using the development process proposed by the paper.

  10. Performance Analysis of GFDL's GCM Line-By-Line Radiative Transfer Model on GPU and MIC Architectures

    NASA Astrophysics Data System (ADS)

    Menzel, R.; Paynter, D.; Jones, A. L.

    2017-12-01

    Due to their relatively low computational cost, radiative transfer models in global climate models (GCMs) run on traditional CPU architectures generally consist of shortwave and longwave parameterizations over a small number of wavelength bands. With the rise of newer GPU and MIC architectures, however, the performance of high-resolution line-by-line radiative transfer models may soon approach that of the physical parameterizations currently employed in GCMs. Here we present an analysis of the current performance of a new line-by-line radiative transfer model under development at GFDL. Although originally designed to specifically exploit GPU architectures through the use of CUDA, the radiative transfer model has recently been extended to include OpenMP in an effort to also effectively target MIC architectures such as Intel's Xeon Phi. Using input data provided by the upcoming Radiative Forcing Model Intercomparison Project (RFMIP, as part of CMIP6), we compare model results and performance data for various model configurations and spectral resolutions run on both GPU and Intel Knights Landing architectures to analogous runs of the standard Oxford Reference Forward Model on traditional CPUs.

  11. Space Generic Open Avionics Architecture (SGOAA) reference model technical guide

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1993-01-01

    This report presents a full description of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA consists of a generic system architecture for the entities in spacecraft avionics, a generic processing architecture, and a six class model of interfaces in a hardware/software system. The purpose of the SGOAA is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of specific avionics hardware/software systems. The SGOAA defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.

  12. Requirements and Solutions for Personalized Health Systems.

    PubMed

    Blobel, Bernd; Ruotsalainen, Pekka; Lopez, Diego M; Oemig, Frank

    2017-01-01

    Organizational, methodological and technological paradigm changes enable a precise, personalized, predictive, preventive and participative approach to health and social services, supported by multiple actors from different domains with diverse levels of knowledge and skills. Interoperability has to advance beyond Information and Communication Technologies (ICT) concerns to include the real-world business domains and their processes, but also the individual context of all actors involved. The paper introduces and compares personalized health definitions, summarizes requirements and principles for pHealth systems, and considers intelligent interoperability. It addresses knowledge representation and harmonization, decision intelligence, and usability as crucial issues in pHealth. On this basis, a system-theoretical, ontology-based, policy-driven reference architecture model for open and intelligent pHealth ecosystems and its transformation into an appropriate ICT design and implementation is proposed.

  13. SIERRA Low Mach Module: Fuego User Manual Version 4.46.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal/Fluid Team

    2017-09-01

    The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.

  14. SIERRA Low Mach Module: Fuego Theory Manual Version 4.44

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal/Fluid Team

    2017-04-01

    The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.

  15. SIERRA Low Mach Module: Fuego Theory Manual Version 4.46.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal/Fluid Team

    The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.

  16. The Expert Project Management System (EPMS)

    NASA Technical Reports Server (NTRS)

    Silverman, Barry G.; Diakite, Coty

    1986-01-01

    Successful project managers (PMs) have been shown to rely on 'intuition,' experience, and analogical reasoning heuristics. For new PMs to be trained and for experienced PMs to avoid repeating others' mistakes, it is necessary to make the knowledge and heuristics of successful PMs more widely available. The preparers have evolved a model of PM thought processes over the last decade that is now ready to be implemented as a generic PM aid. This aid consists of a series of 'specialist' expert systems (CRITIC, LIBRARIAN, IDEA MAN, CRAFTSMAN, and WRITER) that communicate with each other via a 'blackboard' architecture. The various specialist expert systems are driven to support PM training and problem solving, since any 'answers' they pass to the blackboard are subjected to the conflict identification (AGENDA FORMULATOR) and GOAL SETTER inference engines.
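
    The blackboard pattern itself is easy to sketch: specialists watch a shared store, contribute when they can, and a controller loops until no one has anything left to add. The specialists and their logic below are invented stand-ins for the paper's CRITIC / LIBRARIAN / WRITER knowledge sources:

      # Minimal blackboard sketch: specialist "experts" post partial answers
      # to a shared blackboard; a trivial controller polls until quiescence.
      class Blackboard(dict):
          pass

      def librarian(bb):
          if "topic" in bb and "references" not in bb:
              bb["references"] = ["analogous project A", "lessons-learned memo B"]
              return True
          return False

      def critic(bb):
          if "references" in bb and "risk" not in bb:
              bb["risk"] = "schedule risk flagged from project A analogy"
              return True
          return False

      def writer(bb):
          if "risk" in bb and "report" not in bb:
              bb["report"] = f"Plan for {bb['topic']}: {bb['risk']}"
              return True
          return False

      bb = Blackboard(topic="ground station upgrade")
      specialists = [librarian, critic, writer]
      while any(s(bb) for s in specialists):   # run until no specialist can act
          pass
      print(bb["report"])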

  17. Performance issues for domain-oriented time-driven distributed simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1987-01-01

    It has long been recognized that simulations form an interesting and important class of computations that may benefit from distributed or parallel processing. Since the point of parallel processing is improved performance, the recent proliferation of multiprocessors requires that we consider the performance issues that naturally arise when attempting to implement a distributed simulation. Three such issues are: (1) the problem of mapping the simulation onto the architecture, (2) the possibilities for performing redundant computation in order to reduce communication, and (3) the avoidance of deadlock due to distributed contention for message-buffer space. These issues are discussed in the context of a battlefield simulation implemented on a medium-scale multiprocessor message-passing architecture.

  18. Next Generation Mass Memory Architecture

    NASA Astrophysics Data System (ADS)

    Herpel, H.-J.; Stahle, M.; Lonsdorfer, U.; Binzer, N.

    2010-08-01

    Future mass memory units will have to cope with various demanding requirements driven by onboard instruments (optical and SAR) that generate a huge amount of data (>10 Tbit) at data rates >6 Gbps. For the downlink, data rates around 3 Gbps will be feasible using the latest Ka-band technology together with Variable Coding and Modulation (VCM) techniques. These high data rates and storage capacities need to be managed effectively. Therefore, data structures and data management functions have to be improved and adapted to existing standards like the Packet Utilisation Standard (PUS). In this paper we present a highly modular and scalable architectural approach for mass memories in order to support a wide range of mission requirements.
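
    The headline figures already imply the data-management pressure. A back-of-the-envelope check (idealized: 100% link efficiency, no protocol overhead or visibility-window limits) of how quickly the quoted rates fill and drain such a store:

      # Rough arithmetic implied by the abstract's figures.
      store_bits = 10e12        # > 10 Tbit onboard storage
      downlink_bps = 3e9        # ~ 3 Gbps Ka-band with VCM
      fill_bps = 6e9            # > 6 Gbps instrument ingest

      print("downlink time : %.0f s (~%.1f min)"
            % (store_bits / downlink_bps, store_bits / downlink_bps / 60))
      print("store fills in: %.0f s at full instrument rate"
            % (store_bits / fill_bps))
      # ~3333 s (~56 min) to drain, but only ~1667 s to fill: the store
      # fills roughly twice as fast as it can be emptied.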

  19. Content addressable memory project

    NASA Technical Reports Server (NTRS)

    Hall, Josh; Levy, Saul; Smith, D.; Wei, S.; Miyake, K.; Murdocca, M.

    1991-01-01

    The progress on the Rutgers CAM (Content Addressable Memory) Project is described. The overall design of the system is complete at the architectural level and is described. The machine is composed of two kinds of cells: (1) the CAM cells, which include both memory and processor and support local processing within each cell; and (2) the tree cells, which have a smaller instruction set and provide global processing over the CAM cells. A parameterized design of the basic CAM cell has been completed. Progress was made on the final specification of the CPS. The machine architecture was driven by the design of algorithms whose requirements are reflected in the resulting instruction set(s). A few of these algorithms are described.

  20. Electrostatic micromembrane actuator arrays as motion generator

    NASA Astrophysics Data System (ADS)

    Wu, X. T.; Hui, J.; Young, M.; Kayatta, P.; Wong, J.; Kennith, D.; Zhe, J.; Warde, C.

    2004-05-01

    A rigid-body motion generator based on an array of micromembrane actuators is described. Unlike previous microelectromechanical systems (MEMS) techniques, the architecture employs a large number (typically greater than 1000) of micron-sized (10-200 μm) membrane actuators to simultaneously generate the displacement of a large rigid body, such as a conventional optical mirror. For optical applications, the approach provides optical design freedom of MEMS mirrors by enabling large-aperture mirrors to be driven electrostatically by MEMS actuators. The micromembrane actuator arrays have been built using a stacked architecture similar to that employed in the Multiuser MEMS Process (MUMPS), and the motion transfer from the arrayed micron-sized actuators to macro-sized components was demonstrated.

  1. Soft Polydimethylsiloxane Elastomers from Architecture-driven Entanglement Free Design

    PubMed Central

    Cai, Li-Heng; Kodger, Thomas E.; Guerra, Rodrigo E.; Pegoraro, Adrian F.; Rubinstein, Michael; Weitz, David A.

    2015-01-01

    We fabricate soft, solvent-free polydimethylsiloxane (PDMS) elastomers by crosslinking bottlebrush polymers rather than linear polymers. We design the chemistry to allow commercially available linear PDMS precursors to deterministically form bottlebrush polymers, which are simultaneously crosslinked, enabling a one-step synthesis. The bottlebrush architecture prevents the formation of entanglements, resulting in elastomers with precisely controllable elastic moduli from ~1 to ~100 kPa, below the intrinsic lower limit of traditional elastomers. Moreover, the solvent-free nature of the soft PDMS elastomers enables a negligible contact adhesion compared to commercially available silicone products of similar stiffness. The exceptional combination of softness and negligible adhesiveness may greatly broaden the applications of PDMS elastomers in both industry and research. PMID:26259975

  2. Evolution of MPCV Service Module Propulsion and GNC Interface Requirements

    NASA Technical Reports Server (NTRS)

    Hickman, Heather K.; Dickens, Kevin W.; Madsen, Jennifer M.; Gutkowski, Jeffrey P.; Ierardo, Nicola; Jaeger, Markus; Lux, Johannes; Freundenberger, John L.; Paisley, Jonathan

    2014-01-01

    The Orion Multi-Purpose Crew Vehicle Service Module Propulsion Subsystem provides propulsion for the integrated Crew and Service Module. Updates in the exploration architecture between Constellation and MPCV as well as NASA's partnership with the European Space Agency have resulted in design changes to the SM Propulsion Subsystem and updates to the Propulsion interface requirements with Guidance Navigation and Control. This paper focuses on the Propulsion and GNC interface requirement updates between the Constellation Service Module and the European Service Module and how the requirement updates were driven or supported by architecture updates and the desired use of hardware with heritage to United States and European spacecraft for the Exploration Missions, EM-1 and EM-2.

  3. An avionics scenario and command model description for Space Generic Open Avionics Architecture (SGOAA)

    NASA Technical Reports Server (NTRS)

    Stovall, John R.; Wray, Richard B.

    1994-01-01

    This paper presents a description of a model for a space vehicle operational scenario and the commands for avionics. This model will be used in developing a dynamic architecture simulation model using the Statemate CASE tool for validation of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA has been proposed as an avionics architecture standard to NASA through its Strategic Avionics Technology Working Group (SATWG) and has been accepted by the Society of Automotive Engineers (SAE) for conversion into an SAE Avionics Standard. This architecture was developed for the Flight Data Systems Division (FDSD) of the NASA Johnson Space Center (JSC) by the Lockheed Engineering and Sciences Company (LESC), Houston, Texas. This SGOAA includes a generic system architecture for the entities in spacecraft avionics, a generic processing external and internal hardware architecture, and a nine class model of interfaces. The SGOAA is both scalable and recursive and can be applied to any hierarchical level of hardware/software processing systems.

  4. Near-Earth Phase Risk Comparison of Human Mars Campaign Architectures

    NASA Technical Reports Server (NTRS)

    Manning, Ted A.; Nejad, Hamed S.; Mattenberger, Chris

    2013-01-01

    A risk analysis of the launch, orbital assembly, and Earth-departure phases of human Mars exploration campaign architectures was completed as an extension of a probabilistic risk assessment (PRA) originally carried out under the NASA Constellation Program Ares V Project. The objective of the updated analysis was to study the sensitivity of loss-of-campaign risk to such architectural factors as the composition of the propellant delivery portion of the launch vehicle fleet (Ares V heavy-lift launch vehicle vs. smaller/cheaper commercial launchers) and the degree of launcher or Mars-bound spacecraft element sparing. Both a static PRA analysis and a dynamic, event-based Monte Carlo simulation were developed and used to evaluate the probability of loss of campaign under different sparing options. Results showed that with no sparing, loss-of-campaign risk is strongly driven by launcher count and on-orbit loiter duration, favoring an all-Ares V launch approach. Further, the reliability of the all-Ares V architecture showed significant improvement with the addition of a single spare launcher/payload. Among architectures utilizing a mix of Ares V and commercial launchers, those that minimized the on-orbit loiter duration of Mars-bound elements were found to exceed the reliability of the no-spare all-Ares V campaign if unlimited commercial vehicle sparing was assumed.
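
    The style of that Monte Carlo analysis can be re-created in a few lines. This is not the study's model or its numbers: the launch count, per-launch reliability, and the one-failure-per-spare assumption below are all invented, purely to show how strongly a single spare moves the loss probability:

      # Toy Monte Carlo of campaign loss probability vs. launcher sparing.
      import random

      def campaign_success(n_launches, p_launch, spares):
          failures = sum(random.random() > p_launch for _ in range(n_launches))
          return failures <= spares        # each spare covers one failed launch

      def p_loss(n_launches, p_launch, spares, trials=100_000):
          ok = sum(campaign_success(n_launches, p_launch, spares)
                   for _ in range(trials))
          return 1 - ok / trials

      random.seed(1)
      for spares in (0, 1):
          print(f"{spares} spare(s): P(loss) ~ {p_loss(7, 0.95, spares):.3f}")
      # 7 launches at 95% each: ~0.30 loss with no spare, ~0.04 with one.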

  5. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam

    1987-01-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the real-time algorithmic execution of an avionics guidance and control problem is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
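
    Critical path analysis, the basis of the allocation above, reduces to a longest-path computation over the task DAG. A small hedged sketch (the task graph, names, and costs are invented, and real allocators also weigh processor load and communication costs):

      # Longest-path (critical path) computation over a toy avionics task DAG.
      from functools import lru_cache

      tasks = {"sense": 2, "estimate": 4, "guide": 3, "command": 1}
      deps = {"sense": [], "estimate": ["sense"], "guide": ["estimate"],
              "command": ["guide"]}

      @lru_cache(maxsize=None)
      def earliest_finish(t):
          # a task may start once its slowest dependency has finished
          start = max((earliest_finish(d) for d in deps[t]), default=0)
          return start + tasks[t]

      makespan = max(earliest_finish(t) for t in tasks)
      critical = [t for t in tasks if earliest_finish(t) == makespan]
      print("makespan:", makespan, "critical-path end:", critical)
      # Tasks on the critical path get allocated first; the rest have slack.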

  6. Computer architecture for efficient algorithmic executions in real-time systems: new technology for avionics systems and advanced space vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carroll, C.C.; Youngblood, J.N.; Saha, A.

    1987-12-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the real-time algorithmic execution of an avionics guidance and control problem is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.

  7. Significantly enhanced creep resistance of low volume fraction in-situ TiBw/Ti6Al4V composites by architectured network reinforcements

    PubMed Central

    Wang, S.; Huang, L. J.; Geng, L.; Scarpa, F.; Jiao, Y.; Peng, H. X.

    2017-01-01

    We present a new class of TiBw/Ti6Al4V composites with a network reinforcement architecture that exhibits significantly enhanced creep resistance compared to monolithic Ti6Al4V alloys. Creep tests performed at temperatures between 773 K and 923 K and stresses in the range 100-300 MPa indicate both a significant improvement of the composites' creep resistance due to the network architecture formed by the TiB whiskers (TiBw), and a decrease of the steady-state creep rates with increasing local volume fraction of TiBw in the network region. The deformation behavior is driven by a diffusion-controlled dislocation climb process. Moreover, the activation energies of these composites are significantly higher than that of Ti6Al4V alloys, indicating a higher creep resistance. The increase of the activation energy can be attributed to the TiBw architecture, which severely impedes dislocation movement and grain boundary sliding and provides a tailoring of the stress transfer. These micromechanical mechanisms lead to a remarkable improvement of the creep resistance of TiBw/Ti6Al4V composites featuring the special networked architecture. PMID:28094350

  8. Manned/Unmanned Common Architecture Program (MCAP) net centric flight tests

    NASA Astrophysics Data System (ADS)

    Johnson, Dale

    2009-04-01

    Properly architected avionics systems can reduce the costs of periodic functional improvements, maintenance, and obsolescence. With this in mind, the U.S. Army Aviation Applied Technology Directorate (AATD) initiated the Manned/Unmanned Common Architecture Program (MCAP) in 2003 to develop an affordable, high-performance embedded mission processing architecture for potential application to multiple aviation platforms. MCAP analyzed Army helicopter and unmanned air vehicle (UAV) missions, identified supporting subsystems, surveyed advanced hardware and software technologies, and defined computational infrastructure technical requirements. The project selected a set of modular open systems standards and market-driven commercial-off-the-shelf (COTS) electronics and software, and developed experimental mission processors, network architectures, and software infrastructures supporting the integration of new capabilities, interoperability, and life cycle cost reductions. MCAP integrated the new mission processing architecture into an AH-64D Apache Longbow and participated in Future Combat Systems (FCS) network-centric operations field experiments in 2006 and 2007 at White Sands Missile Range (WSMR), New Mexico and at the Nevada Test and Training Range (NTTR) in 2008. The MCAP Apache also participated in PM C4ISR On-the-Move (OTM) Capstone Experiments 2007 (E07) and 2008 (E08) at Ft. Dix, NJ and conducted Mesa, Arizona local area flight tests in December 2005, February 2006, and June 2008.

  9. Comparison of different artificial neural network architectures in modeling of Chlorella sp. flocculation.

    PubMed

    Zenooz, Alireza Moosavi; Ashtiani, Farzin Zokaee; Ranjbar, Reza; Nikbakht, Fatemeh; Bolouri, Oberon

    2017-07-03

    Biodiesel production from microalgae feedstock should be performed after growth and harvesting of the cells, and the most feasible method for harvesting and dewatering of microalgae is flocculation. Flocculation modeling can be used for evaluation and prediction of performance under the different parameters that affect the process. However, modeling flocculation in microalgae is not simple and has not yet been performed under all experimental conditions, mostly due to the different behaviors of microalgae cells during the process under different flocculation conditions. In the current study, the modeling of microalgae flocculation is studied with different neural network architectures. The microalga Chlorella sp. was flocculated with ferric chloride under different conditions, and the experimental data were then modeled using artificial neural networks. Multilayer perceptron (MLP) and radial basis function architectures failed to predict the targets successfully; however, modeling was effective with an ensemble architecture of MLP networks. Comparison between the performance of the ensemble and that of each individual network demonstrates the ability of the ensemble architecture in microalgae flocculation modeling.
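
    The ensemble idea itself is simple to sketch: train several independently initialized MLPs and average their predictions. The snippet below uses scikit-learn on synthetic data (the features and target are invented stand-ins for the study's flocculation measurements, not its actual data or hyperparameters):

      # Ensemble of MLP regressors by prediction averaging (illustrative data).
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      X = rng.uniform(0, 1, size=(200, 3))      # e.g. dose, pH, mixing time
      y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

      members = [MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                              random_state=i).fit(X, y) for i in range(5)]
      ensemble_pred = np.mean([m.predict(X) for m in members], axis=0)
      print("ensemble RMSE:", np.sqrt(np.mean((ensemble_pred - y) ** 2)))
      # Averaging over differently initialized members damps the variance of
      # any single network, which is the effect the study reports exploiting.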

  10. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    A viewgraph presentation to develop models from systems engineers that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 11) Background: Titan Model-based Executive; 12) Model-based Execution Architecture; 13) Compatibility Analysis of MDS and Titan Architectures; 14) Integrating Model-based Programming and Execution into the Architecture; 15) State Analysis and Modeling; 16) IMU Subsystem State Effects Diagram; 17) Titan Subsystem Model: IMU Health; 18) Integrating Model-based Programming and Execution into the Software IMU; 19) Testing Program; 20) Computationally Tractable State Estimation & Fault Diagnosis; 21) Diagnostic Algorithm Performance; 22) Integration and Test Issues; 23) Demonstrated Benefits; and 24) Next Steps.

  11. Towards an Ontology-driven Framework to Enable Development of Personalized mHealth Solutions for Cancer Survivors' Engagement in Healthy Living.

    PubMed

    Myneni, Sahiti; Amith, Muhammad; Geng, Yimin; Tao, Cui

    2015-01-01

    Adolescent and Young Adult (AYA) cancer survivors manage an array of health-related issues. Survivorship Care Plans (SCPs) have the potential to empower these young survivors by providing information regarding treatment summary, late-effects of cancer therapies, healthy lifestyle guidance, coping with work-life-health balance, and follow-up care. However, current mHealth infrastructure used to deliver SCPs has been limited in terms of flexibility, engagement, and reusability. The objective of this study is to develop an ontology-driven survivor engagement framework to facilitate rapid development of mobile apps that are targeted, extensible, and engaging. The major components include ontology models, patient engagement features, and behavioral intervention technologies. We apply the proposed framework to characterize individual building blocks ("survivor digilegos"), which form the basis for mHealth tools that address user needs across the cancer care continuum. Results indicate that the framework (a) allows identification of AYA survivorship components, (b) facilitates infusion of engagement elements, and (c) integrates behavior change constructs into the design architecture of survivorship applications. Implications for design of patient-engaging chronic disease management solutions are discussed.

  12. Two-dimensional optoelectronic interconnect-processor and its operational bit error rate

    NASA Astrophysics Data System (ADS)

    Liu, J. Jiang; Gollsneider, Brian; Chang, Wayne H.; Carhart, Gary W.; Vorontsov, Mikhail A.; Simonis, George J.; Shoop, Barry L.

    2004-10-01

    A two-dimensional (2-D) multi-channel 8x8 optical interconnect and processor system was designed and developed using complementary metal-oxide-semiconductor (CMOS) driven 850-nm vertical-cavity surface-emitting laser (VCSEL) arrays and photodetector (PD) arrays with corresponding wavelengths. We performed operation and bit-error-rate (BER) analysis on this free-space integrated 8x8 VCSEL optical interconnect driven by silicon-on-sapphire (SOS) circuits. A pseudo-random bit stream (PRBS) data sequence was used in operation of the interconnect. Eye diagrams were measured from individual channels and analyzed using a digital oscilloscope at data rates from 155 Mb/s to 1.5 Gb/s. Using a statistical model of a Gaussian distribution for the random noise in the transmission, we developed a method to compute the BER instantaneously from the digital eye diagrams. Direct measurements on the interconnect were also taken on a standard BER tester for verification. We found that the results of the two methods agreed to within the same order of magnitude and 50% accuracy. The integrated interconnect was investigated in an optoelectronic processing architecture of a digital halftoning image processor. Error diffusion networks implemented by the inherently parallel nature of photonics promise to provide high-quality digital halftoned images.
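
    Under the Gaussian-noise assumption, eye-diagram statistics map to a BER through the standard Q-factor relation BER = (1/2) erfc(Q / sqrt(2)) with Q = (mu1 - mu0) / (sigma1 + sigma0). A minimal sketch (the level means and standard deviations below are made-up numbers, not measurements from the paper):

      # BER estimate from eye-diagram level statistics under Gaussian noise.
      import math

      def ber_from_eye(mu1, mu0, sigma1, sigma0):
          q = (mu1 - mu0) / (sigma1 + sigma0)           # Q-factor from the eye
          return q, 0.5 * math.erfc(q / math.sqrt(2.0)) # Gaussian-noise BER

      q, ber = ber_from_eye(mu1=1.0, mu0=0.0, sigma1=0.07, sigma0=0.07)
      print(f"Q = {q:.2f}, estimated BER = {ber:.2e}")  # Q ~ 7.1, BER ~ 5e-13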

  13. Stimuli-Driven Control of the Helical Axis of Self-Organized Soft Helical Superstructures.

    PubMed

    Bisoyi, Hari Krishna; Bunning, Timothy J; Li, Quan

    2018-06-01

    Supramolecular and macromolecular functional helical superstructures are ubiquitous in nature and display an impressive catalog of intriguing and elegant properties and performances. In materials science, self-organized soft helical superstructures, i.e., cholesteric liquid crystals (CLCs), serve as model systems toward the understanding of morphology- and orientation-dependent properties of supramolecular dynamic helical architectures and their potential for technological applications. Moreover, most of the fascinating device applications of CLCs are primarily determined by different orientations of the helical axis. Here, the control of the helical axis orientation of CLCs and its dynamic switching in two and three dimensions using different external stimuli are summarized. Electric-field-, magnetic-field-, and light-irradiation-driven orientation control and reorientation of the helical axis of CLCs are described and highlighted. Different techniques and strategies developed to achieve a uniform lying helix structure are explored. Helical axis control in recently developed heliconical cholesteric systems is examined. The control of the helical axis orientation in spherical geometries such as microdroplets and microshells fabricated from these enticing photonic fluids is also explored. Future challenges and opportunities in this exciting area involving anisotropic chiral liquids are then discussed. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Information Quality Evaluation of C2 Systems at Architecture Level

    DTIC Science & Technology

    2014-06-01

    [OCR fragments: This paper proposes a method for information quality evaluation of C2 systems at the architecture level, based on architecture models of C2 systems, which can help identify key factors impacting information quality and improve system capability at the architecture design stage; such capability evaluation becomes necessary and important at that stage. First, the information quality model is ...]

  15. Emergence of a Common Modeling Architecture for Earth System Science (Invited)

    NASA Astrophysics Data System (ADS)

    Deluca, C.

    2010-12-01

    Common modeling architecture can be viewed as a natural outcome of common modeling infrastructure. The development of model utility and coupling packages (ESMF, MCT, OpenMI, etc.) over the last decade represents the realization of a community vision for common model infrastructure. The adoption of these packages has led to increased technical communication among modeling centers and newly coupled modeling systems. However, adoption has also exposed aspects of interoperability that must be addressed before easy exchange of model components among different groups can be achieved. These aspects include common physical architecture (how a model is divided into components) and model metadata and usage conventions. The National Unified Operational Prediction Capability (NUOPC), an operational weather prediction consortium, is collaborating with weather and climate researchers to define a common model architecture that encompasses these advanced aspects of interoperability and looks to future needs. The nature and structure of the emergent common modeling architecture will be discussed along with its implications for future model development.

  16. Error Propagation Analysis in the SAE Architecture Analysis and Design Language (AADL) and the EDICT Tool Framework

    NASA Technical Reports Server (NTRS)

    LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.

    2011-01-01

    This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.

  17. Prototyping a Web-of-Energy Architecture for Smart Integration of Sensor Networks in Smart Grids Domain.

    PubMed

    Caballero, Víctor; Vernet, David; Zaballos, Agustín; Corral, Guiomar

    2018-01-30

    Sensor networks and the Internet of Things have driven the evolution of traditional electric power distribution networks towards a new paradigm referred to as the Smart Grid. However, the different elements that compose the Information and Communication Technologies (ICTs) layer of a Smart Grid are usually conceived as isolated systems that typically result in rigid hardware architectures, which are hard to interoperate, manage, and adapt to new situations. If the Smart Grid paradigm is to be presented as a solution to the demand for distributed and intelligent energy management systems, it is necessary to deploy innovative IT infrastructures to support these smart functions. One of the main issues of Smart Grids is the heterogeneity of the communication protocols used by the smart sensor devices that integrate them. The use of the concept of the Web of Things is proposed in this work to tackle this problem. More specifically, the implementation of a Smart Grid's Web of Things, coined the Web of Energy, is introduced. The purpose of this paper is to propose the usage of the Web of Energy by means of the Actor Model paradigm to address the latent deployment and management limitations of Smart Grids. Smart Grid designers can use the Actor Model as a design model for an infrastructure that supports the intelligent functions demanded and is capable of grouping and converting the heterogeneity of traditional infrastructures into the homogeneity feature of the Web of Things. The experiments conducted endorse the feasibility of this solution and encourage practitioners to point their efforts in this direction.
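
    A minimal actor sketch of the proposal above (device names, message shapes, and handlers are invented; production actor frameworks such as Akka add supervision, distribution, and persistence): each actor serializes access to its own state through a mailbox, so heterogeneous devices can hide behind a uniform message interface.

      # Toy actors: one mailbox and one worker thread per actor; all
      # interaction happens through asynchronous messages.
      import queue, threading, time

      class Actor:
          def __init__(self, name, handler):
              self.name, self.handler = name, handler
              self.mailbox = queue.Queue()
              threading.Thread(target=self._run, daemon=True).start()

          def send(self, msg):
              self.mailbox.put(msg)

          def _run(self):
              while True:
                  msg = self.mailbox.get()       # messages processed one at a time
                  self.handler(self, msg)

      def meter_handler(self, msg):
          if msg["type"] == "read":
              msg["reply_to"].send({"type": "reading", "kWh": 42.0, "from": self.name})

      def collector_handler(self, msg):
          print("collector got", msg)

      collector = Actor("collector", collector_handler)
      meter = Actor("meter-7", meter_handler)
      meter.send({"type": "read", "reply_to": collector})

      time.sleep(0.2)   # give the daemon threads time to exchange messages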

  18. The structures of cytosolic and plastid-located glutamine synthetases from Medicago truncatula reveal a common and dynamic architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torreira, Eva; Seabra, Ana Rita; Marriott, Hazel

    The experimental models of dicotyledonous cytoplasmic and plastid-located glutamine synthetases unveil a conserved eukaryotic-type decameric architecture, with subtle structural differences in M. truncatula isoenzymes that account for their distinct herbicide resistance. The first step of nitrogen assimilation in higher plants, the energy-driven incorporation of ammonia into glutamate, is catalyzed by glutamine synthetase. This central process yields the readily metabolizable glutamine, which in turn is at the basis of all subsequent biosynthesis of nitrogenous compounds. The essential role performed by glutamine synthetase makes it a prime target for herbicidal compounds, but also a suitable intervention point for the improvement of crop yields. Although the majority of crop plants are dicotyledonous, little is known about the structural organization of glutamine synthetase in these organisms and about the functional differences between the different isoforms. Here, the structural characterization of two glutamine synthetase isoforms from the model legume Medicago truncatula is reported: the crystallographic structure of cytoplasmic GSII-1a and an electron cryomicroscopy reconstruction of plastid-located GSII-2a. Together, these structural models unveil a decameric organization of dicotyledonous glutamine synthetase, with two pentameric rings weakly connected by inter-ring loops. Moreover, rearrangement of these dynamic loops changes the relative orientation of the rings, suggesting a zipper-like mechanism for their assembly into a decameric enzyme. Finally, the atomic structure of M. truncatula GSII-1a provides important insights into the structural determinants of herbicide resistance in this family of enzymes, opening new avenues for the development of herbicide-resistant plants.

  19. Prototyping a Web-of-Energy Architecture for Smart Integration of Sensor Networks in Smart Grids Domain

    PubMed Central

    Vernet, David; Corral, Guiomar

    2018-01-01

    Sensor networks and the Internet of Things have driven the evolution of traditional electric power distribution networks towards a new paradigm referred to as the Smart Grid. However, the different elements that compose the Information and Communication Technologies (ICT) layer of a Smart Grid are usually conceived as isolated systems, which typically results in rigid hardware architectures that are hard to interoperate, manage, and adapt to new situations. If the Smart Grid paradigm is to be presented as a solution to the demand for distributed and intelligent energy management systems, it is necessary to deploy innovative IT infrastructures to support these smart functions. One of the main issues of Smart Grids is the heterogeneity of the communication protocols used by the smart sensor devices that compose them. This work proposes the concept of the Web of Things to tackle this problem. More specifically, it introduces the implementation of a Smart Grid's Web of Things, coined the Web of Energy. The purpose of this paper is to propose the use of the Web of Energy, by means of the Actor Model paradigm, to address the latent deployment and management limitations of Smart Grids. Smart Grid designers can use the Actor Model as a design model for an infrastructure that supports the intelligent functions demanded and is capable of grouping and converting the heterogeneity of traditional infrastructures into the homogeneity characteristic of the Web of Things. The experiments conducted endorse the feasibility of this solution and encourage practitioners to direct their efforts in this direction. PMID:29385748

  20. Assured Mission Support Space Architecture (AMSSA) study

    NASA Technical Reports Server (NTRS)

    Hamon, Rob

    1993-01-01

    The assured mission support space architecture (AMSSA) study was conducted with the overall goal of developing a long-term requirements-driven integrated space architecture to provide responsive and sustained space support to the combatant commands. Although derivation of an architecture was the focus of the study, there are three significant products from the effort. The first is a philosophy that defines the necessary attributes for the development and operation of space systems to ensure an integrated, interoperable architecture that, by design, provides a high degree of combat utility. The second is the architecture itself; based on an interoperable system-of-systems strategy, it reflects a long-range goal for space that will evolve as user requirements adapt to a changing world environment. The third product is the framework of a process that, when fully developed, will provide essential information to key decision makers for space systems acquisition in order to achieve the AMSSA goal. It is a categorical imperative that military space planners develop space systems that will act as true force multipliers. AMSSA provides the philosophy, process, and architecture that, when integrated with the DOD requirements and acquisition procedures, can yield an assured mission support capability from space to the combatant commanders. An important feature of the AMSSA initiative is the participation by every organization that has a role or interest in space systems development and operation. With continued community involvement, the concept of the AMSSA will become a reality. In summary, AMSSA offers a better way to think about space (philosophy) that can lead to the effective utilization of limited resources (process) with an infrastructure designed to meet the future space needs (architecture) of our combat forces.

  1. Dynamic Analytics-Driven Assessment of Vulnerabilities and Exploitation

    DTIC Science & Technology

    2016-07-15

    integration with big data technologies such as Hadoop, nor does it natively support exporting of events to external relational databases. OSSIM supports...power of big data analytics to determine correlations and temporal causality among vulnerabilities and cyber events. The vulnerability dependencies...via the SCAPE (formerly known as LLCySA [6]). This is illustrated as a big data cyber analytic system architecture in

  2. Sirepo for Synchrotron Radiation Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagler, Robert; Moeller, Paul; Rakitin, Maksim

    Sirepo is an open source framework for cloud computing. The graphical user interface (GUI) for Sirepo, also known as the client, executes in any HTML5 compliant web browser on any computing platform, including tablets. The client is built in JavaScript, making use of the following open source libraries: Bootstrap, which is fundamental for cross-platform web applications; AngularJS, which provides a model–view–controller (MVC) architecture and GUI components; and D3.js, which provides interactive plots and data-driven transformations. The Sirepo server is built on the following Python technologies: Flask, which is a lightweight framework for web development; Jinja, which is a secure and widely used templating language; and Werkzeug, a utility library that is compliant with the WSGI standard. We use Nginx as the HTTP server and proxy, which provides a scalable event-driven architecture. The physics codes supported by Sirepo execute inside a Docker container. One of the codes supported by Sirepo is the Synchrotron Radiation Workshop (SRW). SRW computes synchrotron radiation from relativistic electrons in arbitrary magnetic fields and propagates the radiation wavefronts through optical beamlines. SRW is open source and is primarily supported by Dr. Oleg Chubar of NSLS-II at Brookhaven National Laboratory.
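    To make the client/server split concrete, here is a minimal Flask sketch in the spirit of the stack described above. Flask and its `request`/`jsonify` helpers are real; the `/run-simulation` route and its payload are invented for illustration and are not Sirepo's actual API.

    ```python
    # Minimal sketch of a browser-facing simulation server. The route and
    # payload are hypothetical; a real deployment would sit behind Nginx and
    # dispatch the job to a physics code running in a container.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/run-simulation", methods=["POST"])
    def run_simulation():
        params = request.get_json()   # simulation model sent by the JS client
        # Here a real server would enqueue the job and return a polling handle;
        # this sketch just echoes a fake job id.
        return jsonify({"job": "demo-1", "received": params})

    if __name__ == "__main__":
        app.run(port=8000)
    ```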

  3. NASA Enterprise Architecture and Its Use in Transition of Research Results to Operations

    NASA Astrophysics Data System (ADS)

    Frisbie, T. E.; Hall, C. M.

    2006-12-01

    Enterprise architecture describes the design of the components of an enterprise, their relationships and how they support the objectives of that enterprise. NASA Stennis Space Center leads several projects involving enterprise architecture tools used to gather information on research assets within NASA's Earth Science Division. In the near future, enterprise architecture tools will link and display the relevant requirements, parameters, observatories, models, decision systems, and benefit/impact information relationships and map to the Federal Enterprise Architecture Reference Models. Components configured within the enterprise architecture serving the NASA Applied Sciences Program include the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool. The Earth Science Components Knowledge Base systematically catalogues NASA missions, sensors, models, data products, model products, and network partners appropriate for consideration in NASA Earth Science applications projects. The Systems Components database is a centralized information warehouse of NASA's Earth Science research assets and a critical first link in the implementation of enterprise architecture. The Earth Science Architecture Tool is used to analyze potential NASA candidate systems that may be beneficial to decision-making capabilities of other Federal agencies. Use of the current configuration of NASA enterprise architecture (the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool) has far exceeded its original intent and has tremendous potential for the transition of research results to operational entities.

  4. Client/server approach to image capturing

    NASA Astrophysics Data System (ADS)

    Tuijn, Chris; Stokes, Earle

    1998-01-01

    The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications, and high-end CCD flatbed scanners and drum scanners with photomultiplier technology. Each device and market segment has its own specific needs, which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we abstract away the specific scanner parameters and define the scan jobs by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction between the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven applications). This paper is structured as follows. In the introduction, we further motivate the need for a scan server-based architecture. In the second section, we give a brief architectural overview of the scan server and the other components it is connected to. The third section presents the generic model for input devices as well as the image processing model; the fourth section describes the different shapes the scanning applications (or modules) can take. In the last section, we briefly summarize the presented material and point out trends for future development.
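    A minimal sketch of the generic input-device idea follows: a scan job is expressed in absolute physical parameters and only mapped to device-specific settings at dispatch time. The field and function names are hypothetical, not taken from the paper.

    ```python
    # Device-independent scan job described by absolute parameters, mapped to
    # one concrete scanner's capabilities only at dispatch time.
    from dataclasses import dataclass

    @dataclass
    class ScanJob:
        width_mm: float          # physical scan area, not device steps
        height_mm: float
        resolution_dpi: int      # absolute resolution, clamped per device
        icc_profile: str         # characterization of the device's color space

    def to_device_settings(job: ScanJob, max_dpi: int) -> dict:
        """Map the generic job onto a specific scanner's capabilities."""
        dpi = min(job.resolution_dpi, max_dpi)
        return {
            "dpi": dpi,
            "pixels_x": round(job.width_mm / 25.4 * dpi),
            "pixels_y": round(job.height_mm / 25.4 * dpi),
            "profile": job.icc_profile,
        }

    # The same job can be dispatched to scanners with different limits.
    print(to_device_settings(ScanJob(210, 297, 1200, "flatbed-A.icc"), max_dpi=600))
    ```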

  5. Architectural Strategies for Enabling Data-Driven Science at Scale

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Law, E. S.; Doyle, R. J.; Little, M. M.

    2017-12-01

    The analysis of large data collections from NASA or other agencies is often executed through traditional computational and data analysis approaches, which require users to bring data to their desktops and perform local data analysis. Alternatively, data are hauled to large computational environments that provide centralized data analysis via traditional High Performance Computing (HPC). Scientific data archives, however, are not only growing massive, but are also becoming highly distributed. Neither traditional approach provides a good solution for optimizing analysis into the future. Assumptions across the NASA mission and science data lifecycle, which historically assume that all data can be collected, transmitted, processed, and archived, will not scale as more capable instruments stress legacy-based systems. New paradigms are needed to increase the productivity and effectiveness of scientific data analysis. This paradigm must recognize that architectural and analytical choices are interrelated, and must be carefully coordinated in any system that aims to allow efficient, interactive scientific exploration and discovery to exploit massive data collections, from point of collection (e.g., onboard) to analysis and decision support. The most effective approach to analyzing a distributed set of massive data may involve some exploration and iteration, putting a premium on the flexibility afforded by the architectural framework. The framework should enable scientist users to assemble workflows efficiently, manage the uncertainties related to data analysis and inference, and optimize deep-dive analytics to enhance scalability. In many cases, this "data ecosystem" needs to be able to integrate multiple observing assets, ground environments, archives, and analytics, evolving from stewardship of measurements of data to using computational methodologies to better derive insight from the data that may be fused with other sets of data. This presentation will discuss architectural strategies, including a 2015-2016 NASA AIST Study on Big Data, for evolving scientific research towards massively distributed data-driven discovery. It will include example use cases across earth science, planetary science, and other disciplines.

  6. Evolution and adaptation in Pseudomonas aeruginosa biofilms driven by mismatch repair system-deficient mutators.

    PubMed

    Luján, Adela M; Maciá, María D; Yang, Liang; Molin, Søren; Oliver, Antonio; Smania, Andrea M

    2011-01-01

    Pseudomonas aeruginosa is an important opportunistic pathogen causing chronic airway infections, especially in cystic fibrosis (CF) patients. The majority of the CF patients acquire P. aeruginosa during early childhood, and most of them develop chronic infections resulting in severe lung disease, which are rarely eradicated despite intensive antibiotic therapy. Current knowledge indicates that three major adaptive strategies, biofilm development, phenotypic diversification, and mutator phenotypes [driven by a defective mismatch repair system (MRS)], play important roles in P. aeruginosa chronic infections, but the relationship between these strategies is still poorly understood. We have used the flow-cell biofilm model system to investigate the impact of the mutS associated mutator phenotype on development, dynamics, diversification and adaptation of P. aeruginosa biofilms. Through competition experiments we demonstrate for the first time that P. aeruginosa MRS-deficient mutators had enhanced adaptability over wild-type strains when grown in structured biofilms but not as planktonic cells. This advantage was associated with enhanced micro-colony development and increased rates of phenotypic diversification, evidenced by biofilm architecture features and by a wider range and proportion of morphotypic colony variants, respectively. Additionally, morphotypic variants generated in mutator biofilms showed increased competitiveness, providing further evidence for mutator-driven adaptive evolution in the biofilm mode of growth. This work helps to understand the basis for the specific high proportion and role of mutators in chronic infections, where P. aeruginosa develops in biofilm communities.

  7. Evolution and Adaptation in Pseudomonas aeruginosa Biofilms Driven by Mismatch Repair System-Deficient Mutators

    PubMed Central

    Yang, Liang; Molin, Søren; Oliver, Antonio; Smania, Andrea M.

    2011-01-01

    Pseudomonas aeruginosa is an important opportunistic pathogen causing chronic airway infections, especially in cystic fibrosis (CF) patients. The majority of the CF patients acquire P. aeruginosa during early childhood, and most of them develop chronic infections resulting in severe lung disease, which are rarely eradicated despite intensive antibiotic therapy. Current knowledge indicates that three major adaptive strategies, biofilm development, phenotypic diversification, and mutator phenotypes [driven by a defective mismatch repair system (MRS)], play important roles in P. aeruginosa chronic infections, but the relationship between these strategies is still poorly understood. We have used the flow-cell biofilm model system to investigate the impact of the mutS associated mutator phenotype on development, dynamics, diversification and adaptation of P. aeruginosa biofilms. Through competition experiments we demonstrate for the first time that P. aeruginosa MRS-deficient mutators had enhanced adaptability over wild-type strains when grown in structured biofilms but not as planktonic cells. This advantage was associated with enhanced micro-colony development and increased rates of phenotypic diversification, evidenced by biofilm architecture features and by a wider range and proportion of morphotypic colony variants, respectively. Additionally, morphotypic variants generated in mutator biofilms showed increased competitiveness, providing further evidence for mutator-driven adaptive evolution in the biofilm mode of growth. This work helps to understand the basis for the specific high proportion and role of mutators in chronic infections, where P. aeruginosa develops in biofilm communities. PMID:22114708

  8. AnaBench: a Web/CORBA-based workbench for biomolecular sequence analysis

    PubMed Central

    Badidi, Elarbi; De Sousa, Cristina; Lang, B Franz; Burger, Gertraud

    2003-01-01

    Background Sequence data analyses such as gene identification, structure modeling or phylogenetic tree inference involve a variety of bioinformatics software tools. Due to the heterogeneity of bioinformatics tools in usage and data requirements, scientists spend much effort on technical issues including data format, storage and management of input and output, and memorization of numerous parameters and multi-step analysis procedures. Results In this paper, we present the design and implementation of AnaBench, an interactive, Web-based bioinformatics Analysis workBench allowing streamlined data analysis. Our philosophy was to minimize the technical effort not only for the scientist who uses this environment to analyze data, but also for the administrator who manages and maintains the workbench. With new bioinformatics tools published daily, AnaBench permits easy incorporation of additional tools. This flexibility is achieved by employing a three-tier distributed architecture and recent technologies including CORBA middleware, Java, JDBC, and JSP. A CORBA server permits transparent access to a workbench management database, which stores information about the users, their data, as well as the description of all bioinformatics applications that can be launched from the workbench. Conclusion AnaBench is an efficient and intuitive interactive bioinformatics environment, which offers scientists application-driven, data-driven and protocol-driven analysis approaches. The prototype of AnaBench, managed by a team at the Université de Montréal, is accessible on-line at: . Please contact the authors for details about setting up a local-network AnaBench site elsewhere. PMID:14678565

  9. Event-driven contrastive divergence for spiking neuromorphic systems.

    PubMed

    Neftci, Emre; Das, Srinjoy; Pedroni, Bruno; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert

    2013-01-01

    Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons that is constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
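    The following heavily simplified sketch caricatures the idea in discrete time: binary spikes are sampled stochastically instead of computing exact Gibbs probabilities, and weights are nudged by the difference between data-phase and model-phase spike coincidences. The paper's actual continuous-time I&F and neural-sampling dynamics are not reproduced, and every size and constant below is an illustrative assumption.

    ```python
    # Spike-based caricature of contrastive divergence: stochastic spikes
    # replace exact Gibbs units, and an STDP-flavoured rule strengthens
    # data-driven coincidences and weakens model-driven ones.
    import numpy as np

    rng = np.random.default_rng(0)
    nv, nh, lr = 6, 4, 0.05
    W = rng.normal(0, 0.1, (nv, nh))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def spike(p):                  # stochastic spikes from firing probabilities
        return (rng.random(p.shape) < p).astype(float)

    v_data = spike(np.full(nv, 0.5))             # stand-in for an input pattern
    h_data = spike(sigmoid(v_data @ W))          # "data" phase coincidences
    v_model = spike(sigmoid(W @ h_data))         # reconstruction
    h_model = spike(sigmoid(v_model @ W))        # "model" phase coincidences
    W += lr * (np.outer(v_data, h_data) - np.outer(v_model, h_model))
    ```

    In the event-driven setting this two-phase update dissolves into ongoing network activity, with STDP applying the same push-pull on weights asynchronously as spikes arrive.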

  10. Event-driven contrastive divergence for spiking neuromorphic systems

    PubMed Central

    Neftci, Emre; Das, Srinjoy; Pedroni, Bruno; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert

    2014-01-01

    Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons that is constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality. PMID:24574952

  11. Linking Native and Invader Traits Explains Native Spider Population Responses to Plant Invasion.

    PubMed

    Smith, Jennifer N; Emlen, Douglas J; Pearson, Dean E

    2016-01-01

    Theoretically, the functional traits of native species should determine how natives respond to invader-driven changes. To explore this idea, we simulated a large-scale plant invasion using dead spotted knapweed (Centaurea stoebe) stems to determine if native spiders' web-building behaviors could explain differences in spider population responses to structural changes arising from C. stoebe invasion. After two years, irregular web-spiders were >30 times more abundant and orb weavers were >23 times more abundant on simulated invasion plots compared to controls. Additionally, irregular web-spiders on simulated invasion plots built webs that were 4.4 times larger and 5.0 times more likely to capture prey, leading to >2-fold increases in recruitment. Orb-weavers showed no differences in web size or prey captures between treatments. Web-spider responses to simulated invasion mimicked patterns following natural invasions, confirming that C. stoebe's architecture is likely the primary attribute driving native spider responses to these invasions. Differences in spider responses were attributable to differences in web construction behaviors relative to historic web substrate constraints. Orb-weavers in this system constructed webs between multiple plants, so they were limited by the overall quantity of native substrates but not by the architecture of individual native plant species. Irregular web-spiders built their webs within individual plants and were greatly constrained by the diminutive architecture of native plant substrates, so they were limited both by quantity and quality of native substrates. Evaluating native species traits in the context of invader-driven change can explain invasion outcomes and help to identify factors limiting native populations.

  12. Linking Native and Invader Traits Explains Native Spider Population Responses to Plant Invasion

    PubMed Central

    Emlen, Douglas J.; Pearson, Dean E.

    2016-01-01

    Theoretically, the functional traits of native species should determine how natives respond to invader-driven changes. To explore this idea, we simulated a large-scale plant invasion using dead spotted knapweed (Centaurea stoebe) stems to determine if native spiders’ web-building behaviors could explain differences in spider population responses to structural changes arising from C. stoebe invasion. After two years, irregular web-spiders were >30 times more abundant and orb weavers were >23 times more abundant on simulated invasion plots compared to controls. Additionally, irregular web-spiders on simulated invasion plots built webs that were 4.4 times larger and 5.0 times more likely to capture prey, leading to >2-fold increases in recruitment. Orb-weavers showed no differences in web size or prey captures between treatments. Web-spider responses to simulated invasion mimicked patterns following natural invasions, confirming that C. stoebe’s architecture is likely the primary attribute driving native spider responses to these invasions. Differences in spider responses were attributable to differences in web construction behaviors relative to historic web substrate constraints. Orb-weavers in this system constructed webs between multiple plants, so they were limited by the overall quantity of native substrates but not by the architecture of individual native plant species. Irregular web-spiders built their webs within individual plants and were greatly constrained by the diminutive architecture of native plant substrates, so they were limited both by quantity and quality of native substrates. Evaluating native species traits in the context of invader-driven change can explain invasion outcomes and help to identify factors limiting native populations. PMID:27082240

  13. Implementation of an Integrated On-Board Aircraft Engine Diagnostic Architecture

    NASA Technical Reports Server (NTRS)

    Armstrong, Jeffrey B.; Simon, Donald L.

    2012-01-01

    An on-board diagnostic architecture for aircraft turbofan engine performance trending, parameter estimation, and gas-path fault detection and isolation has been developed and evaluated in a simulation environment. The architecture incorporates two independent models: a real-time self-tuning performance model providing parameter estimates and a performance baseline model for diagnostic purposes reflecting long-term engine degradation trends. This architecture was evaluated using flight profiles generated from a nonlinear model with realistic fleet engine health degradation distributions and sensor noise. The architecture was found to produce acceptable estimates of engine health and unmeasured parameters, and the integrated diagnostic algorithms were able to perform correct fault isolation in approximately 70 percent of the tested cases.
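    A toy sketch of the two-model arrangement is given below: a slowly adapting baseline tracks long-term degradation, a fast self-tuning estimate tracks the current engine, and their residual is thresholded. The smoothing constants, threshold, and data are illustrative assumptions, not the paper's algorithms.

    ```python
    # Two estimators over one measurement stream: the slow one absorbs gradual
    # degradation, the fast one follows the engine now; a large gap between
    # them flags an abrupt (fault-like) shift.
    def detect_fault(measurements, baseline, alpha=0.01, beta=0.3, threshold=3.0):
        tuned = baseline
        for y in measurements:
            baseline += alpha * (y - baseline)   # long-term degradation trend
            tuned += beta * (y - tuned)          # fast self-tuning estimate
            if abs(tuned - baseline) > threshold:
                return True                      # abrupt shift -> possible fault
        return False

    healthy = [100.0 + 0.01 * k for k in range(200)]           # slow drift only
    faulty = healthy[:100] + [y - 5.0 for y in healthy[100:]]  # step fault
    print(detect_fault(healthy, 100.0), detect_fault(faulty, 100.0))
    ```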

  14. Compositional Specification of Software Architecture

    NASA Technical Reports Server (NTRS)

    Penix, John; Lau, Sonie (Technical Monitor)

    1998-01-01

    This paper describes our experience using parameterized algebraic specifications to model properties of software architectures. The goal is to model the decomposition of requirements independently of the style used to implement the architecture. We begin by providing an overview of the role of architecture specification in software development. We then describe how architecture specifications are built up from component and connector specifications and give an overview of insights gained from a case study used to validate the method.

  15. Jupiter Europa Orbiter Architecture Definition Process

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert; Shishko, Robert

    2011-01-01

    The proposed Jupiter Europa Orbiter mission, planned for launch in 2020, is using a new architectural process and framework tool to drive its model-based systems engineering effort. The process focuses on getting the architecture right before writing requirements and developing a point design. A new architecture framework tool provides for the structured entry and retrieval of architecture artifacts based on an emerging architecture meta-model. This paper describes the relationships among these artifacts and how they are used in the systems engineering effort. Some early lessons learned are discussed.

  16. The use of geospatial web services for exchanging utilities data

    NASA Astrophysics Data System (ADS)

    Kuczyńska, Joanna

    2013-04-01

    Geographic information technologies and related geo-information systems currently play an important role in the management of public administration in Poland. One of these tasks is to maintain and update the Geodetic Evidence of Public Utilities (GESUT), part of the National Geodetic and Cartographic Resource, which contains information on technical infrastructure that is important to many institutions. This requires an active exchange of data between the Geodesy and Cartography Documentation Centers and the institutions that administrate transmission lines. The administrator of public utilities is legally obliged to provide information about utilities to GESUT. The aim of the research work was to develop a universal data exchange methodology that can be implemented on a variety of hardware and software platforms. This methodology uses the Unified Modeling Language (UML), the eXtensible Markup Language (XML), and the Geography Markup Language (GML). The proposed methodology is based on two different strategies: Model Driven Architecture (MDA) and Service Oriented Architecture (SOA). The solutions used are consistent with the INSPIRE Directive and the ISO 19100 series of standards for geographic information. On the basis of an analysis of the input data structures, conceptual models were built for both databases and written in UML. A combined model that defines a common data structure was also built and transformed into GML, the standard developed for the exchange of geographic information. The structure of the documents to be exchanged is defined in an .xsd file. Network services were selected and implemented in a system designed for data exchange based on open-source tools. The methodology was implemented and tested: data in the agreed structure, together with metadata, were set up on a server, and access was provided by geospatial network services, with data discovery through the Catalogue Service for the Web (CSW) and data collection through the Web Feature Service (WFS), as sketched below. WFS also provides operations for modifying data, for example so that a utility administrator can update them. The proposed solution significantly increases the efficiency of data exchange and facilitates maintenance of the National Geodetic and Cartographic Resource.
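    For instance, a WFS 2.0 GetFeature request of the kind used for data collection can be issued as a simple HTTP call. In this sketch the service URL and feature type name are placeholders, while the keyword-value parameters are standard OGC WFS 2.0 ones.

    ```python
    # Fetch features from a (placeholder) WFS endpoint with GetFeature.
    import requests

    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": "gesut:utility_line",   # hypothetical feature type
        "outputFormat": "application/gml+xml; version=3.2",
    }
    response = requests.get("https://example.org/geoserver/wfs", params=params)
    print(response.status_code, response.headers.get("Content-Type"))
    ```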

  17. The hierarchical and functional connectivity of higher-order cognitive mechanisms: neurorobotic model to investigate the stability and flexibility of working memory

    PubMed Central

    Alnajjar, Fady; Yamashita, Yuichi; Tani, Jun

    2013-01-01

    Higher-order cognitive mechanisms (HOCM), such as planning, cognitive branching, and switching, are known to be the outcomes of unique neural organization and dynamics between various regions of the frontal lobe. Although some recent anatomical and neuroimaging studies have shed light on the architecture underlying the formation of such mechanisms, the neural dynamics and the pathways within and between regions of the frontal lobe that form and/or tune the stability level of its working memory remain controversial. A model to clarify this aspect is therefore required. In this study, we propose a simple neurocomputational model that suggests the basic concept of how HOCM, including cognitive branching and switching in particular, may mechanistically emerge from time-based neural interactions. The proposed model is constructed such that its functional and structural hierarchy mimics, to a certain degree, the biological hierarchy that is believed to exist between local regions in the frontal lobe. Thus, the hierarchy is attained not only through the layout architecture of the neural connections but also through distinct types of neurons, each with different time properties. To validate the model, cognitive branching and switching tasks were simulated in a physical humanoid robot driven by the model. Results reveal that separation between the lower- and higher-level neurons in such a model is an essential factor in forming an appropriate working memory to handle cognitive branching and switching. The analysis of the obtained results also illustrates that the breadth of this separation is important in determining the characteristics of the resulting memory, either static or dynamic. This work can be considered joint research between synthetic and empirical studies, which may open an alternative research avenue for better understanding of brain mechanisms. PMID:23423881
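    A minimal sketch of the core mechanism, leaky-integrator units with distinct time constants so that slow units form the higher level and fast units the lower one, is given below. The sizes, time constants, and random weights are arbitrary illustrative choices rather than the paper's settings.

    ```python
    # Recurrent network of leaky-integrator neurons with two time scales:
    # small tau = fast, flexible dynamics; large tau = slow, stable dynamics.
    import numpy as np

    rng = np.random.default_rng(1)
    n_fast, n_slow = 8, 4
    tau = np.concatenate([np.full(n_fast, 2.0), np.full(n_slow, 20.0)])
    n = n_fast + n_slow
    W = rng.normal(0, 0.5, (n, n))
    u = np.zeros(n)                        # membrane potentials

    for t in range(100):
        y = np.tanh(u)                     # firing activity
        # Leaky integration, discretized with unit time step:
        u = (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ y)

    print(np.round(y[:3], 3), np.round(y[-3:], 3))
    ```

    The breadth of the gap between the fast and slow time constants is exactly the "separation" whose effect on memory stability the paper investigates.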

  18. The hierarchical and functional connectivity of higher-order cognitive mechanisms: neurorobotic model to investigate the stability and flexibility of working memory.

    PubMed

    Alnajjar, Fady; Yamashita, Yuichi; Tani, Jun

    2013-01-01

    Higher-order cognitive mechanisms (HOCM), such as planning, cognitive branching, and switching, are known to be the outcomes of unique neural organization and dynamics between various regions of the frontal lobe. Although some recent anatomical and neuroimaging studies have shed light on the architecture underlying the formation of such mechanisms, the neural dynamics and the pathways within and between regions of the frontal lobe that form and/or tune the stability level of its working memory remain controversial. A model to clarify this aspect is therefore required. In this study, we propose a simple neurocomputational model that suggests the basic concept of how HOCM, including cognitive branching and switching in particular, may mechanistically emerge from time-based neural interactions. The proposed model is constructed such that its functional and structural hierarchy mimics, to a certain degree, the biological hierarchy that is believed to exist between local regions in the frontal lobe. Thus, the hierarchy is attained not only through the layout architecture of the neural connections but also through distinct types of neurons, each with different time properties. To validate the model, cognitive branching and switching tasks were simulated in a physical humanoid robot driven by the model. Results reveal that separation between the lower- and higher-level neurons in such a model is an essential factor in forming an appropriate working memory to handle cognitive branching and switching. The analysis of the obtained results also illustrates that the breadth of this separation is important in determining the characteristics of the resulting memory, either static or dynamic. This work can be considered joint research between synthetic and empirical studies, which may open an alternative research avenue for better understanding of brain mechanisms.

  19. Instruction-level performance modeling and characterization of multimedia applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Y.; Cameron, K.W.

    1999-06-01

    One of the challenges in characterizing and modeling realistic multimedia applications is the lack of access to source code. On-chip performance counters effectively resolve this problem by monitoring run-time behavior at the instruction level. This paper presents a novel technique for characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from multimedia applications such as RealPlayer, the GSM vocoder, MPEG encoder/decoder, and a speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and the architectural bottleneck for each application. The technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI_0, the CPI without memory effect, and quantify the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. The results show promise for code characterization and empirical/analytical modeling.
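    For orientation, a standard textbook decomposition consistent with the abstract's CPI_0 notation is shown below; the authors' derived formulas may differ in their exact terms.

    ```latex
    % CPI split into a memory-free core term and a memory stall term;
    % f_i denotes the fraction of retired instructions in class i.
    \mathrm{CPI} = \mathrm{CPI}_0 + \frac{\text{memory stall cycles}}{\text{retired instructions}},
    \qquad
    \mathrm{CPI}_0 = \sum_i f_i \,\mathrm{CPI}_i
    ```

    Both the instruction-class fractions and the stall terms are exactly the kinds of quantities that hardware performance counters can supply without source-code access.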

  20. Secular Resonances During Main-Sequence and Post-Main-Sequence Planetary System Dynamics

    NASA Astrophysics Data System (ADS)

    Smallwood, Jeremy L.

    We investigate gravitational perturbations of an asteroid belt by secular resonances. We apply analytic and numerical models to main-sequence and post-main-sequence planetary systems. First, we investigate how the asteroid impact rate on the Earth is affected by the architecture of the planetary system. We find that the ν6 resonance plays an important role in the asteroid collision rate with the Earth. Compared to exoplanetary systems, the solar system is somewhat special in its lack of a super-Earth mass planet in the inner solar system. We therefore consider the effects of the presence of a super-Earth in the terrestrial planet region. We find a significant effect for super-Earths with a mass of around 10 M⊕ and a separation greater than about 0.7 AU. These results have implications for the habitability of exoplanetary systems. Secondly, we model white dwarf pollution by asteroids from secular resonances. In the past few decades, observations have revealed signatures of metals polluting the atmospheres of white dwarfs that require a continuous accretion of asteroids. We show that secular resonances driven by two outer companions can provide a source of pollution if an inner terrestrial planet is engulfed during the red-giant branch phase. Secular resonances may be a viable mechanism for the pollution of white dwarfs in a variety of exoplanetary system architectures, including systems with two giant planets and systems with one giant planet and a binary star companion.

  1. Prediction of daily sea surface temperature using efficient neural networks

    NASA Astrophysics Data System (ADS)

    Patil, Kalpesh; Deo, Makaranad Chintamani

    2017-04-01

    Short-term prediction of sea surface temperature (SST) is commonly achieved through numerical models. Numerical approaches are more suitable for use over a large spatial domain than at a specific site because of the difficulties involved in resolving various physical sub-processes at local levels. Therefore, for a given location, a data-driven approach such as neural networks may provide a better alternative. The application of neural networks, however, requires extensive experimentation with their architecture, training methods, and the formation of appropriate input-output pairs. A network trained in this manner can provide more attractive results if advances in network architecture are additionally considered. With this in mind, we propose the use of wavelet neural networks (WNNs) for the prediction of daily SST values up to 5 days into the future at six different locations in the Indian Ocean. First, the accuracy of site-specific SST values predicted by a numerical model, ROMS, was assessed against in situ records; the result pointed out the necessity for alternative approaches. Traditional networks were then tried and, after noticing their poor performance, WNNs were used. This approach produced attractive forecasts when judged through various error statistics. When all locations were viewed together, the mean absolute error was within 0.18 to 0.32 °C for a 5-day-ahead forecast. The WNN approach was thus found to add value to the numerical method of SST prediction when location-specific information is desired.
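    A tiny wavelet-network sketch follows: hidden units apply a Morlet wavelet to scaled and shifted inputs, and a linear output layer is fitted by least squares. The wavelet choice, network size, and toy series are illustrative assumptions, not the paper's configuration.

    ```python
    # Toy wavelet neural network: Morlet-wavelet hidden units on lagged inputs,
    # linear readout fitted by least squares.
    import numpy as np

    rng = np.random.default_rng(0)

    def morlet(x):
        return np.cos(1.75 * x) * np.exp(-0.5 * x**2)

    # Toy series: predict the next value from the previous 3 (lagged inputs).
    series = np.sin(np.arange(200) * 0.2)
    X = np.stack([series[i:i + 3] for i in range(len(series) - 3)])
    y = series[3:]

    n_hidden = 10
    a = rng.uniform(0.5, 2.0, (n_hidden, 3))     # dilation per input
    b = rng.uniform(-1, 1, (n_hidden, 3))        # translation per input
    H = morlet((X[:, None, :] - b) / a).prod(axis=2)   # hidden activations
    w, *_ = np.linalg.lstsq(H, y, rcond=None)    # linear output layer
    print("train RMSE:", np.sqrt(np.mean((H @ w - y) ** 2)))
    ```

    In a real WNN the dilations and translations would also be trained rather than fixed at random, which is where the wavelet structure earns its keep over a plain feedforward network.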

  2. A VO-Driven Astronomical Data Grid in China

    NASA Astrophysics Data System (ADS)

    Cui, C.; He, B.; Yang, Y.; Zhao, Y.

    2010-12-01

    With the implementation of many ambitious observation projects, including LAMOST, FAST, and the Antarctic observatory at Dome A, observational astronomy in China is stepping into a brand new era with an emerging data avalanche. In the era of e-Science, both these cutting-edge projects and traditional astronomy research need much more powerful data management, sharing, and interoperability. Based on the data-grid concept and taking advantage of IVOA interoperability technologies, China-VO is developing a VO-driven astronomical data grid environment to enable multi-wavelength science and large database science. In this paper, the latest progress and data flow of LAMOST, the architecture of the data grid, and its support for the VO are discussed.

  3. Data-assisted reduced-order modeling of extreme events in complex dynamical systems

    PubMed Central

    Koumoutsakos, Petros

    2018-01-01

    The prediction of extreme events, from avalanches and droughts to tsunamis and epidemics, depends on the formulation and analysis of relevant, complex dynamical systems. Such dynamical systems are characterized by high intrinsic dimensionality, with extreme events having the form of rare transitions that are several standard deviations away from the mean. Such systems are not amenable to classical order-reduction methods through projection of the governing equations, due to the large intrinsic dimensionality of the underlying attractor as well as the complexity of the transient events. Alternatively, data-driven techniques aim to quantify the dynamics of specific, critical modes by utilizing data-streams and by expanding the dimensionality of the reduced-order model using delayed coordinates. In turn, these methods have major limitations in regions of the phase space with sparse data, which is the case for extreme events. In this work, we develop a novel hybrid framework that complements an imperfect reduced-order model with data-streams that are integrated through a recurrent neural network (RNN) architecture. The reduced-order model has the form of projected equations into a low-dimensional subspace that still contains important dynamical information about the system, and it is expanded by a long short-term memory (LSTM) regularization. The LSTM-RNN is trained by analyzing the mismatch between the imperfect model and the data-streams, projected to the reduced-order space. The data-driven model assists the imperfect model in regions where data is available, while for locations where data is sparse the imperfect model still provides a baseline for the prediction of the system state. We assess the developed framework on two challenging prototype systems exhibiting extreme events. We show that the blended approach has improved performance compared with methods that use either data-streams or the imperfect model alone. Notably, the improvement is more significant in regions associated with extreme events, where data is sparse. PMID:29795631
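    The hybrid step can be caricatured in a few lines of PyTorch: an imperfect reduced-order model supplies the baseline prediction and an LSTM is trained on the model/data mismatch to supply the correction. The toy dynamics, sizes, and training loop below are stand-ins, not the paper's systems.

    ```python
    # Hybrid forecast = imperfect reduced-order model + LSTM-learned correction,
    # where the LSTM is trained on the mismatch between model and data.
    import torch

    torch.manual_seed(0)
    dim, steps = 3, 200

    def rom_step(x):                      # imperfect model: slightly wrong decay
        return 0.9 * x

    true_decay = 0.8                      # the "reality" the ROM mis-specifies
    traj = [torch.randn(dim)]
    for _ in range(steps - 1):
        traj.append(true_decay * traj[-1])
    traj = torch.stack(traj)              # observed data-stream, (steps, dim)

    lstm = torch.nn.LSTM(input_size=dim, hidden_size=16, batch_first=True)
    head = torch.nn.Linear(16, dim)
    opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=1e-2)

    inputs = traj[:-1].unsqueeze(0)               # (1, steps-1, dim)
    mismatch = traj[1:] - rom_step(traj[:-1])     # what the ROM gets wrong
    for _ in range(200):                          # train the corrector
        out, _ = lstm(inputs)
        loss = torch.mean((head(out).squeeze(0) - mismatch) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()

    out, _ = lstm(inputs)                         # hybrid prediction
    hybrid = rom_step(traj[:-1]) + head(out).squeeze(0)
    print("hybrid MSE:", torch.mean((hybrid - traj[1:]) ** 2).item())
    ```

    Where data are sparse, the learned correction contributes little and the forecast falls back on the ROM baseline, which is the blending behaviour the abstract describes.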

  4. Data-assisted reduced-order modeling of extreme events in complex dynamical systems.

    PubMed

    Wan, Zhong Yi; Vlachas, Pantelis; Koumoutsakos, Petros; Sapsis, Themistoklis

    2018-01-01

    The prediction of extreme events, from avalanches and droughts to tsunamis and epidemics, depends on the formulation and analysis of relevant, complex dynamical systems. Such dynamical systems are characterized by high intrinsic dimensionality, with extreme events having the form of rare transitions that are several standard deviations away from the mean. Such systems are not amenable to classical order-reduction methods through projection of the governing equations, due to the large intrinsic dimensionality of the underlying attractor as well as the complexity of the transient events. Alternatively, data-driven techniques aim to quantify the dynamics of specific, critical modes by utilizing data-streams and by expanding the dimensionality of the reduced-order model using delayed coordinates. In turn, these methods have major limitations in regions of the phase space with sparse data, which is the case for extreme events. In this work, we develop a novel hybrid framework that complements an imperfect reduced-order model with data-streams that are integrated through a recurrent neural network (RNN) architecture. The reduced-order model has the form of projected equations into a low-dimensional subspace that still contains important dynamical information about the system, and it is expanded by a long short-term memory (LSTM) regularization. The LSTM-RNN is trained by analyzing the mismatch between the imperfect model and the data-streams, projected to the reduced-order space. The data-driven model assists the imperfect model in regions where data is available, while for locations where data is sparse the imperfect model still provides a baseline for the prediction of the system state. We assess the developed framework on two challenging prototype systems exhibiting extreme events. We show that the blended approach has improved performance compared with methods that use either data-streams or the imperfect model alone. Notably, the improvement is more significant in regions associated with extreme events, where data is sparse.

  5. Modeling Techniques for High Dependability Protocols and Architecture

    NASA Technical Reports Server (NTRS)

    LaValley, Brian; Ellis, Peter; Walter, Chris J.

    2012-01-01

    This report documents an investigation into modeling high-dependability protocols and some specific challenges that were identified as a result of the experiments. The need for an approach was established, and foundational concepts were proposed for modeling the different layers of a complex protocol and capturing the compositional properties that provide high-dependability services for a system architecture. The approach centers on the definition of an architecture layer, its interfaces for composability with other layers, and its bindings to a platform-specific architecture model that implements the protocols required for the layer.

  6. Assessing the effects of architectural variations on light partitioning within virtual wheat–pea mixtures

    PubMed Central

    Barillot, Romain; Escobar-Gutiérrez, Abraham J.; Fournier, Christian; Huynh, Pierre; Combes, Didier

    2014-01-01

    Background and Aims Predicting light partitioning in crop mixtures is a critical step in improving the productivity of such complex systems, and light interception has been shown to be closely linked to plant architecture. The aim of the present work was to analyse the relationships between plant architecture and light partitioning within wheat–pea (Triticum aestivum–Pisum sativum) mixtures. An existing model for wheat was utilized and a new model for pea morphogenesis was developed. Both models were then used to assess the effects of architectural variations in light partitioning. Methods First, a deterministic model (L-Pea) was developed in order to obtain dynamic reconstructions of pea architecture. The L-Pea model is based on L-systems formalism and consists of modules for ‘vegetative development’ and ‘organ extension’. A tripartite simulator was then built up from pea and wheat models interfaced with a radiative transfer model. Architectural parameters from both plant models, selected on the basis of their contribution to leaf area index (LAI), height and leaf geometry, were then modified in order to generate contrasting architectures of wheat and pea. Key results By scaling down the analysis to the organ level, it could be shown that the number of branches/tillers and length of internodes significantly determined the partitioning of light within mixtures. Temporal relationships between light partitioning and the LAI and height of the different species showed that light capture was mainly related to the architectural traits involved in plant LAI during the early stages of development, and in plant height during the onset of interspecific competition. Conclusions In silico experiments enabled the study of the intrinsic effects of architectural parameters on the partitioning of light in crop mixtures of wheat and pea. The findings show that plant architecture is an important criterion for the identification/breeding of plant ideotypes, particularly with respect to light partitioning. PMID:24907314
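    Since the L-Pea model rests on the L-systems formalism, a minimal parallel-rewriting sketch may help fix ideas. The alphabet and the single production rule are invented for illustration (A = apex, I = internode, L = leaflet) and are not the actual L-Pea production set.

    ```python
    # Minimal L-system: every symbol of the module string is rewritten in
    # parallel at each step; symbols without a rule are copied unchanged.
    rules = {"A": "I[L]A"}    # the apex extends an internode, a leaflet, itself

    def rewrite(axiom, steps):
        s = axiom
        for _ in range(steps):
            s = "".join(rules.get(ch, ch) for ch in s)
        return s

    print(rewrite("A", 3))    # -> I[L]I[L]I[L]A
    ```

    Real plant models attach geometric parameters (lengths, angles, extension dynamics) to each module, which is what lets the rewritten string be turned into a 3-D canopy for the radiative transfer computation.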

  7. RACE/A: An Architectural Account of the Interactions between Learning, Task Control, and Retrieval Dynamics

    ERIC Educational Resources Information Center

    van Maanen, Leendert; van Rijn, Hedderik; Taatgen, Niels

    2012-01-01

    This article discusses how sequential sampling models can be integrated in a cognitive architecture. The new theory Retrieval by Accumulating Evidence in an Architecture (RACE/A) combines the level of detail typically provided by sequential sampling models with the level of task complexity typically provided by cognitive architectures. We will use…

  8. DICCCOL: Dense Individualized and Common Connectivity-Based Cortical Landmarks

    PubMed Central

    Zhu, Dajiang; Guo, Lei; Jiang, Xi; Zhang, Tuo; Zhang, Degang; Chen, Hanbo; Deng, Fan; Faraco, Carlos; Jin, Changfeng; Wee, Chong-Yaw; Yuan, Yixuan; Lv, Peili; Yin, Yan; Hu, Xiaolei; Duan, Lian; Hu, Xintao; Han, Junwei; Wang, Lihong; Shen, Dinggang; Miller, L Stephen

    2013-01-01

    Is there a common structural and functional cortical architecture that can be quantitatively encoded and precisely reproduced across individuals and populations? This question is still largely unanswered due to the vast complexity, variability, and nonlinearity of the cerebral cortex. Here, we hypothesize that the common cortical architecture can be effectively represented by group-wise consistent structural fiber connections and take a novel data-driven approach to explore the cortical architecture. We report a dense and consistent map of 358 cortical landmarks, named Dense Individualized and Common Connectivity–based Cortical Landmarks (DICCCOLs). Each DICCCOL is defined by group-wise consistent white-matter fiber connection patterns derived from diffusion tensor imaging (DTI) data. Our results have shown that these 358 landmarks are remarkably reproducible over more than one hundred human brains and possess accurate intrinsically established structural and functional cross-subject correspondences validated by large-scale functional magnetic resonance imaging data. In particular, these 358 cortical landmarks can be accurately and efficiently predicted in a new single brain with DTI data. Thus, this set of 358 DICCCOL landmarks comprehensively encodes the common structural and functional cortical architectures, providing opportunities for many applications in brain science including mapping human brain connectomes, as demonstrated in this work. PMID:22490548

  9. The Acoustical Properties of the Polyurethane Concrete Made of Oyster Shell Waste Comparing Other Concretes as Architectural Design Components

    NASA Astrophysics Data System (ADS)

    Setyowati, Erni; Hardiman, Gagoek; Purwanto

    2018-02-01

    This research aims to determine the acoustical properties of a concrete material made of polyurethane and oyster shell waste as both fine and coarse aggregate, compared with other concrete mortars. Architecture needs aesthetic materials, so innovation in architectural materials should be driven by research on materials for building design. A design-of-experiments (DOE) method was used, mixing cement, oyster shell, sand, and polyurethane in proportions of 160 ml : 40 ml : 100 ml : 120 ml, respectively. Referring to the results of previous research, cement consumption was reduced by up to 20% to keep to the concept of green materials. This study compared three different mortar compositions, namely Portland cement concrete with gravel (PCG), polyurethane concrete with oyster shell (PCO), and concrete with plastics aggregate (PCP). The acoustical tests were conducted according to the ASTM E413-04 standard. The results showed that the polyurethane concrete with oyster shell waste aggregate has an absorption coefficient of 0.52 and an STL of 63 dB, and has a more beautiful appearance when pressed into a mould. It can be concluded that polyurethane concrete with oyster shell aggregate (PCO) is well suited for implementation in architectural acoustic components.

  10. A flexible data fusion architecture for persistent surveillance using ultra-low-power wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Hanson, Jeffrey A.; McLaughlin, Keith L.; Sereno, Thomas J.

    2011-06-01

    We have developed a flexible, target-driven, multi-modal, physics-based fusion architecture that efficiently searches sensor detections for targets and rejects clutter while controlling the combinatoric problems that commonly arise in data-driven fusion systems. The informational constraints imposed by long lifetime requirements make systems vulnerable to false alarms. We demonstrate that our data fusion system significantly reduces false alarms while maintaining high sensitivity to threats. In addition, mission goals can vary substantially in terms of targets of interest, required characterization, acceptable latency, and false alarm rates. Our fusion architecture provides the flexibility to match these trade-offs with mission requirements, unlike many conventional systems that require significant modifications for each new mission. We illustrate our data fusion performance with case studies that span many of the potential mission scenarios, including border surveillance, base security, and infrastructure protection. In these studies, we deployed multi-modal sensor nodes - including geophones, magnetometers, accelerometers and PIR sensors - with low-power processing algorithms and low-bandwidth wireless mesh networking to create networks capable of multi-year operation. The results show our data fusion architecture maintains high sensitivity while suppressing most false alarms for a variety of environments and targets.

  11. NOTCH-mediated non-cell autonomous regulation of chromatin structure during senescence.

    PubMed

    Parry, Aled J; Hoare, Matthew; Bihary, Dóra; Hänsel-Hertsch, Robert; Smith, Stephen; Tomimatsu, Kosuke; Mannion, Elizabeth; Smith, Amy; D'Santos, Paula; Russell, I Alasdair; Balasubramanian, Shankar; Kimura, Hiroshi; Samarajiwa, Shamith A; Narita, Masashi

    2018-05-09

    Senescent cells interact with the surrounding microenvironment achieving diverse functional outcomes. We have recently identified that NOTCH1 can drive 'lateral induction' of a unique senescence phenotype in adjacent cells by specifically upregulating the NOTCH ligand JAG1. Here we show that NOTCH signalling can modulate chromatin structure autonomously and non-autonomously. In addition to senescence-associated heterochromatic foci (SAHF), oncogenic RAS-induced senescent (RIS) cells exhibit a massive increase in chromatin accessibility. NOTCH signalling suppresses SAHF and increased chromatin accessibility in this context. Strikingly, NOTCH-induced senescent cells, or cancer cells with high JAG1 expression, drive similar chromatin architectural changes in adjacent cells through cell-cell contact. Mechanistically, we show that NOTCH signalling represses the chromatin architectural protein HMGA1, an association found in multiple human cancers. Thus, HMGA1 is involved not only in SAHFs but also in RIS-driven chromatin accessibility. In conclusion, this study identifies that the JAG1-NOTCH-HMGA1 axis mediates the juxtacrine regulation of chromatin architecture.

  12. Deep learning architecture for iris recognition based on optimal Gabor filters and deep belief network

    NASA Astrophysics Data System (ADS)

    He, Fei; Han, Ye; Wang, Han; Ji, Jinchao; Liu, Yuanning; Ma, Zhiqiang

    2017-03-01

    Gabor filters are widely utilized to detect iris texture information in several state-of-the-art iris recognition systems. However, the proper Gabor kernels and the generative pattern of iris Gabor features need to be predetermined for each application. Traditional empirical Gabor filters and shallow iris encoding schemes are incapable of dealing with the complex variations in iris imaging, including illumination, aging, deformation, and device variations. Therefore, an adaptive Gabor filter selection strategy and a deep learning architecture are presented. We first employ the particle swarm optimization approach and its binary version to define a set of data-driven Gabor kernels that fit the most informative filtering bands, and then capture complex patterns from the optimal Gabor-filtered coefficients with a trained deep belief network. A succession of comparative experiments validates that our optimal Gabor filters may produce more distinctive Gabor coefficients and that our deep iris representations may be more robust and stable than traditional iris Gabor codes. Furthermore, the depth and scales of the deep learning architecture are also discussed.
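
    As a concrete illustration of the filtering stage, here is a minimal sketch of a 2-D Gabor kernel and a sign-binarized multi-orientation encoding. In the paper the kernel parameters are selected by (binary) particle swarm optimization and the encoding is learned by a deep belief network; the hand-fixed parameters and the shallow binarization below are simplifying assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(ksize=31, sigma=4.0, theta=0.0, lam=10.0, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor filter: a Gaussian envelope times a cosine
    carrier. In the paper these parameters are chosen by PSO; here they are
    fixed by hand purely for illustration."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) \
           * np.cos(2 * np.pi * xr / lam + psi)

def iris_code(img, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Filter a normalized iris strip at several orientations and binarize
    the responses by sign - a shallow stand-in for the deep encoding stage."""
    responses = [convolve2d(img, gabor_kernel(theta=t), mode="same") for t in thetas]
    return np.stack([(r > 0).astype(np.uint8) for r in responses])

rng = np.random.default_rng(0)
strip = rng.random((64, 256))           # stand-in for a normalized iris image
print(iris_code(strip).shape)           # -> (4, 64, 256)
```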

  13. Terahertz Array Receivers with Integrated Antennas

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Goutam; Llombart, Nuria; Lee, Choonsup; Jung, Cecile; Lin, Robert; Cooper, Ken B.; Reck, Theodore; Siles, Jose; Schlecht, Erich; Peralta, Alessandro

    2011-01-01

    Highly sensitive terahertz heterodyne receivers have mostly been single-pixel. However, there is now a real need for multi-pixel array receivers at these frequencies, driven by science and instrument requirements. In this paper we explore various receiver front-end and antenna architectures for use in multi-pixel integrated arrays at terahertz frequencies. Development of wafer-level integrated terahertz receiver front-ends using advanced semiconductor fabrication technologies has progressed very well over the past few years. Novel stacking of micro-machined silicon wafers, which allows for the 3-dimensional integration of various terahertz receiver components in extremely small packages, has made it possible to design multi-pixel heterodyne arrays. One of the critical technologies needed to achieve a fully integrated system is an antenna array compatible with the receiver array architecture. In this paper we explore different receiver and antenna architectures for multi-pixel heterodyne and direct detector arrays for various applications, such as multi-pixel high-resolution spectrometers and imaging radar at terahertz frequencies.

  14. On TTEthernet for Integrated Fault-Tolerant Spacecraft Networks

    NASA Technical Reports Server (NTRS)

    Loveless, Andrew

    2015-01-01

    There has recently been a push for adopting integrated modular avionics (IMA) principles in designing spacecraft architectures. This consolidation of multiple vehicle functions to shared computing platforms can significantly reduce spacecraft cost, weight, and design complexity. Ethernet technology is attractive for inclusion in more integrated avionic systems due to its high speed, flexibility, and the availability of inexpensive commercial off-the-shelf (COTS) components. Furthermore, Ethernet can be augmented with a variety of quality of service (QoS) enhancements that enable its use for transmitting critical data. TTEthernet introduces a decentralized clock synchronization paradigm enabling the use of time-triggered Ethernet messaging appropriate for hard real-time applications. TTEthernet can also provide two forms of event-driven communication, therefore accommodating the full spectrum of traffic criticality levels required in IMA architectures. This paper explores the application of TTEthernet technology to future IMA spacecraft architectures as part of the Avionics and Software (A&S) project chartered by NASA's Advanced Exploration Systems (AES) program.
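
    The essence of time-triggered messaging is a static dispatch table: each virtual link transmits at a fixed offset within a repeating cluster cycle, so critical frames never contend on the wire. The toy schedule below illustrates only this idea; the cycle length, offsets, and virtual-link names are invented and do not come from the paper or the SAE AS6802 specification.

```python
# Toy time-triggered dispatch table: each virtual link gets a fixed send
# offset inside a repeating cluster cycle, so critical frames never contend.
# Offsets, periods, and link names are illustrative, not from the standard.
CYCLE_US = 10_000                      # cluster cycle length (microseconds)
SCHEDULE = [                           # (offset_us, virtual_link, payload)
    (0,     "VL-guidance", "attitude quaternion"),
    (2_500, "VL-nav",      "state vector"),
    (5_000, "VL-fdir",     "fault flags"),
]

def frames_in_window(t_start_us, t_end_us):
    """List the TT frames scheduled in [t_start_us, t_end_us): fold each
    cycle boundary onto the timeline and compare against the static offsets."""
    out = []
    cycle = t_start_us - (t_start_us % CYCLE_US)
    while cycle < t_end_us:
        for off, vl, payload in SCHEDULE:
            t = cycle + off
            if t_start_us <= t < t_end_us:
                out.append((t, vl, payload))
        cycle += CYCLE_US
    return sorted(out)

# Two full cycles: each virtual link fires exactly twice, at fixed phases.
for t, vl, p in frames_in_window(0, 20_000):
    print(f"{t:>6} us  {vl:<12} {p}")
```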

  15. Ontology driven integration platform for clinical and translational research

    PubMed Central

    Mirhaji, Parsa; Zhu, Min; Vagnoni, Mattew; Bernstam, Elmer V; Zhang, Jiajie; Smith, Jack W

    2009-01-01

    Semantic Web technologies offer a promising framework for the integration of disparate biomedical data. In this paper we present the semantic information integration platform under development at the Center for Clinical and Translational Sciences (CCTS) at the University of Texas Health Science Center at Houston (UTHSC-H) as part of our Clinical and Translational Science Award (CTSA) program. We utilize Semantic Web technologies not only to integrate, repurpose, and classify multi-source clinical data, but also to construct a distributed environment for information sharing and online collaboration. A Service Oriented Architecture (SOA) is used to modularize and distribute reusable services in a dynamic and distributed environment. Components of the semantic solution and its overall architecture are described. PMID:19208190

  16. Multilayered nano-architecture of variable sized graphene nanosheets for enhanced supercapacitor electrode performance.

    PubMed

    Biswas, Sanjib; Drzal, Lawrence T

    2010-08-01

    The diverse physical and chemical aspects of graphene nanosheets, such as particle size, surface area, and edge chemistry, were combined to fabricate a new supercapacitor electrode architecture consisting of a highly aligned network of large-sized nanosheets acting as a series of current collectors within a multilayer configuration of the bulk electrode. Capillary-driven self-assembly of monolayers of graphene nanosheets was employed to create a flexible, multilayer, free-standing film of highly hydrophobic nanosheets over large macroscopic areas. This nanoarchitecture exhibits a high-frequency capacitive response and a nearly rectangular cyclic voltammogram at a 1000 mV/s scanning rate, and possesses a rapid current response, small equivalent series resistance (ESR), and fast ionic diffusion for high-power electrical double-layer capacitor (EDLC) applications.

  17. Demand Activated Manufacturing Architecture (DAMA) model for supply chain collaboration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CHAPMAN,LEON D.; PETERSEN,MARJORIE B.

    The Demand Activated Manufacturing Architecture (DAMA) project, during the last five years of work with the U.S. Integrated Textile Complex (retail, apparel, textile, and fiber sectors), has developed an inter-enterprise architecture and collaborative model for supply chains. This model will enable improved collaborative business across any supply chain. The DAMA Model for Supply Chain Collaboration is a high-level model for collaboration to achieve Demand Activated Manufacturing. The five major elements of the architecture to support collaboration are (1) activity or process, (2) information, (3) application, (4) data, and (5) infrastructure. These five elements are tied to the application of the DAMA architecture to three phases of collaboration - prepare, pilot, and scale. There are six collaborative activities that may be employed in this model: (1) Develop Business Planning Agreements, (2) Define Products, (3) Forecast and Plan Capacity Commitments, (4) Schedule Product and Product Delivery, (5) Expedite Production and Delivery Exceptions, and (6) Populate Supply Chain Utility. The Supply Chain Utility is a set of applications implemented to support collaborative product definition, forecast visibility, planning, scheduling, and execution. The DAMA architecture and model are presented along with the process for implementing this DAMA model.

  18. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    NASA Technical Reports Server (NTRS)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    Computational aero sciences and other numerically intensive computing disciplines demand throughputs substantially greater than the Teraflops-scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that, in combination with sufficient resolution and advanced adaptive techniques, may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithm techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops-scale computing in the 2004/5 timeframe. The Hybrid-Technology MultiThreaded (HTMT) parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) at one percent of the power required by conventional semiconductor logic. Wave Division Multiplexing optical communications can approach a peak per-fiber bandwidth of 1 Tbps, and the new Data Vortex network topology employing this technology can connect tens of thousands of ports, providing a bi-section bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip, exposing the internal bandwidth of the memory row buffers at low latency. Holographic photorefractive storage technologies provide high-density memory with access a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared-memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops-scale systems. To achieve high sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven coarse-grained latency management strategy called "percolation", yielding high efficiency while removing much of the parallel programming burden. This paper presents the basic system architecture characteristics made possible by this series of advanced technologies and then gives a detailed description of the new percolation approach to runtime latency management.

  19. Structural basis of ligand interaction with atypical chemokine receptor 3

    NASA Astrophysics Data System (ADS)

    Gustavsson, Martin; Wang, Liwen; van Gils, Noortje; Stephens, Bryan S.; Zhang, Penglie; Schall, Thomas J.; Yang, Sichun; Abagyan, Ruben; Chance, Mark R.; Kufareva, Irina; Handel, Tracy M.

    2017-01-01

    Chemokines drive cell migration through their interactions with seven-transmembrane (7TM) chemokine receptors on cell surfaces. The atypical chemokine receptor 3 (ACKR3) binds chemokines CXCL11 and CXCL12 and signals exclusively through β-arrestin-mediated pathways, without activating canonical G-protein signalling. This receptor is upregulated in numerous cancers making it a potential drug target. Here we collected over 100 distinct structural probes from radiolytic footprinting, disulfide trapping, and mutagenesis to map the structures of ACKR3:CXCL12 and ACKR3:small-molecule complexes, including dynamic regions that proved unresolvable by X-ray crystallography in homologous receptors. The data are integrated with molecular modelling to produce complete and cohesive experimentally driven models that confirm and expand on the existing knowledge of the architecture of receptor:chemokine and receptor:small-molecule complexes. Additionally, we detected and characterized ligand-induced conformational changes in the transmembrane and intracellular regions of ACKR3 that elucidate fundamental structural elements of agonism in this atypical receptor.

  20. Architectural Blueprint for Plate Boundary Observatories based on interoperable Data Management Platforms

    NASA Astrophysics Data System (ADS)

    Kerschke, D. I.; Häner, R.; Schurr, B.; Oncken, O.; Wächter, J.

    2014-12-01

    Interoperable data management platforms play an increasing role in the advancement of knowledge and technology in many scientific disciplines. Through high-quality services they support the establishment of efficient and innovative research environments. Well-designed research environments can facilitate the sustainable utilization, exchange, and re-use of scientific data and functionality by using standardized community models. Together with innovative 3D/4D visualization, these concepts provide added value in improving scientific knowledge-gain, even across the boundaries of disciplines. A project benefiting from this added value is the Integrated Plate boundary Observatory in Chile (IPOC). IPOC is a European-South American network to study earthquakes and deformation at the Chilean continental margin and to monitor the plate boundary system in order to capture an anticipated great earthquake in a seismic gap. In contrast to conventional observatories that monitor individual signals only, IPOC captures a large range of different processes through various observation methods (e.g., seismographs, GPS, magneto-telluric sensors, creep-meters, accelerometers, InSAR). For IPOC a conceptual design has been devised that comprises an architectural blueprint for a data management platform based on common and standardized data models, protocols, and encodings, as well as on the exclusive use of Free and Open Source Software (FOSS), including visualization components. Following the principles of event-driven, service-oriented architectures, the design enables novel processes by sharing and re-using functionality and information on the basis of innovative data mining and data fusion technologies. This platform can help to improve the understanding of the physical processes underlying plate deformation as well as the natural hazards induced by them. Through the use of standards, this blueprint can be adapted not only to other plate observing systems (e.g., the European Plate Observing System, EPOS); it also supports integrated approaches to include sensor networks that provide complementary processes for dynamic monitoring. Moreover, the integration of such observatories into superordinate research infrastructures (federations of virtual observatories) will be enabled.

  1. Development of a model-based flood emergency management system in Yujiang River Basin, South China

    NASA Astrophysics Data System (ADS)

    Zeng, Yong; Cai, Yanpeng; Jia, Peng; Mao, Jiansu

    2014-06-01

    Flooding is the most frequent disaster in China. It affects people's lives and property, causing considerable economic loss. Flood forecasting and reservoir operation are important in flood emergency management. Although great progress has been achieved in flood forecasting and reservoir operation in China through the use of computers, network technology, and geographic information systems, the prediction accuracy of models is often unsatisfactory due to the unavailability of real-time monitoring data. Also, real-time flood control scenario analysis is not effective in many regions and can seldom provide an online decision support function. In this research, a decision support system for real-time flood forecasting in the Yujiang River Basin, South China (DSS-YRB) is introduced. The system is based on hydrological and hydraulic mathematical models. The conceptual framework and detailed components of the proposed DSS-YRB are illustrated; the system employs real-time rainfall data conversion, model-driven hydrologic forecasting, model calibration, data assimilation methods, and reservoir operational scenario analysis. Its multi-tiered architecture offers great flexibility, portability, reusability, and reliability. The case study results show that the development and application of a decision support system for real-time flood forecasting and operation is beneficial for flood control.

  2. On Using SysML, DoDAF 2.0 and UPDM to Model the Architecture for the NOAA's Joint Polar Satellite System (JPSS) Ground System (GS)

    NASA Technical Reports Server (NTRS)

    Hayden, Jeffrey L.; Jeffries, Alan

    2012-01-01

    The JPSS Ground System is a flexible system of systems responsible for telemetry, tracking & command (TT&C), data acquisition, routing, and data processing services for a varied fleet of satellites to support weather prediction, modeling, and climate modeling. To assist in this engineering effort, architecture modeling tools are being employed to translate the former NPOESS baseline to the new JPSS baseline. The paper focuses on the methodology of the system engineering process and the use of these architecture modeling tools within that process. The Department of Defense Architecture Framework version 2.0 (DoDAF 2.0) viewpoints and views that are being used to describe the JPSS GS architecture are discussed. The Unified Profile for DoDAF and MODAF (UPDM) and the Systems Modeling Language (SysML), as provided by extensions to the MagicDraw UML modeling tool, are used to develop the diagrams and tables that make up the architecture model. The model development process and structure are discussed, examples are shown, and details of handling the complexities of a large System of Systems (SoS), such as the JPSS GS, with an equally complex modeling tool are described.

  3. Toward a Mobility-Driven Architecture for Multimodal Underwater Networking

    DTIC Science & Technology

    2017-02-01

    applications. By equipping AUVs with short-range, high-bandwidth underwater wireless communications, which feature lower energy-per-bit cost than acoustic...protocols. They suffer from significant transmission path losses at high frequencies, long propagation delays, low and distance-dependent bandwidth, time...of data preprocessing, data compression, and either tethering to a surface buoy able to use radio frequency (RF) communications or using undersea

  4. Integrated Optoelectronic Networks for Application-Driven Multicore Computing

    DTIC Science & Technology

    2017-05-08

    hybrid photonic torus, the all-optical Corona crossbar, and the hybrid hierarchical Firefly crossbar. • The key challenges for waveguide photonics...improves SXR but with relatively higher EDP overhead. Our evaluation results indicate that the encoding schemes improve worst-case SXR in Corona and...photonic crossbar architectures (Corona and Firefly) indicate that our approach improves worst-case signal-to-noise ratio (SNR) by up to 51.7

  5. Model-Driven Design Method for Integrated Modular Avionics Systems Based on a Cosimulation Approach

    NASA Astrophysics Data System (ADS)

    Bao, Lin

    In the aerospace industry, as avionic systems become more and more complex, the integrated modular avionics (IMA) architecture was proposed to replace its predecessor - the federated architecture - in order to reduce the weight, power consumption, and dimensions of avionics equipment. The research presented in this thesis, which is part of the research project AVIO509, aims to propose to the aviation industry a set of time- and cost-effective solutions for the development and functional validation of IMA systems. The proposed methodologies focus on two design flows based on: 1) the concept of model-driven engineering and 2) a cosimulation platform. In the first design flow, the modeling language AADL is used to describe the IMA architecture. The OCARINA environment, a code generator initially designed for POK, was modified so that it can generate avionic applications from an AADL model for the SIMA simulator (an IMA simulator compliant with the ARINC 653 standards). In the second design flow, Simulink is used to simulate the external world of the IMA module, thanks to the availability of an avionics library offering many sensors and actuators, as well as its effectiveness in creating Simulink models. The cosimulation platform is composed of two simulators: Simulink for the simulation of peripherals and SIMA for the simulation of the IMA module; the latter is considered an attractive alternative to very expensive commercial development environments. To ensure good portability, one SIMA partition is reserved for the role of "adapter", which synchronizes the communication between the two simulators via the TCP/IP protocol. When the avionics applications are ported to the implementation platform (such as PikeOS) after simulation, only the adapter has to be modified, because the internal communication and the system configuration remain the same. An avionics application was developed as a case study in order to demonstrate and validate the proposed design flows. The research presented here is a continuation of the AVIO509 research team's project and may be extended in future work.
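
    As an illustration of the adapter idea, here is a minimal lockstep synchronization sketch over TCP/IP: one side publishes its outputs for the current step, blocks until the peer's outputs arrive, and only then do both sides advance by the same fixed step. The message layout, port, and partition logic are invented for illustration; the actual AVIO509 adapter is not described at this level of detail.

```python
import socket
import struct

# Toy lockstep "adapter" partition. The message layout (two doubles), the
# port, and the stand-in partition logic are all assumptions for this sketch.
MSG = struct.Struct("!dd")   # (sim_time_s, signal_value)

def recv_exact(sock, n):
    """Read exactly n bytes (TCP may deliver a message in pieces)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def run_adapter(peer=("127.0.0.1", 5500), dt=0.02, steps=5):
    with socket.create_connection(peer) as sock:
        t, y = 0.0, 0.0
        for _ in range(steps):
            sock.sendall(MSG.pack(t, y))              # publish our outputs
            t_peer, u = MSG.unpack(recv_exact(sock, MSG.size))
            assert abs(t_peer - t) < 1e-9, "simulators out of lockstep"
            y = 0.9 * y + 0.1 * u                     # stand-in partition logic
            t += dt                                   # synchronized time advance

if __name__ == "__main__":
    run_adapter()   # assumes the peer simulator is listening on port 5500
```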

  6. Regulatory Architecture of Gene Expression Variation in the Threespine Stickleback Gasterosteus aculeatus.

    PubMed

    Pritchard, Victoria L; Viitaniemi, Heidi M; McCairns, R J Scott; Merilä, Juha; Nikinmaa, Mikko; Primmer, Craig R; Leder, Erica H

    2017-01-05

    Much adaptive evolutionary change is underlain by mutational variation in regions of the genome that regulate gene expression rather than in the coding regions of the genes themselves. An understanding of the role of gene expression variation in facilitating local adaptation will be aided by an understanding of underlying regulatory networks. Here, we characterize the genetic architecture of gene expression variation in the threespine stickleback (Gasterosteus aculeatus), an important model in the study of adaptive evolution. We collected transcriptomic and genomic data from 60 half-sib families using an expression microarray and genotyping-by-sequencing, and located expression quantitative trait loci (eQTL) underlying the variation in gene expression in liver tissue using an interval mapping approach. We identified eQTL for several thousand expression traits. Expression was influenced by polymorphism in both cis- and trans-regulatory regions. Trans-eQTL clustered into hotspots. We did not identify master transcriptional regulators in hotspot locations: rather, the presence of hotspots may be driven by complex interactions between multiple transcription factors. One observed hotspot colocated with a QTL recently found to underlie salinity tolerance in the threespine stickleback. However, most other observed hotspots did not colocate with regions of the genome known to be involved in adaptive divergence between marine and freshwater habitats. Copyright © 2017 Pritchard et al.

  7. Ecologically Driven Ultrastructural and Hydrodynamic Designs in Stomatopod Cuticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grunenfelder, Lessa Kay; Milliron, Garrett; Herrera, Steven

    Ecological pressures and varied feeding behaviors in a multitude of organisms have necessitated the drive for adaptation. One such change is seen in the feeding appendages of stomatopods, a group of highly predatory marine crustaceans. Stomatopods include “spearers,” who ambush and snare soft-bodied prey, and “smashers,” who bludgeon hard-shelled prey with a heavily mineralized club. The regional substructural complexity of the stomatopod dactyl club from the smashing predator Odontodactylus scyllarus represents a model system in the study of impact-tolerant biominerals. The club consists of a highly mineralized impact region, a characteristic Bouligand architecture (common to arthropods), and a unique section of the club, the striated region, composed of highly aligned sheets of mineralized fibers. Detailed ultrastructural investigations of the striated region within O. scyllarus and a related species of spearing stomatopod, Lysiosquillina maculata, show consistent organization of mineral and organic, but distinct differences in macro-scale architecture. Evidence is provided for the function and substructural exaptation of the striated region, which facilitated redeployment of a raptorial feeding appendage as a biological hammer. Furthermore, given the need to accelerate underwater and “grab” or “smash” their prey, the spearer and smasher appendages are specifically designed with a significantly reduced drag force.

  8. Ecologically Driven Ultrastructural and Hydrodynamic Designs in Stomatopod Cuticles

    DOE PAGES

    Grunenfelder, Lessa Kay; Milliron, Garrett; Herrera, Steven; ...

    2018-01-16

    Ecological pressures and varied feeding behaviors in a multitude of organisms have necessitated the drive for adaptation. One such change is seen in the feeding appendages of stomatopods, a group of highly predatory marine crustaceans. Stomatopods include “spearers,” who ambush and snare soft-bodied prey, and “smashers,” who bludgeon hard-shelled prey with a heavily mineralized club. The regional substructural complexity of the stomatopod dactyl club from the smashing predator Odontodactylus scyllarus represents a model system in the study of impact-tolerant biominerals. The club consists of a highly mineralized impact region, a characteristic Bouligand architecture (common to arthropods), and a unique section of the club, the striated region, composed of highly aligned sheets of mineralized fibers. Detailed ultrastructural investigations of the striated region within O. scyllarus and a related species of spearing stomatopod, Lysiosquillina maculata, show consistent organization of mineral and organic, but distinct differences in macro-scale architecture. Evidence is provided for the function and substructural exaptation of the striated region, which facilitated redeployment of a raptorial feeding appendage as a biological hammer. Furthermore, given the need to accelerate underwater and “grab” or “smash” their prey, the spearer and smasher appendages are specifically designed with a significantly reduced drag force.

  9. A Methodology for the Design and Verification of Globally Asynchronous/Locally Synchronous Architectures

    NASA Technical Reports Server (NTRS)

    Miller, Steven P.; Whalen, Mike W.; O'Brien, Dan; Heimdahl, Mats P.; Joshi, Anjali

    2005-01-01

    Recent advances in model checking have made it practical to formally verify the correctness of many complex synchronous systems (i.e., systems driven by a single clock). However, many computer systems are implemented by asynchronously composing several synchronous components, where each component has its own clock and these clocks are not synchronized. Formal verification of such Globally Asynchronous/Locally Synchronous (GA/LS) architectures is a much more difficult task. In this report, we describe a methodology for developing and reasoning about such systems. This approach allows a developer to start from an ideal system specification and refine it along two axes. Along one axis, the system can be refined one component at a time towards an implementation. Along the other axis, the behavior of the system can be relaxed to produce a more cost-effective but still acceptable solution. We illustrate this process by applying it to the synchronization logic of a Dual Flight Guidance System, evolving the system from an ideal case in which the components do not fail and communicate synchronously to one in which the components can fail and communicate asynchronously. For each step, we show how the system requirements have to change if the system is to be implemented, and we prove that each implementation meets the revised system requirements through model checking.

  10. Regulatory Architecture of Gene Expression Variation in the Threespine Stickleback Gasterosteus aculeatus

    PubMed Central

    Pritchard, Victoria L.; Viitaniemi, Heidi M.; McCairns, R. J. Scott; Merilä, Juha; Nikinmaa, Mikko; Primmer, Craig R.; Leder, Erica H.

    2016-01-01

    Much adaptive evolutionary change is underlain by mutational variation in regions of the genome that regulate gene expression rather than in the coding regions of the genes themselves. An understanding of the role of gene expression variation in facilitating local adaptation will be aided by an understanding of underlying regulatory networks. Here, we characterize the genetic architecture of gene expression variation in the threespine stickleback (Gasterosteus aculeatus), an important model in the study of adaptive evolution. We collected transcriptomic and genomic data from 60 half-sib families using an expression microarray and genotyping-by-sequencing, and located expression quantitative trait loci (eQTL) underlying the variation in gene expression in liver tissue using an interval mapping approach. We identified eQTL for several thousand expression traits. Expression was influenced by polymorphism in both cis- and trans-regulatory regions. Trans-eQTL clustered into hotspots. We did not identify master transcriptional regulators in hotspot locations: rather, the presence of hotspots may be driven by complex interactions between multiple transcription factors. One observed hotspot colocated with a QTL recently found to underlie salinity tolerance in the threespine stickleback. However, most other observed hotspots did not colocate with regions of the genome known to be involved in adaptive divergence between marine and freshwater habitats. PMID:27836907

  11. Livestock Helminths in a Changing Climate: Approaches and Restrictions to Meaningful Predictions

    PubMed Central

    Fox, Naomi J.; Marion, Glenn; Davidson, Ross S.; White, Piran C. L.; Hutchings, Michael R.

    2012-01-01

    Simple Summary: Parasitic helminths represent one of the most pervasive challenges to livestock, and their intensity and distribution will be influenced by climate change. There is a need for long-term predictions to identify potential risks and highlight opportunities for control. We explore the approaches to modelling future helminth risk to livestock under climate change. One of the limitations to model creation is the lack of purpose-driven data collection. We also conclude that models need to include a broad view of the livestock system to generate meaningful predictions. Abstract: Climate change is a driving force for livestock parasite risk. This is especially true for helminths including the nematodes Haemonchus contortus, Teladorsagia circumcincta, Nematodirus battus, and the trematode Fasciola hepatica, since survival and development of free-living stages is chiefly affected by temperature and moisture. The paucity of long-term predictions of helminth risk under climate change has driven us to explore optimal modelling approaches and identify current bottlenecks to generating meaningful predictions. We classify approaches as correlative or mechanistic, exploring their strengths and limitations. Climate is one aspect of a complex system and, at the farm level, husbandry has a dominant influence on helminth transmission. Continuing environmental change will necessitate the adoption of mitigation and adaptation strategies in husbandry. Long-term predictive models need to have the architecture to incorporate these changes. Ultimately, an optimal modelling approach is likely to combine mechanistic processes and physiological thresholds with correlative bioclimatic modelling, incorporating changes in livestock husbandry and disease control. Irrespective of approach, the principal limitation to parasite predictions is the availability of active surveillance data and empirical data on physiological responses to climate variables. By combining improved empirical data and refined models with a broad view of the livestock system, robust projections of helminth risk can be developed. PMID:26486780

  12. Feedback, Lineages and Self-Organizing Morphogenesis

    PubMed Central

    Calof, Anne L.; Lowengrub, John S.; Lander, Arthur D.

    2016-01-01

    Feedback regulation of cell lineage progression plays an important role in tissue size homeostasis, but whether such feedback also plays an important role in tissue morphogenesis has yet to be explored. Here we use mathematical modeling to show that a particular feedback architecture in which both positive and negative diffusible signals act on stem and/or progenitor cells leads to the appearance of bistable or bi-modal growth behaviors, ultrasensitivity to external growth cues, local growth-driven budding, self-sustaining elongation, and the triggering of self-organization in the form of lamellar fingers. Such behaviors arise not through regulation of cell cycle speeds, but through the control of stem or progenitor self-renewal. Even though the spatial patterns that arise in this setting are the result of interactions between diffusible factors with antagonistic effects, morphogenesis is not the consequence of Turing-type instabilities. PMID:26989903

  13. Morphological analysis of GeTe in inline phase change switches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Matthew R., E-mail: matthew.king2@ngc.com; Department of Materials Science and Engineering, North Carolina State University, Raleigh, North Carolina 27695; El-Hinnawy, Nabil

    2015-09-07

    Crystallization and amorphization phenomena in indirectly heated phase change material-based devices were investigated. Scanning transmission electron microscopy was utilized to explore GeTe phase transition processes in the context of the unique inline phase change switch (IPCS) architecture. A monolithically integrated thin film heating element successfully converted GeTe to ON and OFF states. Device cycling prompted the formation of an active area which sustains the majority of structural changes during pulsing. A transition region on both sides of the active area, consisting of polycrystalline GeTe and small nuclei (<15 nm) in an amorphous matrix, was also observed. The switching mechanism, determined by variations in pulsing parameters, was shown to be predominantly growth-driven. A preliminary model for crystallization and amorphization in IPCS devices is presented.

  14. "Machine" consciousness and "artificial" thought: an operational architectonics model guided approach.

    PubMed

    Fingelkurts, Andrew A; Fingelkurts, Alexander A; Neves, Carlos F H

    2012-01-05

    Instead of using low-level neurophysiology mimicking and exploratory programming methods commonly used in the machine consciousness field, the hierarchical operational architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual-theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made "machine" consciousness and "artificial" thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists to abstract and formalize principles of hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought. Copyright © 2010 Elsevier B.V. All rights reserved.

  15. Quantum memristor in a superconducting circuit

    NASA Astrophysics Data System (ADS)

    Salmilehto, Juha; Sanz, Mikel; di Ventra, Massimiliano; Solano, Enrique

    Memristors, resistive elements that retain information of their past, have garnered interest due to their paradigm-changing potential in information processing and electronics. The emergent hysteretic behaviour allows for novel architectural applications and has recently been classically demonstrated in a simplified superconducting setup using the phase-dependent conductance in the tunnel-junction microscopic model. In this contribution, we present a truly quantum model for a memristor constructed using established elements and techniques in superconducting nanoelectronics, and explore the parameters for feasible operation as well as refine the methods for quantifying the memory retention. In particular, the memristive behaviour is shown to arise from quasiparticle-induced tunneling in the full dissipative model and can be observed in the phase-driven tunneling current. The relevant hysteretic behaviour should be observable using current state-of-the-art measurements for detecting quasiparticle excitations. Our theoretical findings constitute the first quantum memristor in a superconducting circuit and act as a starting point for designing further circuit elements with non-Markovian characteristics. The authors acknowledge support from the CCQED EU project and the Finnish Cultural Foundation.

  16. Global competition and local cooperation in a network of neural oscillators

    NASA Astrophysics Data System (ADS)

    Terman, David; Wang, DeLiang

    An architecture of locally excitatory, globally inhibitory oscillator networks is proposed and investigated both analytically and by computer simulation. The model for each oscillator corresponds to a standard relaxation oscillator with two time scales. Oscillators are locally coupled by a scheme that resembles excitatory synaptic coupling, and each oscillator also inhibits other oscillators through a common inhibitor. Oscillators are driven to be oscillatory by external stimulation. The network exhibits a mechanism of selective gating, whereby an oscillator jumping up to its active phase rapidly recruits the oscillators stimulated by the same pattern, while preventing the other oscillators from jumping up. We show analytically that with the selective gating mechanism, the network rapidly achieves both synchronization within blocks of oscillators that are stimulated by connected regions and desynchronization between different blocks. Computer simulations demonstrate the model's promising ability for segmenting multiple input patterns in real time. This model lays a physical foundation for the oscillatory correlation theory of feature binding and may provide an effective computational framework for scene segmentation and figure/ground segregation.
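
    A minimal numerical sketch of this class of model is given below: two relaxation oscillators of the Terman-Wang type, excitatorily coupled to each other and inhibited by a common global unit, integrated with explicit Euler steps. The parameter values and thresholds are illustrative choices, not the ones analyzed in the paper.

```python
import numpy as np

def step(x, y, z, I, W, dt=0.01, eps=0.02, beta=0.1, gamma=6.0,
         theta=-0.5, Wz=1.5, phi=3.0):
    """One Euler step of a Terman-Wang-style network: fast excitatory
    variable x, slow recovery variable y, Heaviside-gated local coupling
    through W, and a single global inhibitor z recruited by any active
    oscillator."""
    H = lambda v: (v > 0).astype(float)
    s = W @ H(x - theta) - Wz * (1.0 if z > 0.1 else 0.0)  # local - global
    dx = 3 * x - x**3 + 2 - y + I + s
    dy = eps * (gamma * (1 + np.tanh(x / beta)) - y)
    dz = phi * ((1.0 if (x > theta).any() else 0.0) - z)
    return x + dt * dx, y + dt * dy, z + dt * dz

# Two oscillators stimulated by the same pattern and excitatorily coupled.
x = np.array([-1.0, 1.5]); y = np.array([1.0, 1.2]); z = 0.0
I = np.array([0.8, 0.8])                  # external stimulation
W = np.array([[0.0, 1.0], [1.0, 0.0]])    # symmetric local excitation
for _ in range(50_000):
    x, y, z = step(x, y, z, I, W)
print(np.round(x, 2), round(z, 2))        # the coupled pair tends to phase-align
```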

  17. Deep-Learning-Enabled On-Demand Design of Chiral Metamaterials.

    PubMed

    Ma, Wei; Cheng, Feng; Liu, Yongmin

    2018-06-11

    Deep-learning framework has significantly impelled the development of modern machine learning technology by continuously pushing the limit of traditional recognition and processing of images, speech, and videos. In the meantime, it starts to penetrate other disciplines, such as biology, genetics, materials science, and physics. Here, we report a deep-learning-based model, comprising two bidirectional neural networks assembled by a partial stacking strategy, to automatically design and optimize three-dimensional chiral metamaterials with strong chiroptical responses at predesignated wavelengths. The model can help to discover the intricate, nonintuitive relationship between a metamaterial structure and its optical responses from a number of training examples, which circumvents the time-consuming, case-by-case numerical simulations in conventional metamaterial designs. This approach not only realizes the forward prediction of optical performance much more accurately and efficiently but also enables one to inversely retrieve designs from given requirements. Our results demonstrate that such a data-driven model can be applied as a very powerful tool in studying complicated light-matter interactions and accelerating the on-demand design of nanophotonic devices, systems, and architectures for real world applications.
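
    One common way to realize such a bidirectional design loop is a tandem arrangement: fit a forward surrogate (design to spectrum), then train an inverse network through the frozen surrogate, so that ambiguous inverse mappings are judged by the spectra they produce rather than by the designs themselves. The PyTorch sketch below uses a toy one-dimensional surrogate problem; it does not reproduce the paper's partial-stacking architecture or its chiroptical data.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy surrogate problem: 3 design parameters -> 16-point "spectrum".
def simulate(design):                       # stand-in for an EM simulation
    freqs = torch.linspace(0.0, 1.0, 16)
    a, b, c = design.unbind(-1)
    return torch.sin(6.28 * (a.unsqueeze(-1) * freqs + b.unsqueeze(-1))) * c.unsqueeze(-1)

designs = torch.rand(2048, 3)
spectra = simulate(designs)

mlp = lambda i, o: nn.Sequential(nn.Linear(i, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, o))
forward_net = mlp(3, 16)                    # design -> predicted spectrum
inverse_net = mlp(16, 3)                    # target spectrum -> design

# Stage 1: fit the forward surrogate on simulated pairs.
opt = torch.optim.Adam(forward_net.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(forward_net(designs), spectra)
    loss.backward(); opt.step()

# Stage 2: train the inverse net *through* the frozen forward net, so
# one-to-many ambiguity is resolved by matching spectra, not designs.
for p in forward_net.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(inverse_net.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(forward_net(inverse_net(spectra)), spectra)
    loss.backward(); opt.step()

target = spectra[:1]                        # "on-demand" design retrieval
print(inverse_net(target))                  # candidate design for this spectrum
```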

  18. Integrating Cache Performance Modeling and Tuning Support in Parallelization Tools

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    With the resurgence of distributed shared memory (DSM) systems based on cache-coherent Non-Uniform Memory Access (ccNUMA) architectures and the increasing disparity between memory and processor speeds, data locality overheads are becoming the greatest bottleneck to realizing the potential high performance of these systems. While parallelization tools and compilers help users port their sequential applications to a DSM system, a lot of time and effort is needed to tune the memory performance of these applications to achieve reasonable speedup. In this paper, we show that integrating cache performance modeling and tuning support within a parallelization environment can alleviate this problem. The Cache Performance Modeling and Prediction Tool (CPMP) employs trace-driven simulation techniques without the overhead of generating and managing detailed address traces. CPMP predicts the cache performance impact of source-code-level "what-if" modifications in a program to assist a user in the tuning process. CPMP is built on top of a customized version of the Computer Aided Parallelization Tools (CAPTools) environment. Finally, we demonstrate how CPMP can be applied to tune a real Computational Fluid Dynamics (CFD) application.
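
    For contrast with CPMP's trace-less approach, a conventional trace-driven cache simulation is only a few lines: replay an address stream against a cache model and count hits and misses. The direct-mapped model, the sizes, and the column-walk example below are illustrative assumptions, not CPMP internals.

```python
def simulate_cache(trace, size_bytes=32 * 1024, line_bytes=64):
    """Count hits/misses of a direct-mapped cache over an address trace.
    A 'what-if' tuning question (e.g. does padding an array help?) can be
    answered by replaying the same trace with a modified address stream."""
    n_lines = size_bytes // line_bytes
    tags = [None] * n_lines
    hits = misses = 0
    for addr in trace:
        line = addr // line_bytes
        idx, tag = line % n_lines, line // n_lines
        if tags[idx] == tag:
            hits += 1
        else:
            misses += 1
            tags[idx] = tag
    return hits, misses

# Column walk over a row-major 512x512 float64 array: consecutive accesses
# are 4096 bytes apart, so a 32 KiB direct-mapped cache thrashes badly.
N = 512
trace = [(row * N + col) * 8 for col in range(N) for row in range(N)]
print(simulate_cache(trace))    # almost entirely misses for this pattern
```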

  19. Time kinetics of acute changes in muscle architecture in response to resistance exercise.

    PubMed

    Csapo, Robert; Alegre, Luis M; Baron, Ramon

    2011-05-01

    The study aimed to assess acute changes in muscle architecture and its recovery after exhaustive exercise. We hypothesised that repetitive leg press exercise would decrease vastus lateralis fascicle length, while increasing both muscle thickness and pennation angles. By investigating the time kinetics of recovery of these parameters, we wished to gain insight into the mechanisms responsible for muscle architectural changes during exercise. Muscle architecture was assessed in 41 male volunteers (25.2±3.7 yrs; 1.78±0.06 m; 76.4±11.7 kg) before and directly after, as well as 5, 10, 15, and 30 min after induction of fatigue by leg press exercise. Vastus lateralis muscle thickness, pennation angles and fascicle lengths were measured at rest by ultrasonography. Muscular fatigue was induced by an exhaustive series of maximum power, single leg press repetitions. Following leg press exercise vastus lateralis muscle thickness and pennation angles were increased by approximately 7 and 10%, whereas fascicle lengths decreased by 2%. Different recovery times (muscle thickness: 30 min; pennation angles: 15 min; fascicle lengths: 5 min) were observed. The differential time courses of recovery suggest that changes in muscle thickness, pennation angles, and fascicle lengths are driven by different exercise-related stimuli. Increased muscle perfusion and tendon creep are likely candidates accounting for short-term changes in muscle architecture. Copyright © 2011 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  20. Nonlinear Shaping Architecture Designed with Using Evolutionary Structural Optimization Tools

    NASA Astrophysics Data System (ADS)

    Januszkiewicz, Krystyna; Banachowicz, Marta

    2017-10-01

    The paper explores the possibilities of using Evolutionary Structural Optimization (ESO) digital tools in integrated structural and architectural design, in response to current needs geared towards sustainability, combining ecological and economic efficiency. The first part of the paper defines the Evolutionary Structural Optimization tools, which were developed specifically for engineering purposes using finite element analysis as a framework. The development of ESO has led to several incarnations, which are all briefly discussed (Additive ESO, Bi-directional ESO, Extended ESO). The second part presents results of using these tools in structural and architectural design. Actual building projects which involved optimization as a part of the original design process are presented (the Crematorium in Kakamigahara, Gifu, Japan, 2006; SANAA's Learning Centre, EPFL, Lausanne, Switzerland, 2008; among others). The conclusion emphasizes that structural engineering and architectural design mean directing attention to the solutions used by Nature, designing works optimally shaped and forming their own environments. Architectural forms never constitute the optimum shape derived through a form-finding process driven only by structural optimization, but rather embody and integrate a multitude of parameters. It might be assumed that there is a similarity between these processes in nature and the presented design methods. Contemporary digital methods make the simulation of such processes possible, and thus enable us to refer back to the empirical methods of previous generations.

  1. Hydrologic Synthesis Across the Critical Zone Observatory Network: A Step Towards Understanding the Coevolution of Critical Zone Function and Structure

    NASA Astrophysics Data System (ADS)

    Wlostowski, A. N.; Harman, C. J.; Molotch, N. P.

    2017-12-01

    The physical and biological architecture of the Earth's Critical Zone controls hydrologic partitioning, storage, and chemical evolution of precipitated water. The Critical Zone Observatory (CZO) Network provides an ideal platform to explore linkages between catchment structure and hydrologic function across a gradient of geologic and climatic settings. A legacy of hypothesis-motivated research at each site has generated a wealth of data characterizing the architecture and hydrologic function of the critical zone. We will present a synthesis of this data that aims to elucidate and explain (in the sense of making mutually intelligible) variations in hydrologic function across the CZO network. Top-down quantitative signatures of the storage and partitioning of water at catchment scales, extracted from precipitation, streamflow, and meteorological data, will be compared with each other and provide quantitative benchmarks to assess differences in perceptual models of hydrologic function at each CZO site. Annual water balance analyses show that CZO sites span a wide gradient of aridity and evaporative partitioning. The aridity index (PET/P) ranges from 0.3 at Luquillo to 4.3 at Reynolds Creek, while the evaporative index (E/P) ranges from 0.3 at Luquillo (Rio Mamayes) to 0.9 at Reynolds Creek (Reynolds Creek Outlet). Snow depth and SWE observations reveal that snowpack is an important seasonal storage reservoir at four sites: Boulder, Jemez, Reynolds Creek, and Southern Sierra. Simple dynamical models are also used to infer seasonal patterns of subsurface catchment storage. A root-zone water balance model reveals unique seasonal variations in plant-available water storage. Seasonal patterns of plant-available storage are driven by the asynchronicity of seasonal precipitation and evaporation cycles. Catchment sensitivity functions are derived at each site to infer relative changes in hydraulic storage (the apparent storage reservoir responsible for modulating streamflow generation). Storage-discharge relationships vary widely across the Network and may be associated with inter-site differences in sub-surface architecture. Moving forward, we seek to reconcile top-down analysis results against the bottom-up understanding of critical zone structure and hydrologic function at each CZO site.

  2. Comparing root architectural models

    NASA Astrophysics Data System (ADS)

    Schnepf, Andrea; Javaux, Mathieu; Vanderborght, Jan

    2017-04-01

    Plant roots play an important role in several soil processes (Gregory 2006). Root architecture development determines the sites in soil where roots provide input of carbon and energy and take up water and solutes. However, root architecture is difficult to determine experimentally when grown in opaque soil. Thus, root architectural models have been widely used and further developed into functional-structural models that are able to simulate the fate of water and solutes in the soil-root system (Dunbabin et al. 2013). Still, a systematic comparison of the different root architectural models is missing. In this work, we focus on discrete root architecture models where roots are described by connected line segments. These models differ (a) in their model concepts, such as describing the distance between branches based on a prescribed distance (inter-nodal distance) or based on a prescribed time interval, and (b) in the implementation of the same concept, such as the time step size, the spatial discretization along the root axes, or the way stochasticity of parameters such as root growth direction, growth rate, branch spacing, and branching angles is treated. Based on the example of two such different root models, the root growth module of R-SWMS and RootBox, we show the impact of these differences on simulated root architecture and on aggregated information computed from these detailed simulation results, taking into account the stochastic nature of those models. References: Dunbabin, V.M., Postma, J.A., Schnepf, A., Pagès, L., Javaux, M., Wu, L., Leitner, D., Chen, Y.L., Rengel, Z., Diggle, A.J. (2013) Modelling root-soil interactions using three-dimensional models of root growth, architecture and function. Plant and Soil, 372 (1-2), pp. 93-124. Gregory, P. (2006) Roots, rhizosphere and soil: the route to a better understanding of soil science? European Journal of Soil Science 57: 2-12.

  3. An Agent-Based Dynamic Model for Analysis of Distributed Space Exploration Architectures

    NASA Astrophysics Data System (ADS)

    Sindiy, Oleg V.; DeLaurentis, Daniel A.; Stein, William B.

    2009-07-01

    A range of complex challenges, but also potentially unique rewards, underlies the development of exploration architectures that use a distributed, dynamic network of resources across the solar system. From a methodological perspective, the prime challenge is to systematically model the evolution (and quantify comparative performance) of such architectures, under uncertainty, to effectively direct further study of specialized trajectories, spacecraft technologies, concept of operations, and resource allocation. A process model for System-of-Systems Engineering is used to define time-varying performance measures for comparative architecture analysis and identification of distinguishing patterns among interoperating systems. Agent-based modeling serves as the means to create a discrete-time simulation that generates dynamics for the study of architecture evolution. A Solar System Mobility Network proof-of-concept problem is introduced representing a set of longer-term, distributed exploration architectures. Options within this set revolve around deployment of human and robotic exploration and infrastructure assets, their organization, interoperability, and evolution, i.e., a system-of-systems. Agent-based simulations quantify relative payoffs for a fully distributed architecture (which can be significant over the long term), the latency period before they are manifest, and the up-front investment (which can be substantial compared to alternatives). Verification and sensitivity results provide further insight on development paths and indicate that the framework and simulation modeling approach may be useful in architectural design of other space exploration mass, energy, and information exchange settings.

  4. Using crustal thickness, subsidence and P-T-t history on the Iberia-Newfoundland & Alpine Tethys margins to constrain lithosphere deformation modes during continental breakup

    NASA Astrophysics Data System (ADS)

    Jeanniot, L.; Kusznir, N. J.; Manatschal, G.; Mohn, G.; Beltrando, M.

    2013-12-01

    Observations at magma-poor rifted margins such as Iberia-Newfoundland show a complex lithosphere deformation history and OCT architecture, resulting in hyper-extended continental crust and lithosphere, exhumed mantle and scattered embryonic oceanic crust before continental breakup and seafloor spreading. Initiation of seafloor spreading requires both the rupture of the continental crust and lithospheric mantle, and the onset of decompressional melting. Their relative timing controls when mantle exhumation may occur; the presence or absence of exhumed mantle provides useful information on the timing of these events and constraints on lithosphere deformation modes. A single kinematic lithosphere deformation mode leading to continental breakup and seafloor spreading cannot explain observations. We have determined the sequence of lithosphere deformation events, using forward modelling of crustal thickness, subsidence and P-T-t history calibrated against observations on the present-day Iberia-Newfoundland and the fossil analogue Alpine Tethys margins. Lithosphere deformation modes, represented by flow fields, are generated by a 2D finite element viscous flow model (FeMargin), and used to advect lithosphere and asthenosphere temperature and material. FeMargin is kinematically driven by divergent deformation in the topmost upper lithosphere, inducing passive upwelling beneath that layer; the upper lithosphere is assumed to deform by extensional faulting and magmatic intrusions, consistent with observations of deformation processes occurring at slow-spreading ocean ridges (Cannat, 1996). Buoyancy-enhanced upwelling is also included in the kinematic model, as predicted by Braun et al. (2000). We predict melt generation by decompressional melting using the parameterization and methodology of Katz et al. (2003). We use a series of numerical experiments, tested and calibrated against crustal thickness and subsidence observations, to determine the distribution of lithosphere deformation, the contribution of buoyancy-driven upwelling, and their spatial and temporal evolution including lateral migration. Particle tracking is used to predict P-T-t histories for both the Iberia-Newfoundland and the Alpine Tethys conjugate margin transects. The lateral migration of the deformation flow axis has an important control on the rupture of continental crust and lithosphere, melt initiation, their relative timing, the resulting OCT architecture, and conjugate margin asymmetry. Initial continental crust thickness and lithosphere temperature structure are important in controlling initial elevation and subsequent subsidence and depositional histories. Numerical models are used to examine the possible isostatic responses of the present-day and fossil analogue rifted margins.

  5. Electromagnetic Physics Models for Parallel Computing Architectures

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next-generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and the multi-threading capabilities of coprocessors, including NVidia GPUs and the Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and the type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as part of the GeantV project. Results of a preliminary performance evaluation and physics validation are presented as well.

  6. Modelling of internal architecture of kinesin nanomotor as a machine language.

    PubMed

    Khataee, H R; Ibrahim, M Y

    2012-09-01

    Kinesin is a protein-based natural nanomotor that transports molecular cargoes within cells by walking along microtubules. The kinesin nanomotor is considered a bio-nanoagent which is able to sense the cell through its sensors (i.e. its heads and tail), make decisions internally and perform actions on the cell through its actuator (i.e. its motor domain). The study maps the agent-based architectural model of the internal decision-making process of the kinesin nanomotor to a machine language using an automata algorithm. The applied automata algorithm receives the internal agent-based architectural model of the kinesin nanomotor as a deterministic finite automaton (DFA) model and generates a regular machine language. The generated regular machine language was accepted by the architectural DFA model of the nanomotor and was also in good agreement with its natural behaviour. The internal agent-based architectural model of the kinesin nanomotor indicates the degree of autonomy and intelligence of the nanomotor's interactions with its cell. Thus, our developed regular machine language can model the degree of autonomy and intelligence of kinesin's interactions with its cell as a language. Modelling the internal architectures of autonomous and intelligent bio-nanosystems as machine languages can lay the foundation for the concept of bio-nanoswarms and the next phases of bio-nanorobotic systems development.
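
    As a concrete illustration of the automaton idea, consider the minimal Python sketch below. The state and symbol names are hypothetical stand-ins for stages of the kinesin walking cycle, not the authors' actual DFA.

      # Hypothetical DFA sketch: states and input symbols are illustrative
      # stand-ins for the kinesin stepping cycle, not the published model.
      DFA = {
          "both_heads_bound":   {"ATP": "neck_linker_docked"},
          "neck_linker_docked": {"detach": "one_head_free"},
          "one_head_free":      {"step": "both_heads_bound"},
      }
      START, ACCEPT = "both_heads_bound", {"both_heads_bound"}

      def accepts(word):
          """True if the symbol sequence is in the DFA's regular language."""
          state = START
          for symbol in word:
              state = DFA.get(state, {}).get(symbol)
              if state is None:
                  return False
          return state in ACCEPT

      print(accepts(["ATP", "detach", "step"]))      # True: one full step
      print(accepts(["ATP", "detach", "step"] * 3))  # True: processive walking
      print(accepts(["detach"]))                     # False: out of order

    Strings accepted by such a DFA form a regular language, which is the sense in which the internal architecture can be said to generate a machine language.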

  7. Detailed Primitive-Based 3D Modeling of Architectural Elements

    NASA Astrophysics Data System (ADS)

    Remondino, F.; Lo Buglio, D.; Nony, N.; De Luca, L.

    2012-07-01

    The article describes a pipeline, based on image data, for the 3D reconstruction of building façades or architectural elements and their successive modeling using geometric primitives. The approach overcomes some existing problems in modeling architectural elements and delivers efficient-in-size, reality-based, textured 3D models useful for metric applications. For the 3D reconstruction, an open-source pipeline developed within the TAPENADE project is employed. In the successive modeling steps, the user manually selects an area containing an architectural element (capital, column, bas-relief, window tympanum, etc.) and the procedure then fits geometric primitives and computes disparity and displacement maps in order to tie visual and geometric information together in a light but detailed 3D model. Examples are reported and commented on.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yao; Balaprakash, Prasanna; Meng, Jiayuan

    We present Raexplore, a performance modeling framework for architecture exploration. Raexplore enables rapid, automated, and systematic search of the architecture design space by combining hardware counter-based performance characterization with analytical performance modeling. We demonstrate Raexplore on two recent manycore processors, the IBM Blue Gene/Q compute chip and the Intel Xeon Phi, targeting a set of scientific applications. Our framework is able to capture complex interactions between architectural components including the instruction pipeline, cache, and memory, and achieves a 3–22% error for same-architecture and cross-architecture performance predictions. Furthermore, we apply our framework to assess the two processors, and discover and evaluate a list of architectural scaling options for future processor designs.

  9. Overview and Summary of the Advanced Mirror Technology Development Project

    NASA Astrophysics Data System (ADS)

    Stahl, H. P.

    2014-01-01

    Advanced Mirror Technology Development (AMTD) is a NASA Strategic Astrophysics Technology project to mature to TRL-6 the critical technologies needed to produce 4-m or larger flight-qualified UVOIR mirrors by 2018 so that a viable mission can be considered by the 2020 Decadal Review. The developed mirror technology must enable missions capable of both general astrophysics and ultra-high contrast observations of exoplanets. Just as JWST's architecture was driven by its launch vehicle, a future UVOIR mission's architecture (monolithic, segmented or interferometric) will depend on the capacities of future launch vehicles (and budget). Since we cannot predict the future, we must prepare for all potential futures. Therefore, to provide the science community with options, we are pursuing multiple technology paths. AMTD uses a science-driven systems engineering approach. We derived engineering specifications for potential future monolithic or segmented space telescopes based on science needs and implementation constraints. And we are maturing six inter-linked critical technologies to enable potential future large-aperture UVOIR space telescopes: 1) Large-Aperture, Low Areal Density, High Stiffness Mirrors; 2) Support Systems; 3) Mid/High Spatial Frequency Figure Error; 4) Segment Edges; 5) Segment-to-Segment Gap Phasing; and 6) Integrated Model Validation, guided by a Science Advisory Team and a Systems Engineering Team. We are maturing all six technologies simultaneously because all are required to make a primary mirror assembly (PMA), and it is the PMA's on-orbit performance which determines science return. PMA stiffness depends on substrate and support stiffness. The ability to cost-effectively eliminate mid/high spatial frequency figure errors and polish edges depends on substrate stiffness. On-orbit thermal and mechanical performance depends on substrate stiffness, the coefficient of thermal expansion (CTE) and thermal mass. And segment-to-segment phasing depends on substrate and structure stiffness. This presentation introduces the goals and objectives of the AMTD project and summarizes its recent accomplishments.

  10. Little by Little Does the Trick: Design and Construction of a Discrete Event Agent-Based Simulation Framework

    DTIC Science & Technology

    2007-12-01

    This work prototypes an architectural design which is generalizable, reusable, and extensible. We have created an initial set of model elements, including a component architecture and a Behavioral model. Finally, we build a small agent-based model using the component architecture to demonstrate the library's functionality.

  11. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning.

    PubMed

    Shin, Hoo-Chang; Roth, Holger R; Gao, Mingchen; Lu, Le; Xu, Ziyue; Nogues, Isabella; Yao, Jianhua; Mollura, Daniel; Summers, Ronald M

    2016-05-01

    Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully apply CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical image tasks. In this paper, we examine three important, but previously understudied, factors in employing deep convolutional neural networks for computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters and vary in their numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet models (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high-performance CAD systems for other medical imaging tasks.
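
    For readers unfamiliar with the transfer-learning recipe evaluated here, the sketch below shows one common variant in PyTorch: load an ImageNet-pretrained backbone, freeze it, and retrain a new classification head for a two-class CADe task. The model choice (ResNet-18), class count, and hyperparameters are illustrative assumptions, not details from the paper (which fine-tunes several architectures, in some settings end-to-end).

      # Hedged sketch: fine-tuning an ImageNet-pretrained CNN for a binary
      # CADe task. ResNet-18 and all hyperparameters are placeholder choices.
      import torch
      import torch.nn as nn
      from torchvision import models

      model = models.resnet18(pretrained=True)       # ImageNet weights
      for param in model.parameters():               # freeze the backbone
          param.requires_grad = False
      model.fc = nn.Linear(model.fc.in_features, 2)  # new head: lesion / none

      optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
      criterion = nn.CrossEntropyLoss()

      def train_step(images, labels):
          """One gradient step; only the new head's weights are updated."""
          optimizer.zero_grad()
          loss = criterion(model(images), labels)
          loss.backward()
          optimizer.step()
          return loss.item()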

  12. Plant architecture, growth and radiative transfer for terrestrial and space environments

    NASA Technical Reports Server (NTRS)

    Norman, John M.; Goel, Narendra S.

    1993-01-01

    The overall objective of this research was to develop a hardware-implemented model that would incorporate realistic and dynamic descriptions of canopy architecture in physiologically based models of plant growth and functioning, with an emphasis on radiative transfer while accommodating other environmental constraints. The general approach has five parts: a realistic mathematical treatment of canopy architecture, a methodology for combining this general canopy architectural description with a general radiative transfer model, the inclusion of physiological and environmental aspects of plant growth, the inclusion of plant phenology, and integration.

  13. Roofline model toolkit: A practical tool for architectural and program analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lo, Yu Jung; Williams, Samuel; Van Straalen, Brian

    We present preliminary results of the Roofline Toolkit for multicore, manycore, and accelerated architectures. This paper focuses on the processor architecture characterization engine, a collection of portable instrumented micro benchmarks implemented with the Message Passing Interface (MPI) and OpenMP, used to express thread-level parallelism. These benchmarks are specialized to quantify the behavior of different architectural features. Compared to previous work on performance characterization, these microbenchmarks focus on capturing the performance of each level of the memory hierarchy, along with thread-level parallelism, instruction-level parallelism and explicit SIMD parallelism, measured in the context of the compilers and run-time environments. We also measure sustained PCIe throughput with four GPU memory management mechanisms. By combining results from the architecture characterization with the Roofline model based solely on architectural specifications, this work offers insights for performance prediction of current and future architectures and their software systems. To that end, we instrument three applications and plot their resultant performance on the corresponding Roofline model when run on a Blue Gene/Q architecture.
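
    The Roofline model underlying the toolkit reduces to a one-line bound: attainable performance is the lesser of peak compute and peak bandwidth times arithmetic intensity. The sketch below illustrates this; the peak numbers are placeholders, not measurements from the paper.

      # Minimal Roofline sketch; peak_gflops and peak_gbs are placeholders.
      def roofline(ai, peak_gflops, peak_gbs):
          """Attainable GFLOP/s at arithmetic intensity `ai` (FLOPs/byte)."""
          return min(peak_gflops, peak_gbs * ai)

      peak_flops, peak_bw = 200.0, 30.0      # hypothetical machine balance
      for ai in (0.5, 1.0, 5.0, 10.0):
          bound = "memory" if peak_bw * ai < peak_flops else "compute"
          print(f"AI={ai:4.1f} -> {roofline(ai, peak_flops, peak_bw):6.1f} GFLOP/s ({bound}-bound)")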

  14. Architecture with GIDEON, A Program for Design in Structural DNA Nanotechnology

    PubMed Central

    Birac, Jeffrey J.; Sherman, William B.; Kopatsch, Jens; Constantinou, Pamela E.; Seeman, Nadrian C.

    2012-01-01

    We present geometry-based design strategies for DNA nanostructures. The strategies have been implemented with GIDEON, a Graphical Integrated Development Environment for OligoNucleotides. GIDEON has a highly flexible graphical user interface that facilitates the development of simple yet precise models and the evaluation of strains therein. Models are built on a simple representation of undistorted B-DNA double-helical domains. Simple point-and-click manipulations of the model allow the minimization of strain in the phosphate-backbone linkages between these domains and the identification of any steric clashes that might occur as a result. Detailed analysis of 3D triangles yields clear predictions of the strains associated with triangles of different sizes. We have carried out experiments that confirm that 3D triangles form well only when their geometrical strain is less than a 4% deviation from the estimated relaxed structure. Thus geometry-based techniques alone, without energetic considerations, can be used to explain general trends in DNA structure formation. We have used GIDEON to build detailed models of double crossover and triple crossover molecules, evaluating the non-planarity associated with base tilt and junction mis-alignments. Computer modeling using a graphical user interface overcomes the limited precision of physical models for larger systems, and the limited interaction rate associated with earlier, command-line driven software. PMID:16630733
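
    The 4% criterion reported above is easy to state computationally; the sketch below assumes strain is measured as the fractional deviation of a model edge length from its relaxed estimate (the numbers are invented for illustration).

      # Illustration of the reported 4% strain threshold; values are made up.
      def geometric_strain(observed, relaxed):
          """Fractional deviation from the estimated relaxed structure."""
          return abs(observed - relaxed) / relaxed

      relaxed_edge = 14.0                    # nm, hypothetical relaxed value
      for observed in (13.8, 14.4, 15.0):
          ok = geometric_strain(observed, relaxed_edge) < 0.04
          print(observed, "forms well" if ok else "too strained")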

  15. CLUSTERED PRIMARY BRANCH 1, a new allele of DWARF11, controls panicle architecture and seed size in rice.

    PubMed

    Wu, Yongzhen; Fu, Yongcai; Zhao, Shuangshuang; Gu, Ping; Zhu, Zuofeng; Sun, Chuanqing; Tan, Lubin

    2016-01-01

    Panicle architecture and seed size are important agronomic traits that directly determine grain yield in rice (Oryza sativa L.). Although a number of key genes controlling panicle architecture and seed size have been cloned and characterized in recent years, their genetic and molecular mechanisms remain unclear. In this study, we identified a mutant that produced panicles with fascicled primary branching and seeds reduced in size. We isolated the underlying CLUSTERED PRIMARY BRANCH 1 (CPB1) gene, a new allele of DWARF11 (D11) encoding a cytochrome P450 protein involved in the brassinosteroid (BR) biosynthesis pathway. Genetic transformation experiments confirmed that a His360Leu amino acid substitution residing in the highly conserved region of CPB1/D11 was responsible for the panicle architecture and seed size changes in the cpb1 mutants. Overexpression of CPB1/D11 in the cpb1 mutant background not only rescued normal panicle architecture and plant height, but also produced a larger leaf angle and seed size than the controls. Furthermore, CPB1/D11 transgenic plants driven by panicle-specific promoters can enlarge seed size and enhance grain yield without affecting other favourable agronomic traits. These results demonstrate that the specific mutation in CPB1/D11 influenced the development of panicle architecture and seed size, and that manipulation of CPB1/D11 expression using a panicle-specific promoter could be used to increase seed size, leading to grain yield improvement in rice.

  16. A Framework for Architectural Heritage HBIM Semantization and Development

    NASA Astrophysics Data System (ADS)

    Brusaporci, S.; Maiezza, P.; Tata, A.

    2018-05-01

    Despite the recognized advantages of the use of BIM in the field of architecture and engineering, the extension of this procedure to architectural heritage is neither immediate nor straightforward. The uniqueness and irregularity of historical architecture, on the one hand, and the great quantity of information necessary for the knowledge of architectural heritage, on the other, require appropriate reflection. The aim of this paper is to define a general framework for the use of BIM procedures for architectural heritage. The proposed methodology consists of three different Levels of Development (LoD), depending on the characteristics of the building and the objectives of the study: a simplified model with a low geometric accuracy and a minimum quantity of information (LoD 200); a model closer to reality but still with a high deviation between the virtual and the real building (LoD 300); and a detailed BIM model that reproduces as far as possible the geometric irregularities of the building and is enriched with the maximum quantity of information available (LoD 400).

  17. Quality evaluation of health information system's architectures developed using the HIS-DF methodology.

    PubMed

    López, Diego M; Blobel, Bernd; Gonzalez, Carolina

    2010-01-01

    Requirement analysis, design, implementation, evaluation, use, and maintenance of semantically interoperable Health Information Systems (HIS) have to be based on eHealth standards. HIS-DF is a comprehensive approach for HIS architectural development based on standard information models and vocabulary. The empirical validity of HIS-DF has not been demonstrated so far. Through an empirical experiment, the paper demonstrates that, using HIS-DF and HL7 information models, the semantic quality of a HIS architecture can be improved compared to architectures developed using the traditional RUP process. Semantic quality of the architecture has been measured in terms of model completeness and validity metrics. The experimental results demonstrated an increased completeness of 14.38% and an increased validity of 16.63% when using HIS-DF and HL7 information models in a sample HIS development project. Quality assurance of the system architecture in earlier stages of HIS development implies an increased quality of the final HIS, with an indirect impact on patient care.

  18. An e-consent-based shared EHR system architecture for integrated healthcare networks.

    PubMed

    Bergmann, Joachim; Bott, Oliver J; Pretschner, Dietrich P; Haux, Reinhold

    2007-01-01

    Virtual integration of distributed patient data promises advantages over a consolidated health record, but raises questions mainly about practicability and authorization concepts. Our work aims at the specification and development of a virtual shared health record architecture using a patient-centred integration and authorization model. A literature survey summarizes considerations of current architectural approaches. Complemented by a methodical analysis in two regional settings, a formal architecture model was specified and implemented. The results presented in this paper are a survey of architectural approaches for shared health records and an architecture model for a virtual shared EHR, which combines a patient-centred integration policy with provider-oriented document management. An electronic consent system ensures that access to the shared record remains under the control of the patient. A corresponding system prototype has been developed and is currently being introduced and evaluated in a regional setting. The proposed architecture is capable of partly replacing message-based communications. Operating highly available provider repositories for the virtual shared EHR requires advanced technology and probably means additional costs for care providers. Acceptance of the proposed architecture depends on transparently embedding document validation and digital signature into the work processes. The paradigm shift from paper-based messaging to a "pull model" needs further evaluation.

  19. MACCIS 2.0 - An Architecture Description Framework for Technical Infostructures and Their Enterprise Environment

    DTIC Science & Technology

    2004-06-01

    The security concern, when applied to the different viewpoints (component viewpoint, view, and architecture description of the enterprise or infostructure), addresses both stakeholders, the business stakeholder and the IT architect, and is described as a business security model or a component security model.

  20. A Model for Communications Satellite System Architecture Assessment

    DTIC Science & Technology

    2011-09-01

    A mathematical model was implemented to enable the analysis of communications satellite system architectures based on multiple system attributes. The total system cost includes all development, acquisition, fielding, operations, maintenance and upgrade, and system protection costs.

  1. Clinical engineering and risk management in healthcare technological process using architecture framework.

    PubMed

    Signori, Marcos R; Garcia, Renato

    2010-01-01

    This paper presents a model that aids Clinical Engineering in dealing with Risk Management in the Healthcare Technological Process. The healthcare technological setting is complex and supported by three basic entities: infrastructure (IS), healthcare technology (HT), and human resources (HR). An Enterprise Architecture framework, MODAF (Ministry of Defence Architecture Framework), was used to model this process for risk management. Thus, a new model was created to contribute to risk management in the HT process from the Clinical Engineering viewpoint. This architecture model can support and improve the decision-making process of Clinical Engineering for Risk Management in the Healthcare Technological process.

  2. An information model for a virtual private optical network (OVPN) using virtual routers (VRs)

    NASA Astrophysics Data System (ADS)

    Vo, Viet Minh Nhat

    2002-05-01

    This paper describes a virtual private optical network architecture (Optical VPN - OVPN) based on virtual routers (VRs). It improves over architectures suggested for virtual private networks by using virtual routers with optical networks. What is new in this architecture are the changes necessary to adapt to the devices and protocols used in optical networks. This paper also presents information models for the OVPN at the architecture level and at the service level. These are extensions to DEN (Directory Enabled Networks) and CIM (Common Information Model) for OVPNs using VRs. The goal is to propose a common management model using policies.

  3. Modeling the evolution of protein domain architectures using maximum parsimony.

    PubMed

    Fong, Jessica H; Geer, Lewis Y; Panchenko, Anna R; Bryant, Stephen H

    2007-02-09

    Domains are basic evolutionary units of proteins and most proteins have more than one domain. Advances in domain modeling and collection are making it possible to annotate a large fraction of known protein sequences by a linear ordering of their domains, yielding their architecture. Protein domain architectures link evolutionarily related proteins and underscore their shared functions. Here, we attempt to better understand this association by identifying the evolutionary pathways by which extant architectures may have evolved. We propose a model of evolution in which architectures arise through rearrangements of inferred precursor architectures and acquisition of new domains. These pathways are ranked using a parsimony principle, whereby scenarios requiring the fewest number of independent recombination events, namely fission and fusion operations, are assumed to be more likely. Using a data set of domain architectures present in 159 proteomes that represent all three major branches of the tree of life allows us to estimate the history of over 85% of all architectures in the sequence database. We find that the distribution of rearrangement classes is robust with respect to alternative parsimony rules for inferring the presence of precursor architectures in ancestral species. Analyzing the most parsimonious pathways, we find 87% of architectures to gain complexity over time through simple changes, among which fusion events account for 5.6 times as many architectures as fission. Our results may be used to compute domain architecture similarities, for example, based on the number of historical recombination events separating them. Domain architecture "neighbors" identified in this way may lead to new insights about the evolution of protein function.
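
    To make the fusion operation concrete: treating an architecture as an ordered tuple of domains, an architecture can arise by a single fusion event if it splits into two precursor architectures already inferred to be present. The sketch below assumes this tuple representation; the example domain names are invented.

      # Sketch of the fusion test in a parsimony model of domain
      # architectures; architectures are tuples of domain names.
      def single_fusion_parents(arch, known):
          """All (prefix, suffix) splits of `arch` with both halves known."""
          return [(arch[:i], arch[i:])
                  for i in range(1, len(arch))
                  if arch[:i] in known and arch[i:] in known]

      known = {("kinase",), ("SH2",), ("SH2", "SH3")}
      print(single_fusion_parents(("SH2", "SH3", "kinase"), known))
      # [(('SH2', 'SH3'), ('kinase',))] -> one fusion event suffices

    Ranking candidate histories then amounts to preferring the scenario with the fewest such fission and fusion events.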

  4. Modeling the Evolution of Protein Domain Architectures Using Maximum Parsimony

    PubMed Central

    Fong, Jessica H.; Geer, Lewis Y.; Panchenko, Anna R.; Bryant, Stephen H.

    2007-01-01

    Domains are basic evolutionary units of proteins and most proteins have more than one domain. Advances in domain modeling and collection are making it possible to annotate a large fraction of known protein sequences by a linear ordering of their domains, yielding their architecture. Protein domain architectures link evolutionarily related proteins and underscore their shared functions. Here, we attempt to better understand this association by identifying the evolutionary pathways by which extant architectures may have evolved. We propose a model of evolution in which architectures arise through rearrangements of inferred precursor architectures and acquisition of new domains. These pathways are ranked using a parsimony principle, whereby scenarios requiring the fewest number of independent recombination events, namely fission and fusion operations, are assumed to be more likely. Using a data set of domain architectures present in 159 proteomes that represent all three major branches of the tree of life allows us to estimate the history of over 85% of all architectures in the sequence database. We find that the distribution of rearrangement classes is robust with respect to alternative parsimony rules for inferring the presence of precursor architectures in ancestral species. Analyzing the most parsimonious pathways, we find 87% of architectures to gain complexity over time through simple changes, among which fusion events account for 5.6 times as many architectures as fission. Our results may be used to compute domain architecture similarities, for example, based on the number of historical recombination events separating them. Domain architecture “neighbors” identified in this way may lead to new insights about the evolution of protein function. PMID:17166515

  5. Automation for deep space vehicle monitoring

    NASA Technical Reports Server (NTRS)

    Schwuttke, Ursula M.

    1991-01-01

    Information on automation for deep space vehicle monitoring is given in viewgraph form. Information is given on automation goals and strategy; the Monitor Analyzer of Real-time Voyager Engineering Link (MARVEL); intelligent input data management; decision theory for making tradeoffs; dynamic tradeoff evaluation; evaluation of anomaly detection results; evaluation of data management methods; system level analysis with cooperating expert systems; the distributed architecture of multiple expert systems; and event driven response.

  6. A Modular and Configurable Instrument Electronics Architecture for "MiniSAR"- An Advanced Smallsat SAR Instrument

    NASA Astrophysics Data System (ADS)

    Gomez, Jaime; Pastena, Max; Bierens, Laurens

    2013-08-01

    MiniSAR is a Dutch program focused on the development of a commercial smallsat featuring a SAR instrument, led by SSBV as prime contractor. In this paper an Instrument Electronics (IEL) system concept to meet the MiniSAR demands is presented. This system has several features that distinguish it from similar initiatives in the European space industry, driven by our main requirement: keep it small.

  7. Architecture of a Message-Driven Processor,

    DTIC Science & Technology

    1987-11-01

    Dally, Chao, Chien, Hassoun, Horwat, Kaplan, Song, Totty, and Wills: Artificial Intelligence Laboratory and Laboratory for Computer Science, Massachusetts Institute of Technology. Only fragments of the abstract survive extraction; the recoverable details are that words are 36 bits long (32 data bits + 4 tag bits) and that the architecture targets efficient execution of fine-grained programs.

  8. Modeling driver behavior in a cognitive architecture.

    PubMed

    Salvucci, Dario D

    2006-01-01

    This paper explores the development of a rigorous computational model of driver behavior in a cognitive architecture--a computational framework with underlying psychological theories that incorporate basic properties and limitations of the human system. Computational modeling has emerged as a powerful tool for studying the complex task of driving, allowing researchers to simulate driver behavior and explore the parameters and constraints of this behavior. An integrated driver model developed in the ACT-R (Adaptive Control of Thought-Rational) cognitive architecture is described that focuses on the component processes of control, monitoring, and decision making in a multilane highway environment. This model accounts for the steering profiles, lateral position profiles, and gaze distributions of human drivers during lane keeping, curve negotiation, and lane changing. The model demonstrates how cognitive architectures facilitate understanding of driver behavior in the context of general human abilities and constraints and how the driving domain benefits cognitive architectures by pushing model development toward more complex, realistic tasks. The model can also serve as a core computational engine for practical applications that predict and recognize driver behavior and distraction.

  9. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.

    PubMed

    Rueckauer, Bodo; Lungu, Iulia-Alexandra; Hu, Yuhuang; Pfeiffer, Michael; Liu, Shih-Chii

    2017-01-01

    Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.
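
    The core of such conversions is rate coding: an integrate-and-fire neuron with reset-by-subtraction fires at a rate that approximates a ReLU activation. The following sketch illustrates the principle only; the parameters are illustrative, not taken from the paper.

      # Rate-coding sketch: an IF neuron approximates a ReLU activation.
      def if_neuron_rate(activation, timesteps=1000, threshold=1.0):
          """Firing rate under a constant input current (post-ReLU value)."""
          v, spikes = 0.0, 0
          for _ in range(timesteps):
              v += activation          # integrate the input
              if v >= threshold:
                  spikes += 1
                  v -= threshold       # reset by subtraction limits error
          return spikes / timesteps

      for a in (0.0, 0.25, 0.5, 0.9):
          print(a, round(if_neuron_rate(a), 3))   # rate tracks activation

    Running longer (more timesteps) tightens the approximation, which is the operations-versus-accuracy trade-off the abstract describes.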

  10. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification

    PubMed Central

    Rueckauer, Bodo; Lungu, Iulia-Alexandra; Hu, Yuhuang; Pfeiffer, Michael; Liu, Shih-Chii

    2017-01-01

    Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications. PMID:29375284

  11. Model-Based Systems Engineering for Capturing Mission Architecture System Processes with an Application Case Study - Orion Flight Test 1

    NASA Technical Reports Server (NTRS)

    Bonanne, Kevin H.

    2011-01-01

    Model-based Systems Engineering (MBSE) is an emerging methodology that can be leveraged to enhance many system development processes. MBSE allows for the centralization of an architecture description that would otherwise be stored in various locations and formats, thus simplifying communication among the project stakeholders, inducing commonality in representation, and expediting report generation. This paper outlines the MBSE approach taken to capture the processes of two different, but related, architectures by employing the Systems Modeling Language (SysML) as a standard for architecture description and the modeling tool MagicDraw. The overarching goal of this study was to demonstrate the effectiveness of MBSE as a means of capturing and designing a mission systems architecture. The first portion of the project focused on capturing the necessary system engineering activities that occur when designing, developing, and deploying a mission systems architecture for a space mission. The second part applies activities from the first to an application problem - the system engineering of the Orion Flight Test 1 (OFT-1) End-to-End Information System (EEIS). By modeling the activities required to create a space mission architecture and then implementing those activities in an application problem, the utility of MBSE as an approach to systems engineering can be demonstrated.

  12. From Modelling to Execution of Enterprise Integration Scenarios: The GENIUS Tool

    NASA Astrophysics Data System (ADS)

    Scheibler, Thorsten; Leymann, Frank

    One of the predominant problems IT companies are facing today is Enterprise Application Integration (EAI). Most of the infrastructures built to tackle integration issues are proprietary because no standards exist for how to model, develop, and actually execute integration scenarios. EAI patterns are gaining importance among non-technical business users as a way to ease and harmonize the development of EAI scenarios. These patterns describe recurring EAI challenges and propose possible solutions in an abstract way. One can therefore use these patterns to describe enterprise architectures in a technology-neutral manner. However, patterns are documentation only, used by developers and systems architects to decide how to implement an integration scenario manually. Thus, patterns are not intended to stand for artefacts that can immediately be executed. This paper presents a tool supporting a method by which EAI patterns can be used to generate executable artefacts for various target platforms automatically, using a model-driven development approach, hence turning patterns into something executable. We therefore introduce a continuous tool chain beginning at the design phase and ending with the execution of an integration solution in a completely automatic manner. For evaluation purposes we introduce a scenario demonstrating how the tool is utilized for modelling and actually executing an integration scenario.

  13. Toward Petascale Biologically Plausible Neural Networks

    NASA Astrophysics Data System (ADS)

    Long, Lyle

    This talk will describe an approach to achieving petascale neural networks. Artificial intelligence has been oversold for many decades. Computers in the beginning could only do about 16,000 operations per second. Computer processing power, however, has been doubling every two years thanks to Moore's law, and growing even faster due to massively parallel architectures. Finally, 60 years after the first AI conference we have computers on the order of the performance of the human brain (10^16 operations per second). The main issues now are algorithms, software, and learning. We have excellent models of neurons, such as the Hodgkin-Huxley model, but we do not know how the human neurons are wired together. With careful attention to efficient parallel computing, event-driven programming, table lookups, and memory minimization, massive scale simulations can be performed. The code that will be described was written in C++ and uses the Message Passing Interface (MPI). It uses the full Hodgkin-Huxley neuron model, not a simplified model. It also allows arbitrary network structures (deep, recurrent, convolutional, all-to-all, etc.). The code is scalable, and has, so far, been tested on up to 2,048 processor cores using 10^7 neurons and 10^9 synapses.
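
    For reference, the Hodgkin-Huxley point-neuron model mentioned above can be integrated in a few dozen lines; the sketch below uses standard textbook parameters and forward Euler, and is of course not the talk's MPI/C++ code.

      # Hodgkin-Huxley sketch with textbook parameters (uF/cm^2, mS/cm^2, mV).
      import math

      C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
      ENa, EK, EL = 50.0, -77.0, -54.4

      def hh_step(V, m, h, n, I, dt):
          am = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
          bm = 4.0 * math.exp(-(V + 65.0) / 18.0)
          ah = 0.07 * math.exp(-(V + 65.0) / 20.0)
          bh = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
          an = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
          bn = 0.125 * math.exp(-(V + 65.0) / 80.0)
          INa = gNa * m**3 * h * (V - ENa)   # sodium current
          IK = gK * n**4 * (V - EK)          # potassium current
          IL = gL * (V - EL)                 # leak current
          V += dt * (I - INa - IK - IL) / C
          m += dt * (am * (1.0 - m) - bm * m)
          h += dt * (ah * (1.0 - h) - bh * h)
          n += dt * (an * (1.0 - n) - bn * n)
          return V, m, h, n

      V, m, h, n = -65.0, 0.05, 0.6, 0.32    # approximate resting state
      for step in range(5000):               # 50 ms at dt = 0.01 ms
          V, m, h, n = hh_step(V, m, h, n, I=10.0, dt=0.01)
          if step % 500 == 0:
              print(f"t={step * 0.01:5.1f} ms  V={V:7.2f} mV")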

  14. Model implementation for dynamic computation of system cost

    NASA Astrophysics Data System (ADS)

    Levri, J.; Vaccari, D.

    The Advanced Life Support (ALS) Program metric is the ratio of the equivalent system mass (ESM) of a mission based on International Space Station (ISS) technology to the ESM of that same mission based on ALS technology. ESM is a mission cost analog that converts the volume, power, cooling and crewtime requirements of a mission into mass units to compute an estimate of the life support system emplacement cost. Traditionally, ESM has been computed statically, using nominal values for system sizing. However, computation of ESM with static, nominal sizing estimates cannot capture the peak sizing requirements driven by system dynamics. In this paper, a dynamic model for a near-term Mars mission is described. The model is implemented in Matlab/Simulink for the purpose of dynamically computing ESM. This paper provides a general overview of the crew, food, biomass, waste, water and air blocks in the Simulink model. Dynamic simulations of the life support system track mass flow, volume and crewtime needs, as well as power and cooling requirement profiles. The mission's ESM is computed based upon simulation responses. Ultimately, computed ESM values for various system architectures will feed into a (non-derivative) optimization search algorithm to predict parameter combinations that result in reduced objective function values.
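
    For readers unfamiliar with ESM bookkeeping, the sketch below shows the additive form commonly used in the ALS literature: infrastructure requirements are folded into mass via equivalency factors. The factor values here are placeholders, not the paper's mission values.

      # Hedged ESM sketch; equivalency factors below are placeholders.
      def esm(mass_kg, volume_m3, power_kw, cooling_kw, crewtime_h,
              v_eq=10.0, p_eq=250.0, c_eq=60.0, ct_eq=0.5):
          """Equivalent system mass (kg): infrastructure folded into mass."""
          return (mass_kg
                  + volume_m3 * v_eq     # kg per m^3 pressurized volume
                  + power_kw * p_eq      # kg per kW power generation
                  + cooling_kw * c_eq    # kg per kW heat rejection
                  + crewtime_h * ct_eq)  # kg per crew-hour over the mission

      subsystem = esm(mass_kg=500, volume_m3=10, power_kw=2.5,
                      cooling_kw=2.5, crewtime_h=200)
      print(f"ESM = {subsystem:.0f} kg-equivalent")

    The ALS metric is then the ratio of such an ESM computed for an ISS-technology mission to that computed for an ALS-technology mission, and a dynamic simulation replaces the nominal inputs with peak values from the simulated profiles.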

  15. Scalable and Resilient Middleware to Handle Information Exchange during Environment Crisis

    NASA Astrophysics Data System (ADS)

    Tao, R.; Poslad, S.; Moßgraber, J.; Middleton, S.; Hammitzsch, M.

    2012-04-01

    The EU FP7 TRIDEC project focuses on enabling real-time, intelligent information management of collaborative, complex, critical decision processes for earth management. A key challenge is to provide a communication infrastructure that facilitates interoperable environment information services during environment events and crises such as tsunamis and drilling incidents, during which increasing volumes and dimensionality of disparate information sources, both sensor-based and human-based, arise and need to be managed. Such a system needs to support: scalable, distributed messaging; asynchronous messaging; open messaging that handles changing clients, such as new and retired automated systems and human information sources coming online or going offline; flexible data filtering; and heterogeneous access networks (e.g., GSM, WLAN and LAN). In addition, the system needs to be resilient to ICT system failures, degradation and overloads during environment events. There are several system middleware choices for TRIDEC, based upon a Service-Oriented Architecture (SOA), an Event-Driven Architecture (EDA), Cloud Computing, and an Enterprise Service Bus (ESB). In an SOA, everything is a service (e.g. data access, processing and exchange); clients can request on demand or subscribe to services registered by providers; interaction is more often synchronous. In an EDA system, events that represent significant changes in state can be processed simply, as streams, or in more complex ways. Cloud computing is a virtualized, interoperable and elastic resource allocation model. An ESB, a fundamental component for enterprise messaging, supports synchronous and asynchronous message exchange models and has inbuilt resilience against ICT failure. Our middleware proposal is an ESB-based hybrid architecture model: an SOA extension supports more synchronous workflows; the EDA assists the ESB in handling more complex event processing; and Cloud computing can be used to increase and decrease the ESB resources on demand. To reify this hybrid ESB-centric architecture, we will adopt two complementary approaches: an open-source one to improve scalability and resilience, and a commercial one for ultra-fast messaging, with a bridge between the two to support interoperability. In TRIDEC, to manage such a hybrid messaging system, overlay and underlay management techniques will be adopted. The managers (both global and local) will collect, store and update status information (e.g. CPU utilization, free space, number of clients) and balance usage, throughput, and delays to improve resilience and scalability. The expected resilience improvements include dynamic failover, self-healing, pre-emptive load balancing, and bottleneck prediction, while the expected scalability improvements include capacity estimation, an HTTP bridge, and automatic configuration and reconfiguration (e.g. adding or deleting clients and servers).
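
    The decoupling that the EDA/ESB combination provides rests on asynchronous publish/subscribe messaging. The toy Python sketch below is only an in-process illustration of that pattern, not TRIDEC's middleware; all names and values are invented.

      # Toy topic-based publish/subscribe bus (illustrative only).
      from collections import defaultdict
      from queue import Queue

      class MiniBus:
          def __init__(self):
              self.topics = defaultdict(list)     # topic -> subscriber queues

          def subscribe(self, topic):
              q = Queue()
              self.topics[topic].append(q)
              return q                            # client drains its own queue

          def publish(self, topic, message):
              for q in self.topics[topic]:        # asynchronous fan-out
                  q.put(message)

      bus = MiniBus()
      feed = bus.subscribe("sensors/sea-level")
      bus.publish("sensors/sea-level", {"station": "buoy-17", "anomaly_cm": 42})
      print(feed.get())                           # consumer processes later

    Publishers and subscribers never reference each other, so clients can join or leave at run time, which is exactly the open-messaging requirement above.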

  16. Analyses Made to Order: Using Transformation to Rapidly Configure a Multidisciplinary Environment

    NASA Technical Reports Server (NTRS)

    Cole, Bjorn

    2013-01-01

    Aerospace problems are highly multidisciplinary. Four or more major disciplines are involved in analyzing any particular vehicle. Moreover, the choice of implementation technology for various subsystems can lead to a change of leading domain or a reformulation of the driving equations. An excellent example is the change of expertise required to consider aircraft built from composite versus metallic structures, or those propelled by chemical versus electrical thrusters. Another example is the major reconfiguration of handling and stability equations with different control surface configurations (e.g., canards, T-tail vs. four-post tail). Combinatorial problems are also commonplace anytime a major system is to be designed. If there are only 5 attributes of a design to consider, each with 4 different options, this is already 1024 options. Adding just 5 more dimensions to the study explodes the space to over one million. Even generous assumptions, like the idea that only 10% of the combinations are physically feasible, can only contain the problem for so long. To make matters worse, the simple number of combinations is only the beginning. Combining the issue of trade space size with the need to reformulate the design problem for many of the possibilities makes life exponentially more difficult. Advances in software modeling approaches have led to the development of model-driven architecture. This approach uses the transformation of models into inferred models (e.g. inferred execution traces from state machines) or into skeletons for code generation. When this emphasis on transformation is applied to aerospace, it becomes possible to exploit redundancy in the information specified in multiple domain models to build a unified system model. Further, it becomes possible to overcome the combinatorial nature of specifying integrated system behavior by manually combining the equations governing a given component technology. Transformations from a system specification, combined with a system-analysis mapping specification, enable one-click combination of domain analyses. This is a flexibility that has been missing from many engineering codes, which often entangle design specification and physical examination much more than is required to conduct the analysis. This capability has been investigated and cultivated within the DARPA F6 program by a team of JPL and Phoenix Integration engineers building the Adaptable Systems Design and Analysis (ASDA) framework. By embracing system modeling with SysML and the Query-View-Transformation (QVT) language, the ASDA team has been able to build a flexible, easily reconfigurable framework for building up and solving large tradespaces. Examples of application and lessons learned in building the framework are described in this paper. In addition, the motivation is laid out for tool vendors to develop open model description standards while maintaining competitive advantage through proprietary algorithms and approaches. These standards are also compared to the underpinnings of model-driven architecture and the OMG standards of the Meta-Object Facility (MOF), SysML, and QVT.

  17. Towards a Framework for Modeling Space Systems Architectures

    NASA Technical Reports Server (NTRS)

    Shames, Peter; Skipper, Joseph

    2006-01-01

    Topics covered include: 1) Statement of the problem: a) Space system architecture is complex; b) Existing terrestrial approaches must be adapted for space; c) Need a common architecture methodology and information model; d) Need appropriate set of viewpoints. 2) Requirements on a space systems model. 3) Model Based Engineering and Design (MBED) project: a) Evaluated different methods; b) Adapted and utilized RASDS & RM-ODP; c) Identified useful set of viewpoints; d) Did actual model exchanges among selected subset of tools. 4) Lessons learned & future vision.

  18. Critical Dynamics of Gravito-Convective Mixing in Geological Carbon Sequestration

    PubMed Central

    Soltanian, Mohamad Reza; Amooie, Mohammad Amin; Dai, Zhenxue; Cole, David; Moortgat, Joachim

    2016-01-01

    When CO2 is injected in saline aquifers, dissolution causes a local increase in brine density that can cause Rayleigh-Taylor-type gravitational instabilities. Depending on the Rayleigh number, density-driven flow may mix dissolved CO2 throughout the aquifer at fast advective time-scales through convective mixing. Heterogeneity can impact density-driven flow to different degrees. Zones with low effective vertical permeability may suppress fingering and reduce vertical spreading, while potentially increasing transverse mixing. In more complex heterogeneity, arising from the spatial organization of sedimentary facies, finger propagation is reduced in low permeability facies, but may be enhanced through more permeable facies. The connectivity of facies is critical in determining the large-scale transport of CO2-rich brine. We perform high-resolution finite element simulations of advection-diffusion transport of CO2 with a focus on facies-based bimodal heterogeneity. Permeability fields are generated by a Markov Chain approach, which represent facies architecture by commonly observed characteristics such as volume fractions. CO2 dissolution and phase behavior are modeled with the cubic-plus-association equation-of-state. Our results show that the organization of high-permeability facies and their connectivity control the dynamics of gravitationally unstable flow. We discover new flow regimes in both homogeneous and heterogeneous media and present quantitative scaling relations for their temporal evolution. PMID:27808178
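
    For orientation, one common form of the solutal Rayleigh number used in this literature compares buoyant driving to diffusive damping (the exact form varies by study, so treat this as representative rather than the authors' definition):

      \[
        \mathrm{Ra} \;=\; \frac{\Delta\rho \, g \, k \, H}{\phi \, \mu \, D},
      \]

    where Δρ is the density increase from CO2 dissolution, g gravity, k permeability, H layer thickness, φ porosity, μ brine viscosity, and D the effective diffusivity.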

  19. Network analysis of patient flow in two UK acute care hospitals identifies key sub-networks for A&E performance

    PubMed Central

    Stringer, Clive; Beeknoo, Neeraj

    2017-01-01

    The topology of the patient flow network in a hospital is complex, comprising hundreds of overlapping patient journeys, and is a determinant of operational efficiency. To understand the network architecture of patient flow, we performed a data-driven network analysis of patient flow through two acute hospital sites of King’s College Hospital NHS Foundation Trust. Administration databases were queried for all intra-hospital patient transfers in an 18-month period and modelled as a dynamic weighted directed graph. A ‘core’ subnetwork containing only 13–17% of all edges channelled 83–90% of the patient flow, while an ‘ephemeral’ network constituted the remainder. Unsupervised cluster analysis and differential network analysis identified sub-networks where traffic is most associated with A&E performance. Increased flow to clinical decision units was associated with the best A&E performance in both sites. The component analysis also detected a weekend effect on patient transfers which was not associated with performance. We have performed the first data-driven hypothesis-free analysis of patient flow which can enhance understanding of whole healthcare systems. Such analysis can drive transformation in healthcare as it has in industries such as manufacturing. PMID:28968472
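
    The 'core' subnetwork extraction described above can be illustrated by ranking transfer edges by traffic and keeping the smallest set that carries a target share of total flow; the edge weights below are invented for illustration.

      # Sketch of core-subnetwork extraction from weighted transfer counts.
      def core_edges(flow, share=0.85):
          """Smallest set of (src, dst) edges carrying `share` of all flow."""
          total, kept, running = sum(flow.values()), [], 0.0
          for edge, w in sorted(flow.items(), key=lambda kv: -kv[1]):
              kept.append(edge)
              running += w
              if running >= share * total:
                  break
          return kept

      transfers = {("A&E", "CDU"): 900, ("A&E", "Ward1"): 500,
                   ("Ward1", "Ward2"): 80, ("CDU", "Ward2"): 40,
                   ("Ward2", "ICU"): 10}
      core = core_edges(transfers)
      print(core, f"({len(core)} of {len(transfers)} edges)")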

  20. Magma-driven antiform structures in the Afar rift: The Ali Sabieh range, Djibouti

    NASA Astrophysics Data System (ADS)

    Le Gall, Bernard; Daoud, Mohamed Ahmed; Maury, René C.; Rolet, Joël; Guillou, Hervé; Sue, Christian

    2010-06-01

    The Ali Sabieh Range, SE Afar, is an antiform involving Mesozoic sedimentary rocks and synrift volcanics. Previous studies have postulated a tectonic origin for this structure, in either a contractional or extensional regime. New stratigraphic, mapping and structural data demonstrate that large-scale doming took place at an early stage of rifting, in response to a mafic laccolithic intrusion dated between 28 and 20 Ma from new K-Ar age determinations. Our 'laccolith' model is chiefly supported by: (i) the geometry of the intrusion roof, (ii) the recognition of roof pendants in its axial part, and (iii) the mapping relationships between the intrusion, the associated dyke-sill network, and the upper volcanic/volcaniclastic sequences. The laccolith is assumed to have inflated with time, and to have upwardly bent its sedimentary roof rocks. From the architecture of the ˜1 km-thick Mesozoic overburden sequences, ca. 2 km of roof lifting are assumed to have occurred, probably in association with reactivated transverse discontinuities. Computed paleostress tensors indicate that the minimum principal stress axis is consistently horizontal and oriented E-W, with a dominance of extensional versus strike-slip regimes. The Ali Sabieh laccolith is the first regional-scale magma-driven antiform structure reported so far in the Afro-Arabian rift system.

  1. Can clues from evolution unlock the molecular development of the cerebellum?

    PubMed

    Butts, Thomas; Chaplin, Natalie; Wingate, Richard J T

    2011-02-01

    The cerebellum sits at the rostral end of the vertebrate hindbrain and is responsible for sensory and motor integration. Owing to its relatively simple architecture, it is one of the most powerful model systems for studying brain evolution and development. Over the last decade, the combination of molecular fate-mapping techniques in the mouse and experimental studies, both in vitro and in vivo, in mouse and chick has significantly advanced our understanding of cerebellar neurogenesis in space and time. In amniotes, the most numerous cell type in the cerebellum, and indeed the brain, is the cerebellar granule neuron, and these cells are born from a transient secondary proliferative zone, the external granule layer (EGL), where proliferation is driven by sonic hedgehog signalling and causes cerebellar foliation. Recent studies in zebrafish and sharks have shown that while the molecular mechanisms of neurogenesis appear conserved across vertebrates, the EGL as a site of shh-driven transit amplification is not, and it is therefore implicated as a key amniote innovation that facilitated the evolution of the elaborate foliated cerebella found in birds and mammals. Elucidating the molecular mechanisms underlying the origin of the EGL in evolution could have significant impacts on our understanding of the molecular details of cerebellar development.

  2. A direct-to-drive neural data acquisition system.

    PubMed

    Kinney, Justin P; Bernstein, Jacob G; Meyer, Andrew J; Barber, Jessica B; Bolivar, Marti; Newbold, Bryan; Scholvin, Jorg; Moore-Kochlacs, Caroline; Wentz, Christian T; Kopell, Nancy J; Boyden, Edward S

    2015-01-01

    Driven by the increasing channel count of neural probes, there is much effort being directed to creating increasingly scalable electrophysiology data acquisition (DAQ) systems. However, all such systems still rely on personal computers for data storage, and thus are limited by the bandwidth and cost of the computers, especially as the scale of recording increases. Here we present a novel architecture in which a digital processor receives data from an analog-to-digital converter, and writes that data directly to hard drives, without the need for a personal computer to serve as an intermediary in the DAQ process. This minimalist architecture may support exceptionally high data throughput, without incurring costs to support unnecessary hardware and overhead associated with personal computers, thus facilitating scaling of electrophysiological recording in the future.

  3. A direct-to-drive neural data acquisition system

    PubMed Central

    Kinney, Justin P.; Bernstein, Jacob G.; Meyer, Andrew J.; Barber, Jessica B.; Bolivar, Marti; Newbold, Bryan; Scholvin, Jorg; Moore-Kochlacs, Caroline; Wentz, Christian T.; Kopell, Nancy J.; Boyden, Edward S.

    2015-01-01

    Driven by the increasing channel count of neural probes, there is much effort being directed to creating increasingly scalable electrophysiology data acquisition (DAQ) systems. However, all such systems still rely on personal computers for data storage, and thus are limited by the bandwidth and cost of the computers, especially as the scale of recording increases. Here we present a novel architecture in which a digital processor receives data from an analog-to-digital converter, and writes that data directly to hard drives, without the need for a personal computer to serve as an intermediary in the DAQ process. This minimalist architecture may support exceptionally high data throughput, without incurring costs to support unnecessary hardware and overhead associated with personal computers, thus facilitating scaling of electrophysiological recording in the future. PMID:26388740

  4. Software Defined Networking for Next Generation Converged Metro-Access Networks

    NASA Astrophysics Data System (ADS)

    Ruffini, M.; Slyne, F.; Bluemm, C.; Kitsuwan, N.; McGettrick, S.

    2015-12-01

    While the concept of Software Defined Networking (SDN) has seen rapid deployment within the data center community, its adoption in telecommunications networks has progressed more slowly, although the concept has been swiftly adopted by all major telecoms vendors. This paper presents a control plane architecture for SDN-driven converged metro-access networks, developed through the DISCUS European FP7 project. The SDN-based controller architecture was developed in a testbed implementation targeting two main scenarios: fast feeder fiber protection over dual-homed Passive Optical Networks (PONs) and dynamic service provisioning over a multi-wavelength PON. Implementation details and results of the experiment carried out over the second scenario are reported in the paper, showing the potential of SDN in providing assured on-demand services to end-users.

  5. Design of a QoS-controlled ATM-based communications system in chorus

    NASA Astrophysics Data System (ADS)

    Coulson, Geoff; Campbell, Andrew; Robin, Philippe; Blair, Gordon; Papathomas, Michael; Shepherd, Doug

    1995-05-01

    We describe the design of an application platform able to run distributed real-time and multimedia applications alongside conventional UNIX programs. The platform is embedded in a microkernel/PC environment and supported by an ATM-based, QoS-driven communications stack. In particular, we focus on resource-management aspects of the design and deal with CPU scheduling, network resource-management and memory-management issues. An architecture is presented that guarantees QoS levels of both communications and processing with varying degrees of commitment as specified by user-level QoS parameters. The architecture uses admission tests to determine whether or not new activities can be accepted and includes modules to translate user-level QoS parameters into representations usable by the scheduling, network, and memory-management subsystems.
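
    The admission-test idea can be made concrete with a utilization check: a new activity is admitted only if the resulting task set remains schedulable. A minimal sketch, assuming periodic activities under rate-monotonic CPU scheduling with the classic Liu-Layland bound (the platform's actual tests and QoS translations are more elaborate):

    def admissible(activities, candidate):
        # Each activity is (cost, period) in a common time unit.
        tasks = activities + [candidate]
        n = len(tasks)
        utilization = sum(cost / period for cost, period in tasks)
        # Liu-Layland bound: schedulable if U <= n * (2^(1/n) - 1).
        return utilization <= n * (2 ** (1.0 / n) - 1)

    running = [(2, 10), (5, 40)]           # hypothetical existing activities
    print(admissible(running, (10, 50)))   # can the new activity be admitted?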

  6. A self-scaling, distributed information architecture for public health, research, and clinical care.

    PubMed

    McMurry, Andrew J; Gilbert, Clint A; Reis, Ben Y; Chueh, Henry C; Kohane, Isaac S; Mandl, Kenneth D

    2007-01-01

    This study sought to define a scalable architecture to support the National Health Information Network (NHIN). This architecture must concurrently support a wide range of public health, research, and clinical care activities. The architecture fulfils five desiderata: (1) adopt a distributed approach to data storage to protect privacy, (2) enable strong institutional autonomy to engender participation, (3) provide oversight and transparency to ensure patient trust, (4) allow variable levels of access according to investigator needs and institutional policies, (5) define a self-scaling architecture that encourages voluntary regional collaborations that coalesce to form a nationwide network. Our model has been validated by a large-scale, multi-institution study involving seven medical centers for cancer research. It is the basis of one of four open architectures developed under funding from the Office of the National Coordinator of Health Information Technology, fulfilling the biosurveillance use case defined by the American Health Information Community. The model supports broad applicability for regional and national clinical information exchanges. This model shows the feasibility of an architecture wherein the requirements of care providers, investigators, and public health authorities are served by a distributed model that grants autonomy, protects privacy, and promotes participation.

  7. The NASA Integrated Information Technology Architecture

    NASA Technical Reports Server (NTRS)

    Baldridge, Tim

    1997-01-01

    This document defines an Information Technology Architecture for the National Aeronautics and Space Administration (NASA), where Information Technology (IT) refers to the hardware, software, standards, protocols and processes that enable the creation, manipulation, storage, organization and sharing of information. An architecture provides an itemization and definition of these IT structures, a view of the relationship of the structures to each other and, most importantly, an accessible view of the whole. It is a fundamental assumption of this document that a useful, interoperable and affordable IT environment is key to the execution of the core NASA scientific and project competencies and business practices. This Architecture represents the highest level system design and guideline for NASA IT related activities and has been created on the authority of the NASA Chief Information Officer (CIO) and will be maintained under the auspices of that office. It addresses all aspects of general purpose, research, administrative and scientific computing and networking throughout the NASA Agency and is applicable to all NASA administrative offices, projects, field centers and remote sites. Through the establishment of five Objectives and six Principles this Architecture provides a blueprint for all NASA IT service providers: civil service, contractor and outsourcer. The most significant of the Objectives and Principles are the commitment to customer-driven IT implementations and the commitment to a simpler, cost-efficient, standards-based, modular IT infrastructure. In order to ensure that the Architecture is presented and defined in the context of the mission, project and business goals of NASA, this Architecture consists of four layers in which each subsequent layer builds on the previous layer. They are: 1) the Business Architecture: the operational functions of the business, or Enterprise, 2) the Systems Architecture: the specific Enterprise activities within the context of IT systems, 3) the Technical Architecture: a common, vendor-independent framework for design, integration and implementation of IT systems and 4) the Product Architecture: vendor-specific IT solutions. The Systems Architecture is effectively a description of the end-user "requirements". Generalized end-user requirements are discussed and subsequently organized into specific mission and project functions. The Technical Architecture depicts the framework, and relationship, of the specific IT components that enable the end-user functionality as described in the Systems Architecture. The primary components as described in the Technical Architecture are: 1) Applications: Basic Client Component, Object Creation Applications, Collaborative Applications, Object Analysis Applications, 2) Services: Messaging, Information Broker, Collaboration, Distributed Processing, and 3) Infrastructure: Network, Security, Directory, Certificate Management, Enterprise Management and File System. This Architecture also provides specific Implementation Recommendations, the most significant of which is the recognition of IT as core to NASA activities and defines a plan, which is aligned with the NASA strategic planning processes, for keeping the Architecture alive and useful.

  8. Electromagnetic physics models for parallel computing architectures

    DOE PAGES

    Amadio, G.; Ananya, A.; Apostolakis, J.; ...

    2016-11-21

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Finally, the results of preliminary performance evaluation and physics validation are presented as well.

  9. Smooth Particle Hydrodynamics-based Wind Representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prescott, Steven; Smith, Curtis; Hess, Stephen

    2016-12-01

    As a result of the 2011 accident at the Fukushima Dai-ichi NPP and other operational NPP experience, there is an identified need to better characterize and evaluate the potential impacts of externally generated hazards on NPP safety. Due to the ubiquitous occurrence of high winds around the world and the possible extreme magnitude of the hazard that has been observed, the assessment of the impact of the high-winds hazard has been identified as an important activity by both NPP owner-operators and regulatory authorities. However, recent experience obtained from the conduct of high-winds risk assessments indicates that such activities have been both labor-intensive and expensive to perform. Additionally, the existing suite of methods and tools to conduct such assessments (which were developed decades ago) do not make use of modern computational architectures (e.g., parallel processing, object-oriented programming techniques, or simple user interfaces) or methods (e.g., efficient and robust numerical-solution schemes). As a result, the current suite of methods and tools will rapidly become obsolete. Physics-based 3D simulation methods can provide information to assist in the RISMC PRA methodology. This research is intended to determine what benefits SPH methods could bring to high-winds simulations for the purposes of assessing their potential impact on NPP safety. The initial investigation has determined that SPH can simulate key areas of high-wind events with reasonable accuracy, compared to other methods. Some problems, such as simulation voids, need to be addressed, but possible solutions have been identified and will be tested with continued work. This work also demonstrated that SPH simulations can provide a means for simulating debris movement; however, further investigations into the capability to determine the impact of high winds and the impacts of wind-driven debris that lead to SSC failures need to be done. SPH simulations alone would be limited in size and computation time. An advanced method of combining results from grid-based methods with SPH through a data-driven model is proposed. This method could allow for more accurate simulation of particle movement near rigid bodies even with larger SPH particle sizes. If successful, the data-driven model would eliminate the need for an SPH turbulence model and increase the simulation domain size. Continued research beyond the scope of this project will be needed in order to determine the viability of a data-driven model.
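
    At its core, SPH replaces grid cells with kernel-weighted sums over neighboring particles. The sketch below shows the textbook density estimate with a cubic-spline smoothing kernel; it illustrates the method in general, not this project's wind-field implementation:

    import numpy as np

    def cubic_spline_w(r, h):
        # Standard 3D cubic-spline kernel with support radius 2h.
        q = r / h
        sigma = 1.0 / (np.pi * h**3)
        if q < 1.0:
            return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
        if q < 2.0:
            return sigma * 0.25 * (2.0 - q) ** 3
        return 0.0

    def density(positions, masses, h):
        # SPH density estimate: rho_i = sum_j m_j * W(|r_i - r_j|, h).
        n = len(positions)
        rho = np.zeros(n)
        for i in range(n):
            for j in range(n):
                r = np.linalg.norm(positions[i] - positions[j])
                rho[i] += masses[j] * cubic_spline_w(r, h)
        return rho

    pos = np.random.default_rng(1).uniform(0.0, 1.0, size=(50, 3))
    print(density(pos, np.full(50, 0.02), h=0.2)[:5])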

  10. Programming model for distributed intelligent systems

    NASA Technical Reports Server (NTRS)

    Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.

    1988-01-01

    A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.

  11. Executable Architecture Research at Old Dominion University

    NASA Technical Reports Server (NTRS)

    Tolk, Andreas; Shuman, Edwin A.; Garcia, Johnny J.

    2011-01-01

    Executable Architectures allow the evaluation of system architectures not only regarding their static, but also their dynamic behavior. However, the systems engineering community does not agree on a common formal specification of executable architectures. To close this gap, identifying the necessary elements of an executable architecture, a modeling language, and a modeling formalism is the topic of ongoing PhD research. In addition, systems are generally defined and applied in an operational context to provide capabilities and enable missions. To maximize the benefits of executable architectures, a second PhD effort introduces the idea of creating an executable context in addition to the executable architecture. The results move the validation of architectures from the current information domain into the knowledge domain and improve the reliability of such validation efforts. The paper presents research and results of both doctoral research efforts and puts them into a common context of state-of-the-art systems engineering methods supporting more agility.

  12. Space Generic Open Avionics Architecture (SGOAA) standard specification

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1994-01-01

    This standard establishes the Space Generic Open Avionics Architecture (SGOAA). The SGOAA includes a generic functional model, a processing structural model, and an architecture interface model. This standard defines the requirements for applying these models to the development of spacecraft core avionics systems. The purpose of this standard is to provide an umbrella set of requirements for applying the generic architecture models to the design of a specific avionics hardware/software processing system. This standard defines a generic set of system interface points to facilitate identification of critical services and interfaces. It establishes the requirement for applying appropriate low-level detailed implementation standards to those interface points. The generic core avionics functions and processing structural models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.

  13. Resource utilization model for the algorithm to architecture mapping model

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Patel, Rakesh R.

    1993-01-01

    The analytical model for resource utilization and the variable node time and conditional node model for the enhanced ATAMM model for a real-time data flow architecture are presented in this research. The Algorithm To Architecture Mapping Model, ATAMM, is a Petri net based graph theoretic model developed at Old Dominion University, and is capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirement for the execution of an algorithm under node time variation, is useful to expand the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method of detecting the presence of resource limited mode and its subsequent prevention. Graphs with conditional nodes are shown to be reduced to equivalent graphs with time varying nodes and, subsequently, may be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs for the illustration of applicability of the analytical theories.
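
    For a fixed schedule, the resource envelope amounts to counting concurrent node executions over time, from which the maximum number of required resources follows directly. A generic sweep-line sketch of that computation (the node timings are hypothetical; the ATAMM derivation itself works analytically from the graph):

    def resource_envelope(intervals):
        # Max concurrent executions among (start, finish) node intervals.
        events = []
        for start, finish in intervals:
            events.append((start, +1))
            events.append((finish, -1))
        # At equal times, process finishes first so resources are freed.
        events.sort(key=lambda e: (e[0], e[1]))
        active = peak = 0
        for _, delta in events:
            active += delta
            peak = max(peak, active)
        return peak

    nodes = [(0, 4), (1, 3), (2, 6), (5, 7)]   # hypothetical node timings
    print(resource_envelope(nodes))            # -> 3 resources required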

  14. Absolute electrical impedance tomography (aEIT) guided ventilation therapy in critical care patients: simulations and future trends.

    PubMed

    Denaï, Mouloud A; Mahfouf, Mahdi; Mohamad-Samuri, Suzani; Panoutsos, George; Brown, Brian H; Mills, Gary H

    2010-05-01

    Thoracic electrical impedance tomography (EIT) is a noninvasive, radiation-free monitoring technique whose aim is to reconstruct a cross-sectional image of the internal spatial distribution of conductivity from electrical measurements made by injecting small alternating currents via an electrode array placed on the surface of the thorax. The purpose of this paper is to discuss the fundamentals of EIT and demonstrate the principles of mechanical ventilation, lung recruitment, and EIT imaging on a comprehensive physiological model, which combines a model of respiratory mechanics, a model of the human lung absolute resistivity as a function of air content, and a 2-D finite-element mesh of the thorax to simulate EIT image reconstruction during mechanical ventilation. The overall model gives a good understanding of respiratory physiology and EIT monitoring techniques in mechanically ventilated patients. The model proposed here was able to reproduce consistent images of ventilation distribution in simulated acutely injured and collapsed lung conditions. A new advisory system architecture integrating a previously developed data-driven physiological model for continuous and noninvasive predictions of blood gas parameters with the regional lung function data/information generated from absolute EIT (aEIT) is proposed for monitoring and ventilator therapy management of critical care patients.

  15. Implementation and Performance of a GPS/INS Tightly Coupled Assisted PLL Architecture Using MEMS Inertial Sensors

    PubMed Central

    Tawk, Youssef; Tomé, Phillip; Botteron, Cyril; Stebler, Yannick; Farine, Pierre-André

    2014-01-01

    The use of global navigation satellite system receivers for navigation still presents many challenges in urban canyon and indoor environments, where satellite availability is typically reduced and received signals are attenuated. To improve the navigation performance in such environments, several enhancement methods can be implemented. For instance, external aid provided through coupling with other sensors has proven to contribute substantially to enhancing navigation performance and robustness. Within this context, coupling a very simple GPS receiver with an Inertial Navigation System (INS) based on low-cost micro-electro-mechanical systems (MEMS) inertial sensors is considered in this paper. In particular, we propose a GPS/INS Tightly Coupled Assisted PLL (TCAPLL) architecture, and present most of the associated challenges that need to be addressed when dealing with very-low-performance MEMS inertial sensors. In addition, we propose a data monitoring system in charge of checking the quality of the measurement flow in the architecture. The implementation of the TCAPLL is discussed in detail, and its performance under different scenarios is assessed. Finally, the architecture is evaluated through a test campaign using a vehicle that is driven in urban environments, with the purpose of highlighting the pros and cons of combining MEMS inertial sensors with GPS over GPS alone. PMID:24569773

  16. Mobile Agents: A Distributed Voice-Commanded Sensory and Robotic System for Surface EVA Assistance

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Alena, Rick; Crawford, Sekou; Dowding, John; Graham, Jeff; Kaskiris, Charis; Tyree, Kim S.; vanHoof, Ronnie

    2003-01-01

    A model-based, distributed architecture integrates diverse components in a system designed for lunar and planetary surface operations: spacesuit biosensors, cameras, GPS, and a robotic assistant. The system transmits data and assists communication between the extra-vehicular activity (EVA) astronauts, the crew in a local habitat, and a remote mission support team. Software processes ("agents"), implemented in a system called Brahms, run on multiple, mobile platforms, including the spacesuit backpacks, all-terrain vehicles, and robot. These "mobile agents" interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. Different types of agents relate platforms to each other ("proxy agents"), devices to software ("comm agents"), and people to the system ("personal agents"). A state-of-the-art spoken dialogue interface enables people to communicate with their personal agents, supporting a speech-driven navigation and scheduling tool, field observation record, and rover command system. An important aspect of the engineering methodology involves first simulating the entire hardware and software system in Brahms, and then configuring the agents into a runtime system. Design of mobile agent functionality has been based on ethnographic observation of scientists working in Mars analog settings in the High Canadian Arctic on Devon Island and the southeast Utah desert. The Mobile Agents system is developed iteratively in the context of use, with people doing authentic work. This paper provides a brief introduction to the architecture and emphasizes the method of empirical requirements analysis, through which observation, modeling, design, and testing are integrated in simulated EVA operations.

  17. A Distributed Intelligent E-Learning System

    ERIC Educational Resources Information Center

    Kristensen, Terje

    2016-01-01

    An E-learning system based on a multi-agent (MAS) architecture combined with the Dynamic Content Manager (DCM) model of E-learning, is presented. We discuss the benefits of using such a multi-agent architecture. Finally, the MAS architecture is compared with a pure service-oriented architecture (SOA). This MAS architecture may also be used within…

  18. Unique Color Converter Architecture Enabling Phosphor-in-Glass (PiG) Films Suitable for High-Power and High-Luminance Laser-Driven White Lighting.

    PubMed

    Zheng, Peng; Li, Shuxing; Wang, Le; Zhou, Tian-Liang; You, Shihai; Takeda, Takashi; Hirosaki, Naoto; Xie, Rong-Jun

    2018-05-02

    As a next-generation high-power lighting technology, laser lighting has attracted great attention in high-luminance applications. However, thermally robust and highly efficient color converters suitable for high-quality laser lighting are scarce. Despite its versatility, phosphor-in-glass (PiG) has seldom been applied in laser lighting because of its low thermal conductivity. In this work, we develop a unique architecture in which a phosphor-in-glass (PiG) film was directly sintered on a highly thermally conductive sapphire substrate coated by one-dimensional photonic crystals. The designed color converter with the composite architecture exhibits a high internal quantum efficiency close to that of the original phosphor powders and an excellent packaging efficiency up to 90%. Furthermore, the PiG film can even survive 11.2 W mm⁻² blue laser excitation. Combining blue laser diodes with the YAG-PiG-on-sapphire plate, a uniform white light with a high luminance of 845 Mcd m⁻² (luminous flux: 1839 lm), luminous efficacy of 210 lm W⁻¹, and correlated color temperature of 6504 K was obtained. A high color rendering index of 74 was attained by adding a robust orange or red phosphor layer to the architecture. These outstanding properties meet the standards of vehicle regulations, enabling the PiG films with the composite architecture to be applied in automotive lighting or other high-power and high-luminance laser lighting.
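
    As a rough consistency check on the quoted figures, the luminous flux and luminous efficacy together imply an input power of about

        P \approx \frac{\Phi_v}{\eta_v} = \frac{1839\ \mathrm{lm}}{210\ \mathrm{lm\,W^{-1}}} \approx 8.8\ \mathrm{W},

    in line with the multi-watt blue laser excitation described above (the abstract does not specify whether the efficacy is referenced to optical or electrical input power).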

  19. Kinetics and Thermodynamics of Watson-Crick Base Pairing Driven DNA Origami Dimerization.

    PubMed

    Zenk, John; Tuntivate, Chanon; Schulman, Rebecca

    2016-03-16

    We investigate the kinetics and thermodynamics of DNA origami dimerization using flat rectangle origami components and different architectures of Watson-Crick complementary single-stranded DNA ("sticky end") linking strategies. We systematically vary the number of linkers, the length of the sticky ends on the linker, and linker architecture and measure the corresponding yields as well as forward and reverse reaction rate constants through fluorescence quenching assays. Yields were further verified using atomic force microscopy. We calculate values of ΔH° and ΔS° for various interface designs and find nonlinear van't Hoff behavior, best described by two linear equations, suggesting distinct regimes of dimerization between those with and those without well-formed interfaces. We find that self-assembly reactions can be tuned by manipulating the interface architecture without suffering a loss in yield, even when yield is high, ∼75-80%. We show that the second-order forward reaction rate constant (k_on) depends on both linker architecture and number of linkers used, with typical values on the order of 10⁵-10⁶ (M·s)⁻¹, values that are similar to those of bimolecular association of small, complementary DNA strands. The k_on values are generally non-Arrhenius, tending to increase with decreasing temperature. Finally, we use kinetic and thermodynamic information about the optimal linking architecture to extend the system to an infinite, two-component repeating lattice system and show that we can form micron-sized lattices, with well-formed structures up to 8 μm².
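
    For reference, the van't Hoff analysis mentioned above fits the temperature dependence of the dimerization equilibrium constant K to the standard linear form

        \ln K = -\frac{\Delta H^\circ}{RT} + \frac{\Delta S^\circ}{R},

    so behavior best described by two lines corresponds to two distinct (ΔH°, ΔS°) pairs, one on each side of the transition between the well-formed and poorly formed interface regimes.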

  20. Pluripotent stem cell-derived organoids: using principles of developmental biology to grow human tissues in a dish.

    PubMed

    McCauley, Heather A; Wells, James M

    2017-03-15

    Pluripotent stem cell (PSC)-derived organoids are miniature, three-dimensional human tissues generated by the application of developmental biological principles to PSCs in vitro. The approach to generate organoids uses a combination of directed differentiation, morphogenetic processes, and the intrinsically driven self-assembly of cells that mimics organogenesis in the developing embryo. The resulting organoids have remarkable cell type complexity, architecture and function similar to their in vivo counterparts. In the past five years, human PSC-derived organoids with components of all three germ layers have been generated, resulting in the establishment of a new human model system. Here, and in the accompanying poster, we provide an overview of how principles of developmental biology have been essential for generating human organoids in vitro, and how organoids are now being used as a primary research tool to investigate human developmental biology. © 2017. Published by The Company of Biologists Ltd.

  1. Complex modulation using tandem polarization modulators

    NASA Astrophysics Data System (ADS)

    Hasan, Mehedi; Hall, Trevor

    2017-11-01

    A novel photonic technique for implementing frequency up-conversion or complex modulation is proposed. The proposed circuit consists of a sandwich of a quarter-wave plate between two polarization modulators, driven, respectively, by in-phase and quadrature-phase signals. The operation of the circuit is modelled using a transmission matrix method. The theoretical prediction is then validated by simulation using an industry-standard software tool. The intrinsic conversion efficiency of the architecture is improved by 6 dB over a functionally equivalent design based on dual parallel Mach-Zehnder modulators. Non-ideal scenarios such as imperfect alignment of the optical components and power imbalances and phase errors in the electric drive signals are also analysed. Because light travels along one physical path, the proposed design can be implemented using discrete components with greater control of relative optical path length differences. The circuit can further be integrated in any material platform that offers electro-optic polarization modulators.

  2. A vibration-based health monitoring program for a large and seismically vulnerable masonry dome

    NASA Astrophysics Data System (ADS)

    Pecorelli, M. L.; Ceravolo, R.; De Lucia, G.; Epicoco, R.

    2017-05-01

    Vibration-based health monitoring of monumental structures must rely on efficient and, as far as possible, automatic modal analysis procedures. Relatively low excitation energy provided by traffic, wind and other sources is usually sufficient to detect structural changes, as those produced by earthquakes and extreme events. Above all, in-operation modal analysis is a non-invasive diagnostic technique that can support optimal strategies for the preservation of architectural heritage, especially if complemented by model-driven procedures. In this paper, the preliminary steps towards a fully automated vibration-based monitoring of the world’s largest masonry oval dome (internal axes of 37.23 by 24.89 m) are presented. More specifically, the paper reports on signal treatment operations conducted to set up the permanent dynamic monitoring system of the dome and to realise a robust automatic identification procedure. Preliminary considerations on the effects of temperature on dynamic parameters are finally reported.
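
    A minimal sketch of the kind of automated identification step such a monitoring system performs: estimate a power spectral density from ambient vibration data and pick resonance peaks as candidate natural frequencies. Sampling rate, record length, and thresholds here are illustrative assumptions; operational systems use full multi-channel modal analysis:

    import numpy as np
    from scipy.signal import welch, find_peaks

    FS = 100.0  # assumed sampling rate, Hz

    def natural_frequencies(acceleration, fs=FS, prominence_db=5.0):
        # Welch PSD of one ambient-vibration channel, then peak picking.
        f, pxx = welch(acceleration, fs=fs, nperseg=4096)
        peaks, _ = find_peaks(10 * np.log10(pxx), prominence=prominence_db)
        return f[peaks]

    # Synthetic response: two lightly excited modes plus broadband noise.
    t = np.arange(0, 600, 1 / FS)
    rng = np.random.default_rng(2)
    accel = (np.sin(2 * np.pi * 1.7 * t) + 0.5 * np.sin(2 * np.pi * 3.2 * t)
             + 0.3 * rng.standard_normal(t.size))
    print(natural_frequencies(accel))   # expect peaks near 1.7 and 3.2 Hz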

  3. Investigating dynamical information transfer in the brain following a TMS pulse: Insights from structural architecture.

    PubMed

    Amico, Enrico; Van Mierlo, Pieter; Marinazzo, Daniele; Laureys, Steven

    2015-01-01

    Transcranial magnetic stimulation (TMS) has been used for more than 20 years to investigate connectivity and plasticity in the human cortex. By combining TMS with high-density electroencephalography (hd-EEG), one can stimulate any cortical area and measure the effects produced by this perturbation in the rest of the cerebral cortex. The purpose of this paper is to investigate changes of information flow in the brain after TMS from a functional and structural perspective, using multimodal modeling of source-reconstructed TMS/hd-EEG recordings and DTI tractography. We show how the brain dynamics induced by TMS are constrained and driven by the brain's structure, at different spatial and temporal scales, especially when considering cross-frequency interactions. These results shed light on the function-structure organization of the brain network at the global level, and on the huge variety of information contained in it.

  4. RE-PLAN: An Extensible Software Architecture to Facilitate Disaster Response Planning

    PubMed Central

    O’Neill, Martin; Mikler, Armin R.; Indrakanti, Saratchandra; Tiwari, Chetan; Jimenez, Tamara

    2014-01-01

    Computational tools are needed to make data-driven disaster mitigation planning accessible to planners and policymakers without the need for programming or GIS expertise. To address this problem, we have created modules to facilitate quantitative analyses pertinent to a variety of different disaster scenarios. These modules, which comprise the REsponse PLan ANalyzer (RE-PLAN) framework, may be used to create tools for specific disaster scenarios that allow planners to harness large amounts of disparate data and execute computational models through a point-and-click interface. Bio-E, a user-friendly tool built using this framework, was designed to develop and analyze the feasibility of ad hoc clinics for treating populations following a biological emergency event. In this article, the design and implementation of the RE-PLAN framework are described, and the functionality of the modules used in the Bio-E biological emergency mitigation tool are demonstrated. PMID:25419503

  5. Influence of Chirality in Ordered Block Copolymer Phases

    NASA Astrophysics Data System (ADS)

    Prasad, Ishan; Grason, Gregory

    2015-03-01

    Block copolymers are known to assemble into a rich spectrum of ordered phases, with many complex phases driven by asymmetry in copolymer architecture. Despite decades of study, the influence of intrinsic chirality on equilibrium mesophase assembly of block copolymers is not well understood and largely unexplored. Self-consistent field theory has played a major role in the prediction of physical properties of polymeric systems. Only recently, a polar orientational self-consistent field (oSCF) approach was adopted to model chiral BCP having a thermodynamic preference for cholesteric ordering in chiral segments. We implement oSCF theory for chiral nematic copolymers, where segment orientations are characterized by quadrupolar chiral interactions, and focus our study on the thermodynamic stability of bi-continuous network morphologies and the transfer of molecular chirality to the mesoscale chirality of networks. Unique photonic properties observed in butterfly wings have been attributed to the presence of chiral single-gyroid networks, which has made them an attractive target for chiral metamaterial design.

  6. Stimulation-Based Control of Dynamic Brain Networks

    PubMed Central

    Pasqualetti, Fabio; Gu, Shi; Cieslak, Matthew

    2016-01-01

    The ability to modulate brain states using targeted stimulation is increasingly being employed to treat neurological disorders and to enhance human performance. Despite the growing interest in brain stimulation as a form of neuromodulation, much remains unknown about the network-level impact of these focal perturbations. To study the system-wide impact of regional stimulation, we employ a data-driven computational model of nonlinear brain dynamics to systematically explore the effects of targeted stimulation. Validating predictions from network control theory, we uncover the relationship between regional controllability and the focal versus global impact of stimulation, and we relate these findings to differences in the underlying network architecture. Finally, by mapping brain regions to cognitive systems, we observe that the default mode system imparts large global change despite being highly constrained by structural connectivity. This work forms an important step towards the development of personalized stimulation protocols for medical treatment or performance enhancement. PMID:27611328
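
    In the network control framework validated here, regional controllability is commonly quantified on a linear surrogate of the dynamics, x(k+1) = A x(k) + B u(k), through the controllability Gramian. A minimal sketch of that computation (the random matrix stands in for a structural connectome; the paper's own model is nonlinear and data-driven):

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    def average_controllability(A, node):
        # Trace of the infinite-horizon controllability Gramian
        # W = sum_k A^k B B^T (A^T)^k when only `node` receives input.
        n = A.shape[0]
        B = np.zeros((n, 1))
        B[node, 0] = 1.0
        W = solve_discrete_lyapunov(A, B @ B.T)  # solves A W A^T - W = -B B^T
        return np.trace(W)

    rng = np.random.default_rng(3)
    adj = rng.uniform(0.0, 1.0, size=(10, 10))                 # stand-in connectome
    A = adj / (1.05 * np.max(np.abs(np.linalg.eigvals(adj))))  # scale to stability
    print([round(average_controllability(A, i), 3) for i in range(10)])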

  7. Object localization in handheld thermal images for fireground understanding

    NASA Astrophysics Data System (ADS)

    Vandecasteele, Florian; Merci, Bart; Jalalvand, Azarakhsh; Verstockt, Steven

    2017-05-01

    Despite the broad application of handheld thermal imaging cameras in firefighting, their usage is mostly limited to subjective interpretation by the person carrying the device. As remedies to this limitation, object localization and classification mechanisms could assist fireground understanding and help with the automated localization, characterization and spatio-temporal (spreading) analysis of the fire. An automated understanding of thermal images can enrich conventional knowledge-based firefighting techniques by providing information from data- and sensing-driven approaches. In this work, transfer learning is applied to multi-labeling convolutional neural network architectures for object localization and recognition in monocular visual, infrared and multispectral dynamic images. Furthermore, the possibility of analyzing fire scene images is studied and their current limitations are discussed. Finally, the understanding of the room configuration (i.e., object locations) for indoor localization in reduced visibility environments and the linking with Building Information Models (BIM) are investigated.

  8. Modeling Operations Costs for Human Exploration Architectures

    NASA Technical Reports Server (NTRS)

    Shishko, Robert

    2013-01-01

    Operations and support (O&S) costs for human spaceflight have not received the same attention in the cost estimating community as have development costs. This is unfortunate as O&S costs typically comprise a majority of life-cycle costs (LCC) in such programs as the International Space Station (ISS) and the now-cancelled Constellation Program. Recognizing this, the Constellation Program and NASA HQs supported the development of an O&S cost model specifically for human spaceflight. This model, known as the Exploration Architectures Operations Cost Model (ExAOCM), provided the operations cost estimates for a variety of alternative human missions to the moon, Mars, and Near-Earth Objects (NEOs) in architectural studies. ExAOCM is philosophically based on the DoD Architecture Framework (DoDAF) concepts of operational nodes, systems, operational functions, and milestones. This paper presents some of the historical background surrounding the development of the model, and discusses the underlying structure, its unusual user interface, and lastly, previous examples of its use in the aforementioned architectural studies.

  9. Sensitivity Analysis of a Cognitive Architecture for the Cultural Geography Model

    DTIC Science & Technology

    2011-12-01

  10. Capability-Based Modeling Methodology: A Fleet-First Approach to Architecture

    DTIC Science & Technology

    2014-02-01

    reconnaissance (ISR) aircraft, or unmanned systems. Accordingly, a mission architecture used to model SAG operations for a given Fleet unit should include all...would use an ISR aircraft to increase fidelity of a targeting solution; another mission thread to show how unmanned systems can augment targeting... unmanned systems. Therefore, an architect can generate, from a comprehensive SAG mission architecture, individual mission threads that model how a SAG

  11. Upper and Lower Limb Muscle Architecture of a 104 Year-Old Cadaver

    PubMed Central

    Infantolino, Benjamin

    2016-01-01

    Muscle architecture is an important component of typical musculoskeletal models. Previous studies of human muscle architecture have focused on a single joint, two adjacent joints, or an entire limb. To date, no study has presented muscle architecture for the upper and lower limbs of a single cadaver. Additionally, muscle architectural parameters from elderly cadavers are lacking, making it difficult to accurately model elderly populations. Therefore, the purpose of this study was to present muscle architecture of the upper and lower limbs of a 104-year-old female cadaver. The major muscles of the upper and lower limbs were removed and the musculotendon mass, tendon mass, musculotendon length, tendon length, pennation angle, optimal fascicle length, physiological cross-sectional area, and tendon cross-sectional area were determined for each muscle. Data from this complete cadaver are presented in table format. The data from this study can be used to construct a musculoskeletal model of a specific individual who was ambulatory, something which has not been possible to date. This should increase the accuracy of the model output as the model will be representing a specific individual, not a synthesis of measurements from multiple individuals. Additionally, an elderly individual can be modeled, which will provide insight into muscle function as we age. PMID:28033339

  12. Upper and Lower Limb Muscle Architecture of a 104 Year-Old Cadaver.

    PubMed

    Ruggiero, Marissa; Cless, Daniel; Infantolino, Benjamin

    2016-01-01

    Muscle architecture is an important component of typical musculoskeletal models. Previous studies of human muscle architecture have focused on a single joint, two adjacent joints, or an entire limb. To date, no study has presented muscle architecture for the upper and lower limbs of a single cadaver. Additionally, muscle architectural parameters from elderly cadavers are lacking, making it difficult to accurately model elderly populations. Therefore, the purpose of this study was to present muscle architecture of the upper and lower limbs of a 104-year-old female cadaver. The major muscles of the upper and lower limbs were removed and the musculotendon mass, tendon mass, musculotendon length, tendon length, pennation angle, optimal fascicle length, physiological cross-sectional area, and tendon cross-sectional area were determined for each muscle. Data from this complete cadaver are presented in table format. The data from this study can be used to construct a musculoskeletal model of a specific individual who was ambulatory, something which has not been possible to date. This should increase the accuracy of the model output as the model will be representing a specific individual, not a synthesis of measurements from multiple individuals. Additionally, an elderly individual can be modeled, which will provide insight into muscle function as we age.

  13. Event management for large scale event-driven digital hardware spiking neural networks.

    PubMed

    Caron, Louis-Charles; D'Haene, Michiel; Mailhot, Frédéric; Schrauwen, Benjamin; Rouat, Jean

    2013-09-01

    The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation and digital hardware neuromorphic systems get a lot of attention. Despite the popularity of event-driven SNNs in software, very few digital hardware architectures are found. This is because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large scale digital hardware event-driven SNNs. The scaling is linear for memory, logarithmic for logic resources and constant for processing time. The use of the structured heap queue is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and a SNN of 65,536 neurons and 513,184 synapses. Events can be processed at the rate of 1 every 7 clock cycles and a 406×158 pixel image is segmented in 200 ms. Copyright © 2013 Elsevier Ltd. All rights reserved.
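
    The role the structured heap queue plays is easy to state in software terms: event-driven SNN simulation keeps every pending spike in a priority queue keyed by delivery time and always processes the earliest one. A toy sketch with an ordinary binary heap (network and parameters hypothetical; the paper's contribution is a pipelined hardware equivalent of this data structure):

    import heapq

    synapses = {0: [(1, 1.0, 0.6)],   # pre -> [(post, delay, weight)]
                1: [(2, 2.0, 0.9)],
                2: []}
    potential = {n: 0.0 for n in synapses}
    THRESHOLD = 0.5

    events = [(0.0, 0, 1.0)]          # (delivery_time, target, weight): seed spike
    while events:
        t, neuron, w = heapq.heappop(events)   # earliest pending event first
        potential[neuron] += w
        if potential[neuron] >= THRESHOLD:     # integrate-and-fire, no leak
            potential[neuron] = 0.0
            print(f"neuron {neuron} spiked at t={t:.1f}")
            for post, delay, weight in synapses[neuron]:
                heapq.heappush(events, (t + delay, post, weight))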

  14. Critical Technology Determination for Future Human Space Flight

    NASA Technical Reports Server (NTRS)

    Mercer, Carolyn R.; Vangen, Scott D.; Williams-Byrd, Julie A.; Stecklein, Jonette M.; Alexander, Leslie; Rahman, Shamim A.; Rosenthal, Matthew; Wiley, Dianne S.; Davison, Stephan C.; Korsmeyer, David J.

    2012-01-01

    As the National Aeronautics and Space Administration (NASA) prepares to extend human presence throughout the solar system, technical capabilities must be developed to enable long duration flights to destinations such as near Earth asteroids, Mars, and extended stays on the Moon. As part of the NASA Human Spaceflight Architecture Team, a Technology Development Assessment Team has identified a suite of critical technologies needed to support this broad range of missions. Dialog between mission planners, vehicle developers, and technologists was used to identify a minimum but sufficient set of technologies, noting that needs are created by specific mission architecture requirements, yet specific designs are enabled by technologies. Further consideration was given to the re-use of underlying technologies to cover multiple missions to effectively use scarce resources. This suite of critical technologies is expected to provide the needed base capability to enable a variety of possible destinations and missions. This paper describes the methodology used to provide an architecture-driven technology development assessment (technology pull), including technology advancement needs identified by trade studies encompassing a spectrum of flight elements and destination design reference missions.

  15. Application Driven Self-Assembly of Discrete, Three-Dimensional Architectures in Water.

    PubMed

    Taylor, Lauren; Riddell, Imogen; Smulders, Maarten M J

    2018-06-25

    In this review we discuss the recent advances in the construction of discrete, self-assembled architectures in water, a field that has gained significant interest in recent years because of the wide range of applications that arises from their well-defined 3D structure. We jointly discuss the efforts undertaken by supramolecular chemists and biotechnologists who previously worked independently to overcome discipline-specific challenges associated with construction of assemblies from synthetic and bio-derived components, respectively. We propose that going forward a more interdisciplinary research approach will expedite development of both synthetic and bio-derived complexes with real-world applications that exploit the benefits of compartmentalisation. In support of this, we summarise advances made in the development of discrete, water-soluble architectures, paying particular attention to their current and prospective applications. We also highlight key areas where understanding and methodologies developed in the field of synthetic supramolecular chemistry can be integrated into the field of biotechnology and vice versa, in anticipation that this will yield advances not possible from either field alone. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Critical Technology Determination for Future Human Space Flight

    NASA Technical Reports Server (NTRS)

    Mercer, Carolyn R.; Vangen, Scott D.; Williams-Byrd, Julie A.; Stecklein, Jonette M.; Rahman, Shamim A.; Rosenthal, Matthew E.; Hornyak, David M.; Alexander, Leslie; Korsmeyer, David J.; Tu, Eugene L.

    2012-01-01

    As the National Aeronautics and Space Administration (NASA) prepares to extend human presence throughout the solar system, technical capabilities must be developed to enable long duration flights to destinations such as near Earth asteroids, Mars, and extended stays on the Moon. As part of the NASA Human Spaceflight Architecture Team, a Technology Development Assessment Team has identified a suite of critical technologies needed to support this broad range of missions. Dialog between mission planners, vehicle developers, and technologists was used to identify a minimum but sufficient set of technologies, noting that needs are created by specific mission architecture requirements, yet specific designs are enabled by technologies. Further consideration was given to the re-use of underlying technologies to cover multiple missions to effectively use scarce resources. This suite of critical technologies is expected to provide the needed base capability to enable a variety of possible destinations and missions. This paper describes the methodology used to provide an architecture-driven technology development assessment ("technology pull"), including technology advancement needs identified by trade studies encompassing a spectrum of flight elements and destination design reference missions.

  17. Pre-PDK block-level PPAC assessment of technology options for sub-7nm high-performance logic

    NASA Astrophysics Data System (ADS)

    Liebmann, L.; Northrop, G.; Facchini, M.; Riviere Cazaux, L.; Baum, Z.; Nakamoto, N.; Sun, K.; Chanemougame, D.; Han, G.; Gerousis, V.

    2018-03-01

    This paper describes a rigorous yet flexible standard cell place-and-route flow that is used to quantify block-level power, performance, and area trade-offs driven by two unique cell architectures and their associated design rule differences. The two architectures examined in this paper differ primarily in their use of different power-distribution networks to achieve the desired circuit performance for high-performance logic designs. The paper shows the importance of incorporating block-level routability experiments in the early phases of design-technology co-optimization by reviewing a series of routing trials that explore different aspects of the technology definition. Since the electrical and physical parameters leading to critical process assumptions and design rules are unique to specific integration schemes and design objectives, it is understood that the goal of this work is not to promote one cell architecture over another, but rather to convey the importance of exploring critical trade-offs long before the process details of the technology node are finalized to a point where a process design kit can be published.

  18. Logs Analysis of Adapted Pedagogical Scenarios Generated by a Simulation Serious Game Architecture

    ERIC Educational Resources Information Center

    Callies, Sophie; Gravel, Mathieu; Beaudry, Eric; Basque, Josianne

    2017-01-01

    This paper presents an architecture designed for simulation serious games, which automatically generates game-based scenarios adapted to learner's learning progression. We present three central modules of the architecture: (1) the learner model, (2) the adaptation module and (3) the logs module. The learner model estimates the progression of the…

  19. A Concept Transformation Learning Model for Architectural Design Learning Process

    ERIC Educational Resources Information Center

    Wu, Yun-Wu; Weng, Kuo-Hua; Young, Li-Ming

    2016-01-01

    Generally, in the foundation course of architectural design, much emphasis is placed on teaching of the basic design skills without focusing on teaching students to apply the basic design concepts in their architectural designs or promoting students' own creativity. Therefore, this study aims to propose a concept transformation learning model to…

  20. Summary Report of Working Group 2: Computation

    NASA Astrophysics Data System (ADS)

    Stoltz, P. H.; Tsung, R. S.

    2009-01-01

    The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time by orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors, that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite-element particle-in-cell (PIC) code, a many-order-of-magnitude speedup, and details of porting the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one-trillion-particle simulations, a sustained performance of 0.3 petaflops, and an eightfold speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser-wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using finite difference and finite element approaches.
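
    For context on the boosted-frame algorithm mentioned above: the commonly cited estimate for the reduction in the number of required time steps when a laser-plasma stage is simulated in a frame with relativistic factor γ is of order

        S \sim (1 + \beta)^2 \gamma^2 \approx 4\gamma^2 \quad (\beta \approx 1),

    which is how orders-of-magnitude savings arise for multi-GeV wakefield stages.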
